Numerical Methods and Programming, 2001, Vol. 2

UDC 517.98

CONDITIONS OF SOURCEWISE REPRESENTATION AND CONVERGENCE RATE OF METHODS FOR SOLVING ILL-POSED OPERATOR EQUATIONS. PART II

A. B. Bakushinsky 1 and M. Yu. Kokurin 2

We outline recent results on the rate of convergence of iterative regularization methods for nonlinear ill-posed operator equations in Hilbert and Banach spaces. Special attention is paid to the necessity of sourcewise representation conditions for the power estimates of the rate of convergence.
1. Introduction. This paper continues the survey [1] of the authors' recent results on stable methods for solving ill-posed operator equations. Let $F: X_1 \to X_2$ be a nonlinear operator, where $X_1$ and $X_2$ are complex Banach spaces. Throughout this part of the paper, our interest is focused on nonlinear ill-posed equations

$$F(x) = 0, \quad x \in X_1. \quad (1.1)$$

Suppose (1.1) possesses a solution $x^*$, which may not be unique. Let the operator $F(x)$ be twice Gâteaux differentiable in a neighborhood of the solution $x^*$. We do not impose any assumptions on the existence of a continuous inverse of the linear operator $F'(x)$ in a neighborhood of $x^*$. Equations of this type often arise in mathematical modeling when solving various inverse problems in the natural sciences (see [2--7] for examples). Numerous applications of such equations motivate a growing interest in the numerical analysis of equations (1.1). Under the above conditions, (1.1) is an ill-posed equation [2--6] since, in general, the solution $x^*$ does not depend continuously on small variations of the operator $F$. In other words, a small perturbation of $F$ can result in considerable changes in $x^*$, or can even turn the original equation into an inconsistent problem. These circumstances cause significant difficulties in the practical solution of applied ill-posed problems of type (1.1) by traditional methods of computational mathematics. The needs of practical ill-posed problems have initiated the development of special regularization methods that allow one to obtain (by specifying an approximate operator $\tilde F$ and a level of errors $\delta$) an approximation of $x^*$ tending to $x^*$ as $\delta \to 0$.
Very often it is convenient to construct regularization methods for equations (1.1) by the following scheme (see [1, 5, 6]). Let $\mathcal{F}$ be a class of operators $F: X_1 \to X_2$ that contains both the exact and the approximate (noisy) operators in (1.1). At the first stage it is assumed that the original operator $F$ is given without errors, and a parametric family of mappings $\omega_\alpha: \mathcal{F} \to X_1$ is constructed such that $\lim_{\alpha \to 0} \omega_\alpha(F) = x^*$ for all $F \in \mathcal{F}$. The mapping $\omega_\alpha$ transforms each operator $F \in \mathcal{F}$ and a regularization parameter $\alpha \in (0, \alpha_0]$ into an approximate solution $x_\alpha = \omega_\alpha(F)$ of (1.1). At the second stage, instead of the exact operator $F$, an approximate operator $\tilde F \in \mathcal{F}$ and an estimate $\delta$ of the level of errors in a suitable metric are supposed to be specified. Thereupon, a dependence $\alpha = \alpha(\delta)$, called the rule of the choice of the regularization parameter, provides elements $x^\delta_{\alpha(\delta)} = \omega_{\alpha(\delta)}(\tilde F)$ such that $\lim_{\delta \to 0} x^\delta_{\alpha(\delta)} = x^*$. By the last equality, the element $x^\delta_{\alpha(\delta)}$ can be considered as a desired approximation of the exact solution $x^*$ adequate to the noisy operator $\tilde F$. The algorithm that transforms an approximate operator $\tilde F$ into the approximation $\omega_{\alpha(\delta)}(\tilde F)$ of the element $x^*$ is called a regularization algorithm for (1.1).
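In computational terms, the scheme above is an interface: a family $\omega_\alpha$ plus a parameter-choice rule $\alpha(\delta)$. The following minimal Python sketch (our illustration, not part of the original text; the names omega and alpha_rule and the power-type rule are assumptions) shows how the two stages combine into a regularization algorithm.

```python
# A minimal sketch of the two-stage scheme: the caller supplies a family
# omega(F, alpha) -> x_alpha and a parameter-choice rule alpha_rule(delta).
def regularized_solution(F_noisy, delta, omega, alpha_rule):
    """Return x^delta_{alpha(delta)} = omega_{alpha(delta)}(F_noisy)."""
    alpha = alpha_rule(delta)      # rule of the choice of the regularization parameter
    return omega(F_noisy, alpha)   # approximate solution adequate to the noisy operator

# One possible a priori power-type rule (illustrative only).
alpha_rule = lambda delta: delta ** (2.0 / 3.0)
```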
A general approach to constructing iterative methods for nonlinear operator equations is provided by the linearization formalism: given a current iterate $x_n$, one can try to obtain the next iterate $x = x_{n+1}$ as a solution to the linearized equation

$$F(x_n) + F'(x_n)(x - x_n) = 0, \quad x \in X_1. \quad (1.2)$$

When the operator $F'(x_n)$ possesses a continuous inverse, problem (1.2) is well-posed. Therefore, the next iterate $x_{n+1}$ can generally be obtained from (1.2) with the use of classical methods involving finite-dimensional approximations of spaces and operators. In this case, (1.2) leads to the well-known Newton (Gauss--Newton) method.

1 Institute of System Analysis, Russian Academy of Sciences, 60th October Street 9, 117312 Moscow, Russian Federation, e-mail: bakush@isa.ru
2 Mari State University, Lenin Avenue 1, 424001 Yoshkar-Ola, Russian Federation, e-mail: kokurin@marsu.ru
(c) Research Computing Center, Moscow State University, 119899 Moscow, Russian Federation
When $R(F'(x_n))$ is not all of $X_2$, equation (1.2) is ill-posed and, hence, the direct computation of $x_{n+1}$ by discretization methods may lead to difficulties. The application of appropriate regularization procedures for the approximate computation of $x_{n+1}$ from (1.2) enables us to overcome these difficulties. In the preceding works [8--10], a similar approach was developed for nonlinear equations (1.1) with an operator $F$ acting from a Hilbert space $X_1$ into another Hilbert space $X_2$. Applying the regularization procedure from [6, Section 4.2; 11, Ch. 2, Section 3] to the linear equation (1.2) and denoting the resulting approximate solution by $x_{n+1}$, we arrive at the following class of iterative methods for equations (1.1) in Hilbert spaces:

$$x_{n+1} = \xi - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n) \bigl(F(x_n) - F'(x_n)(x_n - \xi)\bigr). \quad (1.3)$$

In (1.3), $\xi \in X_1$ is an initial guess for $x_{n+1}$, and the regularization parameter $\alpha_n > 0$ controls the accuracy of solving the linearized equation (1.2). In other words, if (1.2) possesses a nonempty solution set, then, with $x_n$ fixed, the right-hand side of (1.3) converges to a solution of (1.2) as $\alpha_n \to 0$. The function of operators in (1.3) can formally be defined by using the spectral decomposition of the selfadjoint operator $F'^*(x_n)F'(x_n)$ [12, Ch. VII]. Specifying the generating function $\Theta(\lambda, \alpha)$ ($\lambda \in \mathbb{C}$, $\alpha > 0$) in (1.3) subject to nonrestrictive additional conditions, we obtain particular iterative algorithms for (1.1) (see [6, 8--10]). An extension of the above-mentioned procedure from [6, Section 4.2; 11, Ch. 2, Section 3] to linear operator equations in Banach spaces $X_1 = X_2 = X$ was derived in [6, Section 4.5]. In a slightly generalized form, the algorithm from [6, Section 4.5], when applied to (1.2), results in the family of iterative methods for (1.1)

$$x_{n+1} = \xi - \Theta\bigl(F'(x_n), \alpha_n\bigr) \bigl(F(x_n) - F'(x_n)(x_n - \xi)\bigr) \quad (1.4)$$

with $\xi \in X$ and $\alpha_n > 0$ as in (1.3). For functions of operators in Banach spaces we use the Riesz--Dunford functional calculus [12, Ch. XI]. Let $\sigma(A)$ be the spectrum and $\rho(A) = \mathbb{C} \setminus \sigma(A)$ the resolvent set of an operator $A \in L(X)$. Let $E$ be the identity operator in $X$. Denote by $R(\lambda, A) = (\lambda E - A)^{-1}$ the resolvent of $A$ at $\lambda \in \mathbb{C}$. Suppose $\varphi(\lambda)$ is an analytic function of the spectral variable $\lambda$ on an open set $S \supset \sigma(A)$. Then the function $\varphi(A)$ of the operator $A$ can be defined by the Riesz--Dunford formula

$$\varphi(A) = \frac{1}{2\pi i} \int_\gamma \varphi(\lambda) R(\lambda, A)\, d\lambda, \quad (1.5)$$

where $\gamma \subset S$ is a positively oriented contour surrounding the spectrum $\sigma(A)$.
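In finite dimensions, (1.5) can be evaluated directly by quadrature over a discretized contour. The sketch below (our illustration; the circular contour and the trapezoid rule are assumptions made for simplicity) cross-checks the contour value of $\varphi(A) = e^A$ against scipy's matrix exponential.

```python
# Numerical Riesz--Dunford formula (1.5) for a small matrix, assuming phi is
# analytic inside the chosen circular contour.
import numpy as np
from scipy.linalg import expm

def riesz_dunford(phi, A, center=0.0, radius=2.0, nodes=400):
    n = A.shape[0]
    E = np.eye(n)
    out = np.zeros((n, n), dtype=complex)
    theta = 2 * np.pi * np.arange(nodes) / nodes
    lam = center + radius * np.exp(1j * theta)                      # contour points
    dlam = 1j * radius * np.exp(1j * theta) * (2 * np.pi / nodes)   # d(lambda)
    for l, dl in zip(lam, dlam):
        out += phi(l) * np.linalg.solve(l * E - A, E) * dl          # phi(l) R(l, A) dl
    return out / (2j * np.pi)

A = np.array([[0.5, 0.2], [0.0, 0.3]])        # spectrum {0.5, 0.3} inside the contour
print(np.allclose(riesz_dunford(np.exp, A), expm(A)))   # True
```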
In this paper we study classes of solutions $x^*$ of (1.1) such that procedures (1.3) and (1.4) converge with the power estimate

$$\|x_n - x^*\|_{X_1} \le c\, \alpha_n^p, \quad c, p > 0. \quad (1.6)$$

In what follows, $\|\cdot\|_X$ denotes the norm in a Banach space $X$; $R(A) = \{y \in X_2 : y = Ax,\ x \in X_1\}$ is the range of an operator $A \in L(X_1, X_2)$; $A^*$ is the adjoint of $A$. The way to formalize the requirements on a solution $x^*$ that guarantee estimate (1.6) is to impose a sourcewise representation condition of the form

$$x^* - \xi \in R\Bigl(\bigl(F'^*(x^*)F'(x^*)\bigr)^p\Bigr) \quad (1.7)$$

in the case of Hilbert spaces $X_1, X_2$, or

$$x^* - \xi \in R\bigl(F'(x^*)^p\bigr) \quad (1.8)$$

when $X_1 = X_2 = X$ are Banach spaces. The main result on iterations (1.3) and (1.4) is that conditions (1.7) and (1.8) are sufficient for (1.6) and are "very close" to necessary for this estimate. In other words, relations (1.7) and (1.8) yield a highly accurate description of the class of possible solutions of (1.1) for which the power estimate (1.6) holds.
The paper is organized as follows. In Section 2 we present a unified approach to studying the convergence of the class of regularization methods (1.3). Specifically, it is shown that source condition (1.7) is sufficient for these methods to converge at a power rate. In Section 3 we prove that representation (1.7) is actually very close to being necessary for (1.6). These results have been obtained in cooperation with N. Yusoupova. In Section 4 we present an extension of scheme (1.3) to the case of Banach spaces $X_1 = X_2$. The main result of this section is that the sourcewise representation (1.8) is sufficient for iterations (1.4) to converge with estimate (1.6). Finally, in Section 5 it is shown that (1.8) is also nearly necessary for (1.6). In Section 6 the previous results are applied to a number of regularization procedures in Banach spaces.

This survey is based on results partially published in [8--10, 20--25].
2. Estimates for the rate of convergence of iterative methods for nonlinear equations in Hilbert spaces. In this section we present recent results on the convergence of iterative methods (1.3) for solving nonlinear equations (1.1). It is assumed that the operator $F$ acts from a Hilbert space $X_1$ into a Hilbert space $X_2$, where $X_1$ and $X_2$ are complex Hilbert spaces. Suppose the Gâteaux derivatives $F'(x)$ and $F''(x)$ exist and satisfy

$$\|F'(x)\|_{L(X_1,X_2)} \le N_1, \quad \|F''(x)\|_{L(X_1,L(X_1,X_2))} \le N_2 \quad \forall x \in \Omega_R,$$
$$\Omega_R = \{x \in X_1 : \|x - x^*\|_{X_1} \le R\}, \quad R > 0. \quad (2.1)$$

Assume that $x_0 \in \Omega_R$ and $\{x_n\}$ is generated by the iterative process

$$x_{n+1} = \xi - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n) \bigl(F(x_n) - F'(x_n)(x_n - \xi)\bigr). \quad (2.2)$$

Let the complex-valued function $\Theta(\lambda, \alpha)$ be analytic in $\lambda$ on a domain $D_\alpha \subset \mathbb{C}$ with $D_\alpha \supset [0, N_1^2]$ for all $\alpha \in (0, \alpha_0]$.
Let the sequence of regularization parameters $\{\alpha_n\}$ in (2.2) satisfy the following condition.

Assumption 2.1. $0 < \alpha_{n+1} \le \alpha_n$, $n = 0, 1, \dots$; $\lim_{n \to \infty} \alpha_n = 0$; $\sup_n \dfrac{\alpha_n}{\alpha_{n+1}} < \infty$.

We denote $r = \sup_n \dfrac{\alpha_n}{\alpha_{n+1}}$.

Remark 2.1. As an example of a sequence $\{\alpha_n\}$ satisfying Assumption 2.1, we can take $\alpha_n = \alpha_0 (n+1)^{-a}$ with $\alpha_0, a > 0$.
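For this power sequence the ratio $\alpha_n/\alpha_{n+1} = ((n+2)/(n+1))^a$ is largest at $n = 0$, so $r = 2^a$. A short numerical check (our illustration):

```python
# Check Assumption 2.1 for alpha_n = alpha_0 (n+1)^(-a): positivity,
# monotone decay to zero, and the ratio bound r = 2**a.
import numpy as np

alpha0, a = 1.0, 0.5
n = np.arange(50)
alpha = alpha0 * (n + 1.0) ** (-a)
ratios = alpha[:-1] / alpha[1:]
assert np.all(alpha > 0) and np.all(np.diff(alpha) < 0)
assert ratios.max() <= 2.0 ** a + 1e-12    # sup_n alpha_n / alpha_{n+1} = 2**a
```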
We also suppose that source condition (1.7) holds with an exponent $p \ge 1/2$; that is, there exists an element $v \in X_1$ such that

$$x^* - \xi = \bigl(F'^*(x^*)F'(x^*)\bigr)^p v, \quad p \ge 1/2. \quad (2.3)$$

First, we consider the case when the original operator $F$ in (1.1) is given without errors. The aim of our analysis is to prove, under condition (2.3) and appropriate additional assumptions, that the power estimate

$$\|x_n - x^*\|_{X_1} \le l\, \alpha_n^p, \quad n = 0, 1, \dots \quad (2.4)$$

holds.

Let us impose the following further assumptions on $\Theta(\lambda, \alpha)$.

Assumption 2.2. There is $C_0 > 0$ such that for all $\alpha \in (0, \alpha_0]$

$$\sup_{\lambda \in [0, N_1^2]} \bigl|\Theta(\lambda, \alpha)\sqrt{\lambda}\bigr| \le \frac{C_0}{\sqrt{\alpha}}.$$

Assumption 2.3. For each $p \in [0, p_0]$ with $p_0 \ge 1/2$ ($p \in [0, \infty)$ if $p_0 = \infty$) the following inequality holds:

$$\sup_{\lambda \in [0, N_1^2]} \bigl|\Theta(\lambda, \alpha)\lambda - 1\bigr|\, \lambda^p \le g(p)\, \alpha^p \quad \forall \alpha \in (0, \alpha_0],$$

where $g(p)$ is a nondecreasing function of $p \ge 0$.
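For the Lavrent'ev function $\Theta(\lambda, \alpha) = 1/(\lambda + \alpha)$ of Example 2.1 below, both assumptions can be verified in closed form ($C_0 = 1/2$, $g(p) = 1$, $p_0 = 1$). The following grid-based probe (our illustration; the grid and tolerances are assumptions) confirms this numerically:

```python
# Sample Assumptions 2.2 and 2.3 for Theta(lam, alpha) = 1/(lam + alpha)
# on the spectral interval [0, N1**2].
import numpy as np

N1 = 1.0
lam = np.linspace(0.0, N1 ** 2, 2001)
for alpha in [1e-1, 1e-3, 1e-5]:
    theta = 1.0 / (lam + alpha)
    # Assumption 2.2: sup |Theta * sqrt(lam)| = 1/(2 sqrt(alpha)), so C0 = 1/2
    assert np.max(theta * np.sqrt(lam)) <= 0.5 / np.sqrt(alpha) + 1e-12
    for p in [0.5, 0.75, 1.0]:
        # Assumption 2.3: |Theta*lam - 1| lam**p = alpha lam**p/(lam+alpha) <= alpha**p
        assert np.max(np.abs(theta * lam - 1.0) * lam ** p) <= alpha ** p + 1e-12
```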
Let (2.3) be fulfilled with $p \in [1/2, p_0]$. We now suppose that $x_n \in \Omega_R$ and estimate from above the value of $\|x_{n+1} - x^*\|_{X_1}$. Using (2.3) as in [10], we obtain from (2.2) that

$$x_{n+1} - x^* = -\Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n) G(x_n) - \Bigl[E - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\Bigr](x^* - \xi)$$
$$= -\Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n) G(x_n) - \Bigl[E - \Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*)\Bigr]\bigl(F'^*(x^*)F'(x^*)\bigr)^p v$$
$$- \Bigl[\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\Bigr]\bigl(F'^*(x^*)F'(x^*)\bigr)^p v, \quad (2.5)$$
where

$$G(x_n) = F(x_n) + F'(x_n)(x^* - x_n).$$

It follows from (2.1) that

$$\|G(x_n)\|_{X_2} \le N_2 \|x_n - x^*\|_{X_1}^2. \quad (2.6)$$

Combining (2.5) and (2.6), with the use of Assumptions 2.2 and 2.3 we get

$$\|x_{n+1} - x^*\|_{X_1} \le \frac{C_0 N_2}{\sqrt{\alpha_n}} \|x_n - x^*\|_{X_1}^2 + g(p)\, \alpha_n^p \|v\|_{X_1}$$
$$+ \Bigl\|\Bigl[\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\Bigr] \bigl(F'^*(x^*)F'(x^*)\bigr)^p v\Bigr\|_{X_1}. \quad (2.7)$$
In order to obtain an upper estimate for the last norm on the right-hand side of (2.7), we define a family of positively oriented contours $\Gamma_\alpha$, $\alpha \in (0, \alpha_0]$, in the complex plane $\mathbb{C}$ such that $\Gamma_\alpha \subset D_\alpha$ and $\Gamma_\alpha$ surrounds the segment $[0, N_1^2]$ of the real line. Suppose the family $\{\Gamma_\alpha\}_{\alpha \in (0, \alpha_0]}$ satisfies the following assumption.

Assumption 2.4.

$$\sup_{\alpha \in (0, \alpha_0]} \sup_{\lambda \in \Gamma_\alpha} |\lambda| < \infty, \quad (2.8)$$

$$\sup_{\alpha \in (0, \alpha_0]}\ \sup_{\lambda \in \Gamma_\alpha,\ \mu \in [0, N_1^2]} \frac{|\lambda| + \mu}{|\lambda - \mu|} < \infty. \quad (2.9)$$
Denote the constants on the left-hand sides of (2.8) and (2.9) by $M_0$ and $M_1$, respectively. Using (1.5), one obtains

$$\Bigl\|\Bigl[\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\Bigr] \bigl(F'^*(x^*)F'(x^*)\bigr)^p v\Bigr\|_{X_1}$$
$$\le \frac{1}{2\pi} \int_{\Gamma_{\alpha_n}} |\Theta(\lambda, \alpha_n)\lambda - 1|\, \Bigl\|\Bigl(R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr) - R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\Bigr) \bigl(F'^*(x^*)F'(x^*)\bigr)^p v\Bigr\|_{X_1} |d\lambda|. \quad (2.10)$$
Note that

$$\Bigl\|\Bigl[R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr) - R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\Bigr] \bigl(F'^*(x^*)F'(x^*)\bigr)^p\Bigr\|_{L(X_1,X_1)}$$
$$\le \bigl\|R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)F'^*(x_n)\bigr\|_{L(X_2,X_1)} \bigl\|F'(x^*) - F'(x_n)\bigr\|_{L(X_1,X_2)} \bigl\|R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^p\bigr\|_{L(X_1,X_1)}$$
$$+ \bigl\|R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\bigr\|_{L(X_1,X_1)} \bigl\|F'^*(x^*) - F'^*(x_n)\bigr\|_{L(X_2,X_1)} \bigl\|F'(x^*) R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^p\bigr\|_{L(X_1,X_2)}. \quad (2.11)$$

Since $x^*, x_n \in \Omega_R$, from (2.1) we get

$$\bigl\|F'^*(x^*) - F'^*(x_n)\bigr\|_{L(X_2,X_1)} = \bigl\|F'(x^*) - F'(x_n)\bigr\|_{L(X_1,X_2)} \le N_2 \|x_n - x^*\|_{X_1}. \quad (2.12)$$
Using the spectral decomposition technique, it is not difficult to prove that for each $s \ge 0$

$$\bigl\|R\bigl(\lambda, F'^*(x)F'(x)\bigr)\bigl(F'^*(x)F'(x)\bigr)^s\bigr\|_{L(X_1,X_1)} \le \frac{M(s)}{|\lambda|^{1 - \min\{s, 1\}}}, \quad (2.13)$$

$$\bigl\|F'(x) R\bigl(\lambda, F'^*(x)F'(x)\bigr)\bigl(F'^*(x)F'(x)\bigr)^s\bigr\|_{L(X_1,X_2)} \le \frac{M(s)}{|\lambda|^{1 - \min\{s + 1/2, 1\}}} \quad \forall \lambda \in \Gamma_\alpha,\ \alpha \in (0, \alpha_0],\ x \in \Omega_R, \quad (2.14)$$

where $M(s) = M_1 \max\{L(s), L(s + 1/2)\}$ and

$$L(s) = \begin{cases} 1, & s = 0, \\ s^s (1 - s)^{1 - s}, & s \in (0, 1), \\ N_1^{2s - 2}, & s \in [1, \infty). \end{cases}$$

Letting $s = 1/2$, $s = p$, $s = 0$ in (2.13) and $s = p$ in (2.14), for all $\lambda \in \Gamma_\alpha$, $\alpha \in (0, \alpha_0]$ we obtain

$$\bigl\|R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr) F'^*(x_n)\bigr\|_{L(X_2,X_1)} = \bigl\|F'(x_n) R\bigl(\bar\lambda, F'^*(x_n)F'(x_n)\bigr)\bigr\|_{L(X_1,X_2)}$$
$$= \bigl\|R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\bigl(F'^*(x_n)F'(x_n)\bigr)^{1/2}\bigr\|_{L(X_1,X_1)} \le \frac{c_0}{\sqrt{|\lambda|}}, \quad (2.15)$$

$$\bigl\|R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^p\bigr\|_{L(X_1,X_1)} \le \frac{M(p)}{|\lambda|^{1 - \min\{p, 1\}}}, \quad (2.16)$$

$$\bigl\|R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\bigr\|_{L(X_1,X_1)} \le \frac{c_1}{|\lambda|}, \quad (2.17)$$

$$\bigl\|F'(x^*) R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^p\bigr\|_{L(X_1,X_2)} \le M(p). \quad (2.18)$$

In what follows, $c_0, c_1, \dots$ denote positive constants, and $\bar z$ is the complex conjugate of $z \in \mathbb{C}$.
By (2.8), (2.12), and (2.15)--(2.18), from (2.11) we get

$$\Bigl\|\Bigl[R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr) - R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\Bigr] \bigl(F'^*(x^*)F'(x^*)\bigr)^p\Bigr\|_{L(X_1,X_1)}$$
$$\le c_2 M(p) \|x_n - x^*\|_{X_1} \Bigl[\frac{1}{|\lambda|^{3/2 - \min\{p, 1\}}} + \frac{1}{|\lambda|}\Bigr] \le \frac{c_2 M(p) N(p) \|x_n - x^*\|_{X_1}}{|\lambda|},$$

where

$$N(p) = \begin{cases} \sqrt{M_0} + 1, & p \ge 1, \\ M_0^{\,p - 1/2} + 1, & p \in [1/2, 1). \end{cases}$$
Therefore, (2.10) and (2.11) yield

$$\Bigl\|\Bigl[\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\Bigr] \bigl(F'^*(x^*)F'(x^*)\bigr)^p v\Bigr\|_{X_1}$$
$$\le c_3 M(p) N(p) \|v\|_{X_1} \|x_n - x^*\|_{X_1} \int_{\Gamma_{\alpha_n}} \frac{|\Theta(\lambda, \alpha_n)\lambda - 1|}{|\lambda|}\, |d\lambda|. \quad (2.19)$$

Now suppose that the generating function $\Theta(\lambda, \alpha)$ satisfies the following additional condition.

Assumption 2.5.

$$\sup_{\alpha \in (0, \alpha_0]} \int_{\Gamma_\alpha} \frac{|\Theta(\lambda, \alpha)\lambda - 1|}{|\lambda|}\, |d\lambda| < \infty.$$

Assumption 2.5 implies that

$$\int_{\Gamma_{\alpha_n}} \frac{|\Theta(\lambda, \alpha_n)\lambda - 1|}{|\lambda|}\, |d\lambda| \le c_4, \quad n = 0, 1, \dots.$$
Therefore, by (2.19),

$$\Bigl\|\Bigl[\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\Bigr] \bigl(F'^*(x^*)F'(x^*)\bigr)^p v\Bigr\|_{X_1}$$
$$\le c_5 M(p) N(p) \|v\|_{X_1} \|x_n - x^*\|_{X_1}. \quad (2.20)$$

Combining (2.7) and (2.20), we finally get

$$\|x_{n+1} - x^*\|_{X_1} \le \frac{C_0 N_2}{\sqrt{\alpha_n}} \|x_n - x^*\|_{X_1}^2 + g(p)\, \alpha_n^p \|v\|_{X_1} + c_5 M(p) N(p) \|v\|_{X_1} \|x_n - x^*\|_{X_1}. \quad (2.21)$$
From (2.21), by induction, we arrive at the following assertion.

Theorem 2.1. Let Assumptions 2.1--2.5 be fulfilled. Assume that representation (2.3) holds with $p \in [1/2, p_0]$ and

$$\|x_0 - x^*\|_{X_1} \le m(p, \|v\|_{X_1})\, \alpha_0^p, \quad \alpha_0 \le \bigl(R\, l(p)^{-1}\bigr)^{1/p},$$

where

$$m(p, \|v\|_{X_1}) = \frac{2 g(p)\, r^p \|v\|_{X_1}}{1 - c_6 M(p) N(p)\, r^p \|v\|_{X_1}}, \quad \|v\|_{X_1} \le d(p),$$

$$d(p) = \min\Bigl\{\frac{1}{d_1(p) + c_7},\ \frac{d_1(p) + d_2(p) - \sqrt{2 d_1(p) d_2(p) + d_2(p)^2}}{d_1(p)^2}\Bigr\}, \quad (2.22)$$

$$c_7 > 0, \quad d_1(p) = c_6 M(p) N(p)\, r^p, \quad d_2(p) = c_6 N_2\, r^{2p} \sup_n \alpha_n. \quad (2.23)$$

Then, (2.4) holds with

$$l = \frac{2 g(p)\, r^p}{c_6}. \quad (2.24)$$

Remark 2.2. By (2.22)--(2.24), the constants $d(p)$ and $l = l(p)$ remain bounded away from zero as $p \ge 1/2$ varies over each bounded interval.
Let us now turn to the case when the original operator $F$ in (1.1) is given approximately. More precisely, we assume that instead of $F$ an approximation $\tilde F$ is known such that $\tilde F$ is twice Gâteaux differentiable and the derivatives $\tilde F'(x)$ and $\tilde F''(x)$ satisfy (2.1). Assume also that $\|\tilde F(x^*)\|_{X_2} \le \delta$. In this case, for an initial guess $x_0 \in \Omega_R$ we construct the sequence of approximate solutions $\{x_n\}$ as

$$x_{n+1} = \xi - \Theta\bigl(\tilde F'^*(x_n)\tilde F'(x_n), \alpha_n\bigr)\, \tilde F'^*(x_n) \bigl(\tilde F(x_n) - \tilde F'(x_n)(x_n - \xi)\bigr). \quad (2.25)$$

Since the original operator $F$ is not known, it is natural to assume that source condition (2.3) is satisfied only approximately. Hence, we suppose that the initial error $x^* - \xi$ is of the form

$$x^* - \xi = \bigl(\tilde F'^*(x^*)\tilde F'(x^*)\bigr)^p \tilde v + \tilde w, \quad (2.26)$$

where $\tilde v, \tilde w \in X_1$ and $\|\tilde w\|_{X_1} \le \Delta$. Using the scheme presented above, we arrive at the following statement on the behavior of iterations (2.25).
Theorem 2.2. Let Assumptions 2.1--2.5 be fulfilled. Assume that representation (2.26) holds with $p \in [1/2, p_0]$. Then, for each $d_0 > 0$ there exist positive constants $\tilde\alpha(p)$, $\tilde l(p)$, $\tilde m(p, \|v\|_{X_1})$, and $\tilde d(p)$ such that

$$\|x_n - x^*\|_{X_1} \le \tilde l(p)\, \alpha_n^p \quad (2.27)$$

for all $n \le N(\delta, \Delta)$, where

$$N(\delta, \Delta) = \max\Bigl\{n : \frac{\max\{\delta, \Delta\}}{\alpha_n^{p + 1/2}} \le d_0\Bigr\}, \quad (2.28)$$

provided that $\alpha_0$, $\|x_0 - x^*\|_{X_1}$, and $\|\tilde v\|_{X_1}$ are small enough:

$$\alpha_0 \le \tilde\alpha(p), \quad \|x_0 - x^*\|_{X_1} \le \tilde m(p, \|v\|_{X_1})\, \alpha_0^p, \quad \|\tilde v\|_{X_1} \le \tilde d(p).$$

Corollary 2.1. Let the hypotheses of Theorem 2.2 be fulfilled. Then,

$$\|x_{N(\delta, \Delta)} - x^*\|_{X_1} \le r^p\, \tilde l(p) \Bigl(\frac{\max\{\delta, \Delta\}}{d_0}\Bigr)^{\frac{2p}{2p+1}}. \quad (2.29)$$

Inequality (2.29) follows immediately from (2.27) and (2.28). From (2.29) we have

$$\lim_{(\delta, \Delta) \to 0} \|x_{N(\delta, \Delta)} - x^*\|_{X_1} = 0.$$

Therefore, the operator $\omega_{\alpha(\delta, \Delta)}$ that maps the approximate operator $\tilde F$ into the element $x_{N(\delta, \Delta)}$ defines a regularization algorithm for equation (1.1), provided that the hypotheses of Theorem 2.2 are fulfilled.
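The stopping rule (2.28) is straightforward to evaluate for a concrete parameter sequence. A minimal sketch for $\alpha_n = \alpha_0 (n+1)^{-a}$ (our illustration; the default values of $\alpha_0$, $a$, $p$, and $d_0$ are assumptions):

```python
# A priori stopping index (2.28) for the power sequence alpha_n = alpha0 (n+1)^(-a).
def stopping_index(delta, Delta, alpha0=1.0, a=2.0, p=0.5, d0=1.0):
    """Largest n with max(delta, Delta) / alpha_n**(p + 1/2) <= d0."""
    err, n = max(delta, Delta), 0
    # alpha_n decreases, so the left-hand side grows with n; advance while
    # the inequality still holds at n + 1.
    while err / (alpha0 * (n + 2.0) ** (-a)) ** (p + 0.5) <= d0:
        n += 1
    return n

print(stopping_index(1e-4, 1e-5))   # the index grows as the noise levels decrease
```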
To conclude this section, we present a number of generating functions $\Theta(\lambda, \alpha)$ for which Assumptions 2.2, 2.3, and 2.5 are fulfilled. When analyzing Assumption 2.5, we define the contours $\Gamma_\alpha$, $\alpha \in (0, \alpha_0]$, as follows (see [6, p. 119]):

$$\Gamma_\alpha = \Gamma_\alpha^{(1)} \cup \Gamma_\alpha^{(2)+} \cup \Gamma_\alpha^{(2)-} \cup \Gamma_\alpha^{(3)} \quad (2.30)$$

with

$$\Gamma_\alpha^{(1)} = \{\lambda : |\lambda| = \alpha/2,\ |\arg\lambda| \ge \varphi_0\}, \quad \varphi_0 \in (0, \pi/2),$$
$$\Gamma_\alpha^{(2)\pm} = \{\lambda : \lambda = \rho \exp(\pm i\varphi_0),\ \alpha/2 \le \rho \le \rho_0\}, \quad \rho_0 > N_1^2,$$
$$\Gamma_\alpha^{(3)} = \{\lambda : |\lambda| = \rho_0,\ |\arg\lambda| \le \varphi_0\}.$$

For the family $\{\Gamma_\alpha\}_{\alpha \in (0, \alpha_0]}$ defined by (2.30), inequality (2.8) is immediate, and inequality (2.9) follows from the law of cosines applied to the triangle $O\lambda\mu$. Therefore, Assumption 2.4 is fulfilled. Let us now turn to examples of generating functions (cf. Examples 2.1, 2.2, 2.4, and 2.5 from [1]). Direct calculations show that all of the following functions $\Theta(\lambda, \alpha)$ satisfy Assumption 2.5 with family (2.30).
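Assumption 2.5 can also be probed numerically: discretize the four pieces of (2.30) and integrate $|\Theta(\lambda, \alpha)\lambda - 1|/|\lambda|$ by the trapezoid rule. The sketch below (our illustration; $\varphi_0 = \pi/4$ and $\rho_0 = 2$ are assumed values) does this for the Lavrent'ev function of Example 2.1 and shows that the integral stays bounded as $\alpha \to 0$.

```python
import numpy as np

def assumption25_integral(alpha, phi0=np.pi / 4, rho0=2.0, n=4000):
    """Trapezoid approximation of the Assumption 2.5 integral over (2.30)."""
    ts = np.linspace(phi0, 2 * np.pi - phi0, n)   # Gamma^(1): |lam| = alpha/2, |arg lam| >= phi0
    rs = np.geomspace(alpha / 2, rho0, n)         # Gamma^(2)+-: rays arg lam = +-phi0
    to = np.linspace(-phi0, phi0, n)              # Gamma^(3): |lam| = rho0
    pieces = [alpha / 2 * np.exp(1j * ts), rs * np.exp(1j * phi0),
              rs * np.exp(-1j * phi0), rho0 * np.exp(1j * to)]
    total = 0.0
    for lam in pieces:   # Theta(lam, alpha) = 1/(lam + alpha), Example 2.1
        f = np.abs(lam / (lam + alpha) - 1.0) / np.abs(lam)
        total += np.sum(0.5 * (f[:-1] + f[1:]) * np.abs(np.diff(lam)))
    return total

for alpha in [1e-2, 1e-4, 1e-6]:
    print(alpha, round(assumption25_integral(alpha), 3))   # stays bounded as alpha -> 0
```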
Example 2.1. Consider the generating function of Lavrent'ev's method,

$$\Theta(\lambda, \alpha) = \frac{1}{\lambda + \alpha}, \quad \alpha \in (0, \alpha_0]. \quad (2.31)$$

In this case, the main iterative process (2.2) can be written in the form

$$\bigl(F'^*(x_n)F'(x_n) + \alpha_n E\bigr)(x_{n+1} - \xi) = F'^*(x_n)\bigl(F'(x_n)(x_n - \xi) - F(x_n)\bigr).$$

Recall that Assumption 2.3 holds with $p_0 = 1$ [1].
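In finite dimensions, one step of this scheme is a single regularized linear solve. The following self-contained sketch runs iteration (2.2), (2.31) on a toy two-dimensional problem (the map $F$, the initial guess, and the parameter sequence are our own illustrative assumptions, not taken from the paper):

```python
# Iteratively regularized Gauss-Newton method (2.2) with the Lavrent'ev
# generating function (2.31) on a toy problem with solution x* = (1, 1).
import numpy as np

def F(x):
    return np.array([x[0] ** 2 - 1.0, x[0] * x[1] - 1.0])

def Fprime(x):                               # derivative of F at x
    return np.array([[2.0 * x[0], 0.0], [x[1], x[0]]])

xi = np.array([0.8, 0.8])                    # initial guess, kept fixed in every step
x = xi.copy()
for n in range(30):
    alpha_n = (n + 1.0) ** (-1.0)            # a sequence satisfying Assumption 2.1
    J = Fprime(x)
    # (F'*F' + alpha_n E)(x_{n+1} - xi) = F'*(F'(x_n)(x_n - xi) - F(x_n))
    rhs = J.T @ (J @ (x - xi) - F(x))
    x = xi + np.linalg.solve(J.T @ J + alpha_n * np.eye(2), rhs)

print(x)   # approaches the solution (1, 1) as alpha_n -> 0
```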
Example 2.2. For $N = 1$, function (2.31) is contained in the following family of generating functions:

$$\Theta(\lambda, \alpha) = \frac{1}{\lambda}\Bigl[1 - \Bigl(\frac{\alpha}{\lambda + \alpha}\Bigr)^N\Bigr], \quad N = 1, 2, \dots. \quad (2.32)$$

We recall that function (2.32) generates the so-called iterated Lavrent'ev method for linear ill-posed operator equations. An iteration of method (2.2), (2.32) can be implemented as a finite inner iterative process: $x_{n+1} = x_{n+1}^{(N)}$ with $x_{n+1}^{(0)} = \xi$ and $\{x_{n+1}^{(k)}\}$ defined by the linear well-posed equations

$$\bigl(F'^*(x_n)F'(x_n) + \alpha_n E\bigr)\, x_{n+1}^{(k+1)} = \alpha_n\, x_{n+1}^{(k)} + F'^*(x_n)\bigl(F'(x_n)x_n - F(x_n)\bigr), \quad k = 0, 1, \dots, N - 1.$$

In this case, Assumption 2.3 holds with $p_0 = N$.
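A sketch of one outer step of this method (our illustration; J and F_val stand for finite-dimensional stand-ins for $F'(x_n)$ and $F(x_n)$):

```python
# One outer step of method (2.2), (2.32): N inner Lavrent'ev sweeps with
# the same matrix; with N = 1 this is exactly Example 2.1.
import numpy as np

def iterated_lavrentiev_step(J, F_val, x_n, xi, alpha_n, N):
    B = J.T @ J
    b = J.T @ (J @ x_n - F_val)              # F'*(F'(x_n) x_n - F(x_n))
    A = B + alpha_n * np.eye(B.shape[0])     # matrix of the well-posed inner problems
    z = xi.copy()                            # x_{n+1}^{(0)} = xi
    for _ in range(N):                       # k = 0, ..., N - 1
        z = np.linalg.solve(A, alpha_n * z + b)
    return z                                 # x_{n+1} = x_{n+1}^{(N)}
```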
Example 2.3. Consider the function

$$\Theta(\lambda, \alpha) = \begin{cases} \dfrac{1}{\lambda}\bigl(1 - e^{-\lambda/\alpha}\bigr), & \lambda \ne 0, \\[4pt] \dfrac{1}{\alpha}, & \lambda = 0, \end{cases} \quad (2.33)$$

which is analytic on the whole complex plane. An iteration of method (2.2), (2.33) can practically be implemented as $x_{n+1} = u(\alpha_n^{-1})$, where $u = u(t)$ is the solution of the Cauchy problem

$$\frac{du(t)}{dt} + F'^*(x_n)F'(x_n)\, u(t) = F'^*(x_n)\bigl(F'(x_n)x_n - F(x_n)\bigr), \quad u(0) = \xi.$$
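In a finite-dimensional setting, one such step amounts to integrating a linear ODE up to $t = 1/\alpha_n$. A minimal sketch (our illustration; scipy's generic solver is used for transparency rather than efficiency):

```python
# One step of method (2.2), (2.33): solve u' = c - F'*F' u, u(0) = xi,
# and return u(1/alpha_n).
import numpy as np
from scipy.integrate import solve_ivp

def exponential_step(J, F_val, x_n, xi, alpha_n):
    B = J.T @ J
    c = J.T @ (J @ x_n - F_val)          # F'*(F'(x_n) x_n - F(x_n))
    sol = solve_ivp(lambda t, u: c - B @ u, (0.0, 1.0 / alpha_n), xi,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]                  # x_{n+1} = u(1/alpha_n)
```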
Example 2.4. Let

$$\Theta(\lambda, \alpha) = \begin{cases} \dfrac{1}{\lambda}\Bigl[1 - (1 - \mu_0 \lambda)^{1/\alpha}\Bigr], & \lambda \ne 0, \\[4pt] \dfrac{\mu_0}{\alpha}, & \lambda = 0, \end{cases} \quad (2.34)$$

with $\mu_0 > 0$. The regularization parameter $\alpha > 0$ is assumed to take the values $1, \frac{1}{2}, \frac{1}{3}, \dots$, so that $1/\alpha$ is a positive integer. Note that $\Theta(\lambda, \alpha)$ is analytic on $\mathbb{C}$. An iteration of method (2.2), (2.34) takes the form $x_{n+1} = x_{n+1}^{(1/\alpha_n)}$ with $x_{n+1}^{(0)} = \xi$ and $\{x_{n+1}^{(k)}\}$ constructed by the explicit inner iteration

$$x_{n+1}^{(k+1)} = \bigl(E - \mu_0 F'^*(x_n)F'(x_n)\bigr)\, x_{n+1}^{(k)} + \mu_0 F'^*(x_n)\bigl(F'(x_n)x_n - F(x_n)\bigr), \quad k = 0, 1, \dots, 1/\alpha_n - 1.$$

It is well known that functions (2.33) and (2.34) satisfy Assumption 2.3 with $p_0 = \infty$.
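A sketch of one outer step of method (2.2), (2.34) (our illustration; the step size mu0 should satisfy $\mu_0 \|F'(x_n)\|^2 < 1$ so that the inner Landweber-type scheme is stable on the spectral interval):

```python
# One outer step of method (2.2), (2.34): 1/alpha_n explicit sweeps applied
# to the linearized equation, starting from xi.
import numpy as np

def explicit_inner_step(J, F_val, x_n, xi, n_inner, mu0):
    B = J.T @ J
    b = J.T @ (J @ x_n - F_val)       # F'*(F'(x_n) x_n - F(x_n))
    z = xi.copy()                     # x_{n+1}^{(0)} = xi
    for _ in range(n_inner):          # k = 0, ..., n_inner - 1
        z = z - mu0 * (B @ z - b)     # z <- (E - mu0 B) z + mu0 b
    return z
```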
3. Necessary conditions for the convergence of iterative methods for solving nonlinear operator equations in Hilbert spaces. In this section, under appropriate conditions on the generating function $\Theta(\lambda, \alpha)$, we prove that source representation (2.3) is almost necessary for estimate (2.4). More exactly, it will be shown that the estimate

$$\|x_n - x^*\|_{X_1} \le C_1\, \alpha_n^p, \quad n = 0, 1, \dots; \quad C_1 > 0, \quad p > 1/2 \quad (3.1)$$

implies $x^* - \xi \in R\Bigl(\bigl(F'^*(x^*)F'(x^*)\bigr)^{p - \varepsilon}\Bigr)$ for each $\varepsilon \in (0, p)$ (cf. Theorems 3.1 and 3.2 from [1]).
Thus, assume that (3.1) is fulfilled. Suppose also that (2.1) and Assumptions 2.1, 2.2, and 2.4 are satisfied. Without loss of generality, we may assume that $C_1 \alpha_0^p \le R$ and, hence, $x_n \in \Omega_R$, $n = 0, 1, \dots$. From (2.5), (2.6), and (3.1) we obtain

$$\Bigl\|\Bigl[E - \Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*)\Bigr](x^* - \xi)\Bigr\|_{X_1}$$
$$\le C_1\, \alpha_{n+1}^p + C_0 C_1^2 N_2\, \alpha_n^{2p - 1/2} + \Bigl\|\Bigl[\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\Bigr](x^* - \xi)\Bigr\|_{X_1}$$
$$\le c_8\, \alpha_n^p + c_9 \bigl\|\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\bigr\|_{L(X_1,X_1)}. \quad (3.2)$$
By (2.10)--(2.12) (with $p = 0$), (2.15), and (2.17), for the last norm in (3.2) we get

$$\bigl\|\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\bigr\|_{L(X_1,X_1)}$$
$$\le c_{10}\, \alpha_n^p \int_{\Gamma_{\alpha_n}} \frac{|\Theta(\lambda, \alpha_n)\lambda - 1|}{|\lambda|^{3/2}}\, |d\lambda|, \quad n = 0, 1, \dots. \quad (3.3)$$

To continue, we need the following generalization of Assumption 2.5.

Assumption 3.1. For each $s \in (0, 3/2]$,

$$\sup_{\alpha \in (0, \alpha_0]} \alpha^{s - 1} \int_{\Gamma_\alpha} \frac{|\Theta(\lambda, \alpha)\lambda - 1|}{|\lambda|^s}\, |d\lambda| < \infty. \quad (3.4)$$
Hence, from (3.3) and (3.4) we get

$$\bigl\|\Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*) - \Theta\bigl(F'^*(x_n)F'(x_n), \alpha_n\bigr) F'^*(x_n)F'(x_n)\bigr\|_{L(X_1,X_1)} \le c_{11}\, \alpha_n^{p - 1/2}. \quad (3.5)$$

Given $\alpha \in (0, \alpha_0]$, denote

$$\Phi(\alpha) = \Bigl\|\Bigl[E - \Theta\bigl(F'^*(x^*)F'(x^*), \alpha\bigr) F'^*(x^*)F'(x^*)\Bigr](x^* - \xi)\Bigr\|_{X_1}. \quad (3.6)$$

By (3.2), (3.5), and (3.6) we have

$$\Phi(\alpha_n) \le c_{12}\, \alpha_n^{p - 1/2}. \quad (3.7)$$
We continue the analysis of estimate (3.7) under the following condition.

Assumption 3.2. For each $\lambda \in [0, N_1^2]$ the function $\alpha \mapsto |\Theta(\lambda, \alpha)\lambda - 1|$ is nondecreasing on $(0, \alpha_0]$.

Let $E_\lambda$, $\lambda \in [0, N_1^2]$, be the family of spectral projectors of the operator $F'^*(x^*)F'(x^*)$. From Assumption 3.2 it follows that

$$\Phi(\alpha) = \Bigl(\int_{[0, N_1^2]} |\Theta(\lambda, \alpha)\lambda - 1|^2\, d\|E_\lambda(x^* - \xi)\|_{X_1}^2\Bigr)^{1/2}$$

is a nondecreasing function of $\alpha \in (0, \alpha_0]$. By (3.7), for all $\alpha \in (\alpha_{n+1}, \alpha_n]$, $n = 0, 1, \dots$,

$$\frac{\Phi(\alpha)}{\alpha^{p - 1/2}} \le \frac{\Phi(\alpha_n)}{\alpha_{n+1}^{p - 1/2}} \le c_{12} \Bigl(\frac{\alpha_n}{\alpha_{n+1}}\Bigr)^{p - 1/2} \le c_{12}\, r^{p - 1/2} < \infty.$$

Consequently,

$$\Phi(\alpha) \le c_{12}\, r^{p - 1/2}\, \alpha^{p - 1/2} \quad \forall \alpha \in (0, \alpha_0]. \quad (3.8)$$

Theorem 3.1 from [1] now implies the inclusion

$$x^* - \xi \in R\Bigl(\bigl(F'^*(x^*)F'(x^*)\bigr)^{p - 1/2 - \varepsilon_1}\Bigr) \quad \forall \varepsilon_1 \in (0, p - 1/2], \quad (3.9)$$

provided that Assumption 3.1 of [1] holds. Note that the same result could be obtained using Assumption 2.2 of this paper instead of Assumption 3.1 of [1] (see [7, p. 82] for details). Therefore, to justify (3.9), we refer to Assumption 2.2, which has already been used in the preceding considerations.
Denote $p_1 = p - 1/2 - \varepsilon_1$. Due to (3.9), there exists an element $v^{(0)} \in X_1$ such that

$$x^* - \xi = \bigl(F'^*(x^*)F'(x^*)\bigr)^{p_1} v^{(0)}. \quad (3.10)$$

Representation (3.10) can now be used to estimate $\Phi(\alpha_n)$. Substituting (3.10) into (3.2), we get

$$\Phi(\alpha_n) = \Bigl\|\Bigl[E - \Theta\bigl(F'^*(x^*)F'(x^*), \alpha_n\bigr) F'^*(x^*)F'(x^*)\Bigr](x^* - \xi)\Bigr\|_{X_1}$$
$$\le c_{13} \Bigl[\alpha_n^p + \int_{\Gamma_{\alpha_n}} |\Theta(\lambda, \alpha_n)\lambda - 1|\, \bigl\|\bigl(R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr) - R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^{p_1}\bigr\|_{L(X_1,X_1)}\, |d\lambda|\Bigr]. \quad (3.11)$$
Then, similarly to (2.11), we obtain

$$\bigl\|\bigl(R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr) - R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^{p_1}\bigr\|_{L(X_1,X_1)}$$
$$\le \bigl\|R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)F'^*(x_n)\bigr\|_{L(X_2,X_1)} \bigl\|F'(x^*) - F'(x_n)\bigr\|_{L(X_1,X_2)} \bigl\|R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^{p_1}\bigr\|_{L(X_1,X_1)}$$
$$+ \bigl\|R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\bigr\|_{L(X_1,X_1)} \bigl\|F'^*(x^*) - F'^*(x_n)\bigr\|_{L(X_2,X_1)} \bigl\|F'(x^*)R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr)\bigl(F'^*(x^*)F'(x^*)\bigr)^{p_1}\bigr\|_{L(X_1,X_2)}. \quad (3.12)$$

By (2.1) and (3.1),

$$\bigl\|F'^*(x^*) - F'^*(x_n)\bigr\|_{L(X_2,X_1)} = \bigl\|F'(x^*) - F'(x_n)\bigr\|_{L(X_1,X_2)} \le c_{14}\, \alpha_n^p. \quad (3.13)$$

Letting $s = p_1$ in (2.13) and (2.14), we get

$$\bigl\|R\bigl(\lambda, F'^*(x)F'(x)\bigr)\bigl(F'^*(x)F'(x)\bigr)^{p_1}\bigr\|_{L(X_1,X_1)} \le \frac{c_{14}}{|\lambda|^{1 - \min\{p_1, 1\}}}, \quad (3.14)$$

$$\bigl\|F'(x)R\bigl(\lambda, F'^*(x)F'(x)\bigr)\bigl(F'^*(x)F'(x)\bigr)^{p_1}\bigr\|_{L(X_1,X_2)} \le \frac{c_{15}}{|\lambda|^{1 - \min\{p_1 + 1/2, 1\}}} \quad \forall \lambda \in \Gamma_{\alpha_n}. \quad (3.15)$$
From (3.12)--(3.15) it follows that

$$\bigl\|\bigl[R\bigl(\lambda, F'^*(x^*)F'(x^*)\bigr) - R\bigl(\lambda, F'^*(x_n)F'(x_n)\bigr)\bigr]\bigl(F'^*(x^*)F'(x^*)\bigr)^{p_1}\bigr\|_{L(X_1,X_1)} \le \frac{c_{16}\, \alpha_n^p}{|\lambda|^{2 - \min\{p_1 + 1/2,\, 1\}}}.$$

Hence, from (3.11) for $n = 0, 1, \dots$ we obtain

$$\Phi(\alpha_n) \le c_{17}\, \alpha_n^p \Bigl[1 + \int_{\Gamma_{\alpha_n}} \frac{|\Theta(\lambda, \alpha_n)\lambda - 1|}{|\lambda|^{2 - \min\{p_1 + 1/2,\, 1\}}}\, |d\lambda|\Bigr]. \quad (3.16)$$
Let us now analyze the two possible cases. If $p_1 \ge 1/2$, then Assumption 3.1 and (3.16) yield $\Phi(\alpha_n) \le c_{18}\, \alpha_n^p$. Hence,

$$x^* - \xi = \bigl(F'^*(x^*)F'(x^*)\bigr)^{p - \varepsilon_1} v^{(1)}, \quad v^{(1)} \in X_1, \quad (3.17)$$

for all $\varepsilon_1 \in (0, p)$, provided that Assumption 2.2 is fulfilled (see (3.9) and (3.10)).

Suppose $p_1 \in (0, 1/2)$. In this case, by (3.16) we have

$$\Phi(\alpha_n) \le c_{19}\, \alpha_n^{p - (1/2 - p_1)}. \quad (3.18)$$

Consequently, there exists $v^{(2)} \in X_1$ such that

$$x^* - \xi = \bigl(F'^*(x^*)F'(x^*)\bigr)^{p - (1/2 - p_1) - \varepsilon_2}\, v^{(2)} \quad (3.19)$$

for all $\varepsilon_2 \in (0, p - (1/2 - p_1))$. Representation (3.19) can be used to improve estimate (3.18) by substituting (3.19) for (3.10) in (3.2). Repeating this process of iterative estimates for $\Phi(\alpha_n)$, we construct positive sequences $p_k, \varepsilon_k$, $k = 1, 2, \dots$, where

$$p_{k+1} = p - (1/2 - p_k) - \varepsilon_{k+1}, \quad \varepsilon_{k+1} \in (0, p - (1/2 - p_k)), \quad (3.20)$$
and

$$x^* - \xi = \bigl(F'^*(x^*)F'(x^*)\bigr)^{p_{k+1}} v^{(k+1)}, \quad k = 1, 2, \dots.$$

Suppose $\{\varepsilon_k\}$ are chosen in such a manner that

$$\lim_{k \to \infty} \varepsilon_k = 0, \quad \sup_k \varepsilon_k < p - 1/2. \quad (3.21)$$

If at the $k$-th step we get $p_k \ge 1/2$, then representation (3.17) holds. Let us prove that the inequality $p_k \ge 1/2$ is certainly attained at some finite step $k = k_0$. Assume the contrary, i.e., $p_k < 1/2$ for all $k = 1, 2, \dots$. Since $p > 1/2$, from (3.20) and (3.21) we conclude that $p_{k+1} > p_k$. Therefore, the bounded sequence $\{p_k\}$ possesses a limit $\lim_{k \to \infty} p_k = \tilde p \le 1/2$. Passing to the limit in both parts of equality (3.20), we arrive at the contradictory relation $p = 1/2$ (see (3.1)). Thus, we have proved the following statement.

Theorem 3.1. Let Assumptions 2.1, 2.2, 2.4, 3.1, and 3.2 hold. Assume that iterative process (2.2) generates a sequence $\{x_n\}$ such that estimate (3.1) holds. Then, for each $\varepsilon \in (0, p)$,

$$x^* - \xi \in R\Bigl(\bigl(F'^*(x^*)F'(x^*)\bigr)^{p - \varepsilon}\Bigr).$$
4. Iterative regularization methods for nonlinear equations in Banach spaces. In this section we study the class of iterative methods

$$x_{n+1} = \xi - \Theta\bigl(F'(x_n), \alpha_n\bigr)\bigl(F(x_n) - F'(x_n)(x_n - \xi)\bigr), \quad x_0 \in \Omega_R, \quad (4.1)$$

for solving ill-posed operator equations (1.1) when $X_1 = X_2 = X$ is a complex Banach space. Assume that the operator $F$ possesses first and second Gâteaux derivatives and that the operators $F'(x), F''(x)$ satisfy (2.1) with $X_1$ and $X_2$ replaced by $X$. Let the sequence of regularization parameters $\{\alpha_n\}$ satisfy Assumption 2.1. For simplicity we write $L(X) = L(X, X)$.

The aim of this section is to extend the results of Sections 2 and 3 for Hilbert spaces to iterations (4.1) in the Banach space $X$. In this case, representation (1.7) with $p \ge 1/2$ should be replaced by (1.8) with $p \ge 1$. Note that in the most interesting case, when $0 \in \sigma(F'(x^*))$, the power $F'(x^*)^p$ cannot be defined directly by (1.5), except for natural exponents $p$. For the definition and properties of fractional powers $A^p$, $p > 0$, of operators $A \in L(X)$ with $0 \in \sigma(A)$ we refer to [13, Ch. 4; 14, Ch. I]. Throughout this section we deal with the case when the operator $F$ is specified without errors. When $F$ is noisy, following the scheme of Section 2, it is not difficult to construct a stopping criterion for (4.1) that turns (4.1) into a regularization algorithm.
To justify process (4.1), we first prove that formula (1.5) is applicable to the operator $A = F'(x_n)$. Fixing $R_0 > \|F'(x^*)\|_{L(X)}$, for $r > 0$, $z \in \mathbb{C}$, and $\varphi \in (0, \pi)$ we define

$$K(\varphi) = \{\zeta \in \mathbb{C} : |\arg\zeta| \le \varphi\}, \quad S_r(z) = \{\zeta \in \mathbb{C} : |\zeta - z| \le r\}, \quad K(r, \varphi) = K(\varphi) \cap S_r(0).$$

Suppose the operator $F'(x^*)$ satisfies the following condition.

Assumption 4.1. There are constants $\varphi_0 \in (0, \pi)$ and $C_2 > 0$ such that $\sigma(F'(x^*)) \subset K(R_0, \varphi_0)$ and

$$\bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)} \le \frac{C_2}{|\lambda|} \quad \forall \lambda \in \mathbb{C} \setminus K(R_0, \varphi_0). \quad (4.2)$$

Remark 4.1. Assumption 4.1 is fulfilled with some $\varphi_0 \in (0, \pi)$ in each of the following cases:

1) $F'(x^*)$ is an accretive operator in a Hilbert space $X$, i.e., $\mathrm{Re}\,(F'(x^*)u, u)_X \ge 0$ for all $u \in X$, where $(\cdot\,, \cdot)_X$ is the scalar product in $X$;

2) $F'(x^*)$ is a spectral operator of scalar type such that $\sigma(F'(x^*)) \subset K(R_0, \psi_0)$ with $\psi_0 \in (0, \pi)$ [15, Ch. XV, Section 6];

3) $F'(x^*)$ satisfies $\sigma(F'(x^*)) \subset K(R_0, \psi_0)$ with $\psi_0 \in (0, \pi)$, and for all $t > 0$ [13, Ch. 4, Section 14]

$$\bigl\|R\bigl(-t, F'(x^*)\bigr)\bigr\|_{L(X)} \le \frac{c_{20}}{t}.$$

We also impose the following restriction on the generating functions $\Theta(\lambda, \alpha)$.
Assumption 4.2. For each $\alpha > 0$ the function $\Theta(\lambda, \alpha)$ is analytic in $\lambda$ on an open subset $D_\alpha \subset \mathbb{C}$ with

$$D_\alpha \supset K_\alpha(R_0, C_3, \varphi_0), \quad K_\alpha(R_0, C_3, \varphi_0) = K(R_0, \varphi_0) \cup S_{\min\{R_0,\, C_3\alpha\}}(0),$$

where $C_3 \in (0, 1)$ is a constant.
Using Assumptions 4.1 and 4.2, one can define the function of the operator $F'(x_n)$ in (4.1) as

$$\Theta\bigl(F'(x_n), \alpha_n\bigr) = \frac{1}{2\pi i} \int_{\Gamma_n} \Theta(\lambda, \alpha_n)\, R\bigl(\lambda, F'(x_n)\bigr)\, d\lambda \quad (4.3)$$

with an appropriate positively oriented contour $\Gamma_n$. To justify (4.3), we recall the following well-known proposition (see, e.g., [16, Section 5.4]).

Lemma 4.1. Let $\lambda \in \rho(A)$, $A \in L(X)$, and $B \in L(X)$.

1) Assume that $\|BR(\lambda, A)\|_{L(X)} < 1$. Then $\lambda \in \rho(A + B)$; moreover, the following representation is valid:

$$R(\lambda, A + B) = R(\lambda, A) \sum_{k=0}^{\infty} \bigl(BR(\lambda, A)\bigr)^k. \quad (4.4)$$

2) Assume that $\|R(\lambda, A)B\|_{L(X)} < 1$. Then $\lambda \in \rho(A + B)$ and

$$R(\lambda, A + B) = \sum_{k=0}^{\infty} \bigl(R(\lambda, A)B\bigr)^k R(\lambda, A). \quad (4.5)$$

The series in (4.4) and (4.5) converge absolutely in $L(X)$.
Given a subset $G \subset \mathbb{C}$, we denote by $\mathrm{int}\,G$ and $\mathrm{fr}\,G = G \setminus \mathrm{int}\,G$ the interior and the boundary of $G$, respectively.

Lemma 4.2. Let the inequalities

$$\|x_n - x^*\|_X \le \frac{C_3\, \alpha_n \varkappa_0}{N_2 C_2}, \quad \frac{C_3\, \alpha_0 \varkappa_0}{N_2 C_2} \le R \quad (4.6)$$

hold with some $\varkappa_0 \in (0, 1)$. Then $\sigma(F'(x_n)) \subset \mathrm{int}\, K_{\alpha_n}(R_0, C_3, \varphi_0)$.

Proof. By construction of $K_\alpha(R_0, C_3, \varphi_0)$, for all $\lambda \in \mathbb{C} \setminus \mathrm{int}\, K_{\alpha_n}(R_0, C_3, \varphi_0)$ we have $|\lambda| > C_3 \alpha_n$. Note that $x_n \in \Omega_R$ by (4.6). Letting in (4.4) $A = F'(x^*)$ and $B = F'(x_n) - F'(x^*)$, so that $A + B = F'(x_n)$, with the use of (2.1) and (4.2) we get

$$\|BR(\lambda, A)\|_{L(X)} \le \|F'(x_n) - F'(x^*)\|_{L(X)}\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)} \le \frac{N_2 C_2\, \|x_n - x^*\|_X}{|\lambda|} \le \varkappa_0.$$

Therefore, $\lambda \in \rho(F'(x_n))$. This completes the proof.
Let $\{\Gamma_\alpha\}_{\alpha \in (0, \infty)}$ be a family of positively oriented contours such that $\Gamma_\alpha \subset D_\alpha$ and $\Gamma_\alpha$ surrounds $\mathrm{int}\, K_\alpha(R_0, C_3, \varphi_0)$ for each $\alpha > 0$. Suppose in addition that $\Gamma_\alpha$ does not enclose the point $\lambda = -C_4\alpha$, where $C_4 \in (C_3, 1)$ is fixed. Such families exist by Assumptions 4.1 and 4.2. Lemma 4.2 now implies that if inequalities (4.6) hold, then (4.3) is valid with $\Gamma_n = \Gamma_{\alpha_n}$.

Assume that the error $x^* - \xi$ possesses a sourcewise representation

$$x^* - \xi = F'(x^*)^p v, \quad v \in X, \quad p \ge 1. \quad (4.7)$$

In this connection, we recall some necessary concepts and results from the theory of fractional powers of linear operators in Banach spaces.

Let $A \in L(X)$. For an exponent $p \in \{1, 2, \dots\}$, according to the usual definition, $A^p = A \cdot \ldots \cdot A$ ($p$ times). Suppose now that the operator $A \in L(X)$ satisfies Assumption 4.1 (with $F'(x^*)$ replaced by $A$) and that the exponent $p \in (0, 1)$. Then the power $A^p$ is defined as (see [14, Ch. I, Section 5; 17])

$$A^p = \frac{\sin \pi p}{\pi} \int_0^{\infty} t^{p-1} (tE + A)^{-1} A\, dt. \quad (4.8)$$
By (4.2), the integral in (4.8) converges in the Bochner sense and represents an operator $A^p \in L(X)$. When $p \in (m, m + 1)$ with $m \in \{1, 2, \dots\}$, the operator $A^p$ is defined by

$$A^p = A^{p - m} A^m \equiv A^m A^{p - m}.$$

Let $A \in L(X)$ satisfy Assumption 4.1. Denote $A_\varepsilon = A + \varepsilon E$, $\varepsilon > 0$. Then, for all $\varepsilon > 0$ and $p > 0$ the power $A_\varepsilon^p$ can be defined by formula (1.5) with $\varphi(\lambda) = \lambda^p$, provided that the contour $\gamma$ surrounds the spectrum $\sigma(A_\varepsilon) = \{\lambda + \varepsilon : \lambda \in \sigma(A)\}$ and does not enclose the point $\lambda = 0$.

Lemma 4.3 ([14, Ch. I, Section 5]). Let an operator $A \in L(X)$ satisfy Assumption 4.1. Then, for each $p \in (0, 1)$,

$$\|A_\varepsilon^p - A^p\|_{L(X)} \le c_{21}\, \varepsilon^p \quad \forall \varepsilon > 0, \quad (4.9)$$

with a constant $c_{21}$ depending only on $A$ and $p$.
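Definition (4.8) is also convenient numerically. The sketch below (our illustration; the log-spaced quadrature grid and the symmetric positive definite test matrix are assumptions made for easy cross-checking) approximates $A^p$ for $p \in (0, 1)$ by the integral and compares it with the eigendecomposition answer.

```python
# Fractional power A^p, p in (0,1), via the integral (4.8):
# A^p = sin(pi p)/pi * int_0^inf t^(p-1) (tE + A)^(-1) A dt.
import numpy as np

def fractional_power(A, p, nodes=4000, t_min=1e-10, t_max=1e10):
    E = np.eye(A.shape[0])
    t = np.geomspace(t_min, t_max, nodes)   # log grid resolves both endpoints
    vals = np.array([s ** (p - 1.0) * np.linalg.solve(s * E + A, A) for s in t])
    # trapezoid rule on the (truncated) half line
    integral = 0.5 * np.einsum('i,ijk->jk', np.diff(t), vals[:-1] + vals[1:])
    return np.sin(np.pi * p) / np.pi * integral

A = np.array([[2.0, 1.0], [1.0, 3.0]])      # SPD, so Assumption 4.1 holds
w, V = np.linalg.eigh(A)
print(np.allclose(fractional_power(A, 0.5), V @ np.diag(w ** 0.5) @ V.T, atol=1e-3))
```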
The following considerations focus on the derivation of the estimate

$$\|x_n - x^*\|_X \le l_0 \|v\|_X\, \alpha_n^p \quad (4.10)$$

for the rate of convergence of iterations (4.1). The value of the coefficient $l_0$, as well as the necessary restrictions on $\|v\|_X$, will be derived in the course of the analysis. Suppose (4.10) holds at the $n$-th iteration and $l_0 \|v\|_X \alpha_0^p \le R$. Let us prove that (4.10) remains valid at step $n + 1$. From (4.10) and Lemma 4.2 it follows that (4.3) is applicable with $\Gamma_n = \Gamma_{\alpha_n}$ provided that

$$\frac{\alpha_0^{p-1} N_2 C_2\, l_0 \|v\|_X}{C_3} \le \varkappa_0, \quad l_0 \|v\|_X\, \alpha_0^p \le R. \quad (4.11)$$
Assume that inequalities (4.11) hold. By (4.1),

$$x_{n+1} - x^* = -\Theta\bigl(F'(x_n), \alpha_n\bigr) G(x_n) - \Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr](x^* - \xi)$$
$$- \Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr](x^* - \xi), \quad (4.12)$$

where

$$G(x_n) = F(x_n) + F'(x_n)(x^* - x_n).$$

As in Section 2, from (2.1) it follows that

$$\|G(x_n)\|_X \le N_2 \|x_n - x^*\|_X^2. \quad (4.13)$$
From (4.7) and (4.12) we obtain

$$\|x_{n+1} - x^*\|_X \le \bigl\|\Theta\bigl(F'(x_n), \alpha_n\bigr)\bigr\|_{L(X)} \|G(x_n)\|_X + \Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^p v\Bigr\|_X$$
$$+ \Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr] F'(x^*)^p v\Bigr\|_X. \quad (4.14)$$

In view of (4.3),

$$\bigl\|\Theta\bigl(F'(x_n), \alpha_n\bigr)\bigr\|_{L(X)} \le \frac{1}{2\pi} \int_{\Gamma_{\alpha_n}} |\Theta(\lambda, \alpha_n)|\, \bigl\|R\bigl(\lambda, F'(x_n)\bigr)\bigr\|_{L(X)}\, |d\lambda|.$$
By (2.1), (4.2), and (4.11),

$$\|F'(x_n) - F'(x^*)\|_{L(X)}\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)} \le \varkappa_0 < 1. \quad (4.15)$$

Hence, from Lemma 4.1 it follows that for all $\lambda \in \Gamma_{\alpha_n}$

$$\bigl\|R\bigl(\lambda, F'(x_n)\bigr)\bigr\|_{L(X)} \le \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)} \sum_{k=0}^{\infty} \Bigl[\|F'(x_n) - F'(x^*)\|_{L(X)}\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)}\Bigr]^k. \quad (4.16)$$

Using (4.2), (4.15), and (4.16), we get

$$\bigl\|R\bigl(\lambda, F'(x_n)\bigr)\bigr\|_{L(X)} \le \frac{C_2}{(1 - \varkappa_0)|\lambda|} \quad \forall \lambda \in \Gamma_{\alpha_n}.$$
Therefore,

$$\bigl\|\Theta\bigl(F'(x_n), \alpha_n\bigr)\bigr\|_{L(X)} \le \frac{C_2}{2\pi(1 - \varkappa_0)} \int_{\Gamma_{\alpha_n}} \frac{|\Theta(\lambda, \alpha_n)|}{|\lambda|}\, |d\lambda|. \quad (4.17)$$

To continue our examination of (4.14), we need the following assumption.

Assumption 4.3.

$$\sup_{\alpha \in (0, \alpha_0]} \Bigl[\alpha \int_{\Gamma_\alpha} \frac{|\Theta(\lambda, \alpha)|}{|\lambda|}\, |d\lambda|\Bigr] < \infty. \quad (4.18)$$
By (4.18), we get $\displaystyle\int_{\Gamma_{\alpha_n}} \frac{|\Theta(\lambda, \alpha_n)|}{|\lambda|}\, |d\lambda| \le \frac{c_{22}}{\alpha_n}$, $n = 0, 1, \dots$. From (4.10), (4.13), and (4.17), for the first term on the right-hand side of (4.14) we obtain

$$\bigl\|\Theta\bigl(F'(x_n), \alpha_n\bigr)\bigr\|_{L(X)} \|G(x_n)\|_X \le \frac{C_2 c_{22} N_2\, \alpha_0^{p-1}}{2\pi(1 - \varkappa_0)}\, l_0^2 \|v\|_X^2\, \alpha_n^p \equiv c_{23}\, l_0^2 \|v\|_X^2\, \alpha_n^p. \quad (4.19)$$
Let $m = [p]$ and $\mu = \{p\} = p - [p]$ be the integer and fractional parts of $p$, respectively. For the second term in (4.14) we can write

$$\Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^p v\Bigr\|_X \le \Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^m \bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu v\Bigr\|_X$$
$$+ \Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^m \Bigl[\bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu - F'(x^*)^\mu\Bigr] v\Bigr\|_X. \quad (4.20)$$

In connection with (4.20), we impose the following restriction on $\Theta(\lambda, \alpha)$.

Assumption 4.4. For all $\alpha \in (0, \alpha_0]$ and $\tau \in \{0\} \cup [1, \bar p]$, $\bar p \ge 1$,

$$\int_{\Gamma_\alpha} |1 - \Theta(\lambda, \alpha)\lambda|\, |\lambda|^{\tau - 1}\, |d\lambda| \le c_{24}\, \alpha^\tau \quad (4.21)$$

with a constant $c_{24}$ depending only on $\bar p$.
Let $p \in [1, \bar p]$. By construction of $\Gamma_\alpha$, the point $\lambda = -C_4\alpha_n$ lies outside the contour $\Gamma_{\alpha_n}$. Hence, from (1.5),

$$\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^m \bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu = \frac{1}{2\pi i} \int_{\Gamma_{\alpha_n}} \bigl(1 - \Theta(\lambda, \alpha_n)\lambda\bigr)\, \lambda^m (\lambda + C_4\alpha_n)^\mu\, R\bigl(\lambda, F'(x^*)\bigr)\, d\lambda.$$

Then, by (4.2), (4.21), and the inequality $|\lambda| > C_3\alpha_n$ for all $\lambda \in \Gamma_{\alpha_n}$, we have

$$\Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^m \bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu v\Bigr\|_X$$
$$\le \frac{1}{2\pi} \|v\|_X \int_{\Gamma_{\alpha_n}} |1 - \Theta(\lambda, \alpha_n)\lambda|\, |\lambda|^m\, |\lambda + C_4\alpha_n|^\mu\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)}\, |d\lambda|$$
$$\le \frac{C_2}{2\pi} \|v\|_X \int_{\Gamma_{\alpha_n}} |1 - \Theta(\lambda, \alpha_n)\lambda|\, |\lambda|^{m-1} \bigl(|\lambda|^\mu + (C_4\alpha_n)^\mu\bigr)\, |d\lambda| \le \frac{C_2 c_{24}}{2\pi} \Bigl[1 + \Bigl(\frac{C_4}{C_3}\Bigr)^\mu\Bigr] \|v\|_X\, \alpha_n^p. \quad (4.22)$$
Since $m \in [1, \bar p]$, inequalities (4.9) and (4.21) with $\tau = m$ yield

$$\Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^m \Bigl[\bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu - F'(x^*)^\mu\Bigr] v\Bigr\|_X$$
$$\le c_{21} (C_4\alpha_n)^\mu \|v\|_X\, \Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^m\Bigr\|_{L(X)} \le \frac{C_2 c_{21} c_{24}}{2\pi}\, C_4^\mu\, \|v\|_X\, \alpha_n^p. \quad (4.23)$$
Combining (4.20), (4.22), and (4.23), we get

$$\Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr] F'(x^*)^p v\Bigr\|_X \le \frac{C_2 c_{24}}{2\pi} \Bigl[1 + \Bigl(\frac{C_4}{C_3}\Bigr)^\mu + c_{21} C_4^\mu\Bigr] \|v\|_X\, \alpha_n^p \equiv c_{25}\, \|v\|_X\, \alpha_n^p. \quad (4.24)$$
For the third term in (4.14) we have the estimate

$$\Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr] F'(x^*)^p v\Bigr\|_X$$
$$\le \Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr] F'(x^*)^m \bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu v\Bigr\|_X$$
$$+ \Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr] F'(x^*)^m \Bigl[\bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu - F'(x^*)^\mu\Bigr] v\Bigr\|_X. \quad (4.25)$$
From (4.2) and (4.3) it follows that

$$\Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr] F'(x^*)^m \bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu v\Bigr\|_X$$
$$\le \Bigl\|\Bigl[\bigl(E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\bigr) - \bigl(E - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\bigr)\Bigr] F'(x^*)^m\Bigr\|_{L(X)}\, \bigl\|\bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu\bigr\|_{L(X)}\, \|v\|_X$$
$$\le \frac{1}{2\pi}\, \bigl\|\bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu\bigr\|_{L(X)}\, \|v\|_X \int_{\Gamma_{\alpha_n}} |1 - \Theta(\lambda, \alpha_n)\lambda|\, \Bigl\|\Bigl[R\bigl(\lambda, F'(x^*)\bigr) - R\bigl(\lambda, F'(x_n)\bigr)\Bigr] F'(x^*)^m\Bigr\|_{L(X)}\, |d\lambda|. \quad (4.26)$$

By (4.9),

$$\bigl\|\bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu\bigr\|_{L(X)} \le \|F'(x^*)^\mu\|_{L(X)} + c_{21} (C_4\alpha_0)^\mu. \quad (4.27)$$
From (4.5), (4.10), and (4.16), for all $\lambda \in \Gamma_{\alpha_n}$ we get

$$\Bigl\|\Bigl[R\bigl(\lambda, F'(x^*)\bigr) - R\bigl(\lambda, F'(x_n)\bigr)\Bigr] F'(x^*)^m\Bigr\|_{L(X)}$$
$$\le \frac{\bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)}\, \|F'(x_n) - F'(x^*)\|_{L(X)}}{1 - \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)}\, \|F'(x_n) - F'(x^*)\|_{L(X)}}\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr) F'(x^*)^m\bigr\|_{L(X)}$$
$$\le \frac{C_2 N_2}{(1 - \varkappa_0)|\lambda|}\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr) F'(x^*)^m\bigr\|_{L(X)}\, l_0 \|v\|_X\, \alpha_n^p. \quad (4.28)$$

Since $m \ge 1$, the equality

$$R\bigl(\lambda, F'(x^*)\bigr) F'(x^*) = -E + \lambda R\bigl(\lambda, F'(x^*)\bigr)$$

and (4.2) imply that

$$\bigl\|R\bigl(\lambda, F'(x^*)\bigr) F'(x^*)^m\bigr\|_{L(X)} \le (1 + C_2)\, \|F'(x^*)\|_{L(X)}^{m-1}. \quad (4.29)$$
From (4.21) and (4.26)--(4.29) we obtain

$$\Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr] F'(x^*)^m \bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu v\Bigr\|_X$$
$$\le \frac{C_2 (1 + C_2) N_2}{2\pi(1 - \varkappa_0)} \bigl(\|F'(x^*)^\mu\|_{L(X)} + c_{21}(C_4\alpha_n)^\mu\bigr)\, \|F'(x^*)\|_{L(X)}^{m-1}\, l_0 \|v\|_X^2\, \alpha_n^p \int_{\Gamma_{\alpha_n}} \frac{|1 - \Theta(\lambda, \alpha_n)\lambda|}{|\lambda|}\, |d\lambda|$$
$$\le \frac{C_2 (1 + C_2) c_{24} N_2}{2\pi(1 - \varkappa_0)} \bigl(\|F'(x^*)^\mu\|_{L(X)} + c_{21}(C_4\alpha_0)^\mu\bigr)\, \|F'(x^*)\|_{L(X)}^{m-1}\, l_0 \|v\|_X^2\, \alpha_n^p \equiv c_{26}\, l_0 \|v\|_X^2\, \alpha_n^p. \quad (4.30)$$
In a similar way, for the second term in (4.25) we get

$$\Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr] F'(x^*)^m \Bigl[\bigl(F'(x^*) + C_4\alpha_n E\bigr)^\mu - F'(x^*)^\mu\Bigr] v\Bigr\|_X \le c_{27}\, l_0 \|v\|_X^2\, \alpha_n^p. \quad (4.31)$$

Estimates (4.14), (4.19), (4.24), (4.25), (4.30), and (4.31) imply

$$\|x_{n+1} - x^*\|_X \le \Bigl[c_{23}\, l_0^2 \|v\|_X^2 + c_{25}\, \|v\|_X + c_{28}\, l_0 \|v\|_X^2\Bigr] \alpha_n^p \quad (4.32)$$

with $c_{28} = c_{26} + c_{27}$. By (4.32) and Assumption 2.1,

$$\|x_{n+1} - x^*\|_X \le \Bigl[c_{23}\, l_0 \|v\|_X\, r^p + c_{25}\, l_0^{-1} r^p + c_{28}\, \|v\|_X\, r^p\Bigr]\, l_0 \|v\|_X\, \alpha_{n+1}^p. \quad (4.33)$$
From (4.10), (4.11), and (4.33), by induction, we arrive at the following theorem on the convergence of iterations (4.1).

Theorem 4.1. Let (4.11) and Assumptions 2.1 and 4.1--4.4 be fulfilled. Suppose that the error $x^* - \xi$ possesses the sourcewise representation (4.7) with $p \in [1, \bar p]$. Let $l_0 > c_{25}\, r^p$. Assume that

$$\|x_0 - x^*\|_X \le l_0 \|v\|_X\, \alpha_0^p$$

and

$$\|v\|_X \le \min\Bigl\{\frac{l_0 - c_{25}\, r^p}{(c_{23}\, l_0 + c_{28})\, r^p\, l_0},\ \frac{C_3\, \varkappa_0}{\alpha_0^{p-1} N_2 C_2\, l_0},\ R\, (l_0\, \alpha_0^p)^{-1}\Bigr\}.$$

Then, for all $n = 0, 1, \dots$ the following estimate is valid:

$$\|x_n - x^*\|_X \le l_0 \|v\|_X\, \alpha_n^p. \quad (4.34)$$

Corollary 4.1. Let the hypotheses of Theorem 4.1 be fulfilled. Then,

$$\lim_{n \to \infty} \|x_n - x^*\|_X = 0.$$
5. Necessity of the sourcewise representation for estimating the rate of convergence of iterative methods. In this section, under appropriate conditions on the generating function $\Theta(\lambda, \alpha)$, we prove that representation (4.7) is almost necessary for (4.34). In other words, we prove that the estimate

$$\|x_n - x^*\|_X \le C_5\, \alpha_n^p, \quad n = 0, 1, \dots; \quad p \ge 1, \quad (5.1)$$

where $C_5$ is an absolute constant, implies that $x^* - \xi \in R\bigl(F'(x^*)^{p - \varepsilon}\bigr)$ for each $\varepsilon \in (0, p)$ (cf. Theorem 3.1).

Suppose Assumptions 2.1 and 4.1 are fulfilled. Increasing (if necessary) $C_5$, without loss of generality we may assume that conditions (4.6) hold for all $n = 0, 1, \dots$. Therefore, the assertion of Lemma 4.2 remains true, and representation (4.3) is valid with $\Gamma_n = \Gamma_{\alpha_n}$, $n = 0, 1, \dots$. Equality (4.12) implies

$$\Bigl\|\Bigl[E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\Bigr](x^* - \xi)\Bigr\|_X \le \|x_{n+1} - x^*\|_X + \bigl\|\Theta\bigl(F'(x_n), \alpha_n\bigr)\bigr\|_{L(X)} \|G(x_n)\|_X$$
$$+ \Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr](x^* - \xi)\Bigr\|_X. \quad (5.2)$$

By Assumption 2.1, (4.13), (4.17), (4.18), and (5.1), we have

$$\|x_{n+1} - x^*\|_X + \bigl\|\Theta\bigl(F'(x_n), \alpha_n\bigr)\bigr\|_{L(X)} \|G(x_n)\|_X \le c_{29}\, \alpha_n^p. \quad (5.3)$$
The third term in (5.2) can be estimated as follows:

$$\Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr](x^* - \xi)\Bigr\|_X$$
$$= \Bigl\|\Bigl[\bigl(E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\bigr) - \bigl(E - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\bigr)\Bigr](x^* - \xi)\Bigr\|_X$$
$$\le \frac{1}{2\pi} \|x^* - \xi\|_X \int_{\Gamma_{\alpha_n}} |\Theta(\lambda, \alpha_n)\lambda - 1|\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr) - R\bigl(\lambda, F'(x_n)\bigr)\bigr\|_{L(X)}\, |d\lambda|. \quad (5.4)$$

Using (5.1) and the inequality $|\lambda| > C_3\alpha_n$ for all $\lambda \in \Gamma_{\alpha_n}$, as in (4.28) (with $m = 0$), for all $\lambda \in \Gamma_{\alpha_n}$ we get

$$\bigl\|R\bigl(\lambda, F'(x^*)\bigr) - R\bigl(\lambda, F'(x_n)\bigr)\bigr\|_{L(X)} \le \frac{c_{30}\, \|x_n - x^*\|_X}{|\lambda|^2} \le \frac{c_{31}\, \alpha_n^{p-1}}{|\lambda|}. \quad (5.5)$$

Letting $\tau = 0$ in (4.21), from (5.4) and (5.5) we infer

$$\Bigl\|\Bigl[\Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n), \alpha_n\bigr) F'(x_n)\Bigr](x^* - \xi)\Bigr\|_X \le c_{32}\, \alpha_n^{p-1}. \quad (5.6)$$
For $\alpha > 0$, denote (cf. (3.6))

$$y_\alpha = \Bigl[E - \Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*)\Bigr](x^* - \xi), \quad \Phi(\alpha) = \|y_\alpha\|_X. \quad (5.7)$$

Combining (5.2), (5.3), (5.6), and (5.7), we arrive at the following assertion.

Lemma 5.1. Let (5.1) be fulfilled. Then,

$$\Phi(\alpha_n) \le c_{33}\, \alpha_n^{p-1}, \quad n = 0, 1, \dots. \quad (5.8)$$
From here on, it is convenient to deal with a continuous regularization parameter $\alpha > 0$ rather than with the sequence $\{\alpha_n\}$. We denote

$$\gamma_\alpha = \mathrm{fr}\, K_\alpha(R_0, C_3, \varphi_0), \quad \alpha > 0.$$

Assumption 4.2 implies that the operator

$$\Theta\bigl(F'(x^*), \alpha\bigr) = \frac{1}{2\pi i} \int_{\gamma_\alpha} \Theta(\lambda, \alpha)\, R\bigl(\lambda, F'(x^*)\bigr)\, d\lambda \quad (5.9)$$

is well defined and bounded for all $\alpha > 0$. Our further restrictions on $\Theta(\lambda, \alpha)$ are as follows.

Assumption 5.1. For all $\alpha > 0$, $1 - \Theta(\lambda, \alpha)\lambda \ne 0$ for all $\lambda \in K_\alpha(R_0, C_3, \varphi_0)$.

Given $\alpha, \beta > 0$, denote

$$\Psi(\lambda; \alpha, \beta) = \frac{1 - \Theta(\lambda, \alpha)\lambda}{1 - \Theta(\lambda, \beta)\lambda}, \quad \lambda \in \mathbb{C}.$$

Assumption 5.2. There exists a constant $r_0 > 0$ such that

$$\int_{\gamma_{\alpha_{n+1}}} \frac{|\Psi(\lambda; \alpha, \alpha_n)|}{|\lambda|}\, |d\lambda| \le c_{34}\bigl(1 + |\ln \alpha_n|^{r_0}\bigr) \quad \forall \alpha \in (\alpha_{n+1}, \alpha_n],\ n = 0, 1, \dots. \quad (5.10)$$
Due to Assumption 5.1, the operator $\Psi\bigl(F'(x^*); \alpha, \beta\bigr)$ is well defined and bounded for all $\alpha, \beta > 0$. By (5.8) and (5.10), for all $\alpha \in (\alpha_{n+1}, \alpha_n]$ we have

$$\Phi(\alpha) = \Bigl\|\bigl(E - \Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*)\bigr)\bigl(E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\bigr)^{-1}\bigl(E - \Theta\bigl(F'(x^*), \alpha_n\bigr) F'(x^*)\bigr)(x^* - \xi)\Bigr\|_X$$
$$\le \bigl\|\Psi\bigl(F'(x^*); \alpha, \alpha_n\bigr)\bigr\|_{L(X)}\, \Phi(\alpha_n) \le \frac{1}{2\pi}\, \Phi(\alpha_n) \int_{\gamma_{\alpha_{n+1}}} |\Psi(\lambda; \alpha, \alpha_n)|\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)}\, |d\lambda|$$
$$\le c_{35}\bigl(1 + |\ln \alpha_n|^{r_0}\bigr)\, \alpha_n^{p-1} \quad (5.11)$$

(see [12]). Let $\nu \in (0, p - 1)$. From Assumption 2.1 and (5.11), for all $\alpha \in (\alpha_{n+1}, \alpha_n]$ we get

$$\frac{\Phi(\alpha)}{\alpha^\nu} = \frac{\Phi(\alpha)\, \alpha^{p-1-\nu}}{\alpha^{p-1}} \le c_{35}\bigl(1 + |\ln \alpha_n|^{r_0}\bigr)\, \alpha_n^{p-1-\nu} \Bigl(\frac{\alpha_n}{\alpha_{n+1}}\Bigr)^{p-1} \le c_{36}$$

with a constant $c_{36}$ depending on the chosen $\nu$. Thus, we have proved the following proposition.

Lemma 5.2. Let (5.1) and Assumptions 2.1, 5.1, and 5.2 be fulfilled. Then,

$$\Phi(\alpha) \le c_{36}\, \alpha^\nu \quad \forall \alpha \in (0, \alpha_0]. \quad (5.12)$$
We continue with the following two additional assumptions.

Assumption 5.3. There exists $s_0 > 0$ such that for all $s \in (0, s_0]$

$$\sup_{\alpha \in [\alpha_0, \infty)} \Bigl[\alpha^{-s} \int_{\gamma_\alpha} \frac{|\Theta(\lambda, \alpha)\lambda - 1|}{|\lambda|}\, |d\lambda|\Bigr] < \infty. \quad (5.13)$$
Assumption 5.4. There exists $\varepsilon_0 > 0$ such that the function $\Theta(\lambda, \alpha)$ is continuous in $(\lambda, \alpha)$ on the set

$$D(R_0, \varepsilon_0, C_3, \varphi_0) = \{(\lambda, \alpha) : \lambda \in K_\alpha(R_0 + \varepsilon_0, C_3, \varphi_0),\ \alpha > 0\}.$$

Due to Assumption 5.4, the mapping $\alpha \mapsto y_\alpha$ is continuous from $(0, \infty)$ into $X$. By (5.7), (5.9), (5.12), and (5.13), for all $q$ with $0 < q < \min\{\frac{2}{3}\nu, 2 s_0\}$ we have

$$\int_0^{\infty} \alpha^{-\nu - 1 + q}\, \|y_\alpha\|_X\, d\alpha = \int_0^{\alpha_0} \alpha^{-\nu - 1 + q}\, \Phi(\alpha)\, d\alpha + \int_{\alpha_0}^{\infty} \alpha^{-\nu - 1 + q}\, \Phi(\alpha)\, d\alpha$$
$$\le c_{36} \int_0^{\alpha_0} \alpha^{-1 + q}\, d\alpha + c_{37} \int_{\alpha_0}^{\infty} \alpha^{-\nu - 1 + \frac{3}{2} q} \Bigl[\alpha^{-\frac{q}{2}} \int_{\gamma_\alpha} \frac{|\Theta(\lambda, \alpha)\lambda - 1|}{|\lambda|}\, |d\lambda|\Bigr] d\alpha < \infty. \quad (5.14)$$

Therefore, the integral

$$w_q = \int_0^{\infty} \alpha^{-\nu - 1 + q}\, y_\alpha\, d\alpha \quad (5.15)$$

exists in the Bochner sense and represents an element $w_q \in X$.
Our next aim is to prove that for all $\varepsilon \in (0, \varepsilon_0]$

$$\bigl(F'(x^*) + \varepsilon E\bigr)^{\nu - q}\, w_q^{(\varepsilon)} = C(\nu, q)\, (x^* - \xi) \quad (5.16)$$

with

$$w_q^{(\varepsilon)} = \int_0^{\infty} \alpha^{-\nu - 1 + q} \Bigl[E - \Theta\bigl(F'(x^*) + \varepsilon E, \alpha\bigr)\bigl(F'(x^*) + \varepsilon E\bigr)\Bigr](x^* - \xi)\, d\alpha \quad (5.17)$$

and $C(\nu, q) \ne 0$. To begin with, we establish the existence of the Bochner integral

$$u_q^{(\varepsilon)} = \int_0^{\infty} \alpha^{-\nu - 1 + q} \Bigl[\Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*) - \Theta\bigl(F'(x^*) + \varepsilon E, \alpha\bigr)\bigl(F'(x^*) + \varepsilon E\bigr)\Bigr](x^* - \xi)\, d\alpha \quad (5.18)$$
for all $\varepsilon \in (0, \varepsilon_0]$. The spectral mapping theorem [12, Ch. XI, Section 1] and Assumption 5.1 imply that the operator $E - \Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*)$ has a continuous inverse for all $\alpha > 0$. Therefore,

$$\int_0^{\infty} \alpha^{-\nu - 1 + q} \Bigl[\Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*) - \Theta\bigl(F'(x^*) + \varepsilon E, \alpha\bigr)\bigl(F'(x^*) + \varepsilon E\bigr)\Bigr](x^* - \xi)\, d\alpha$$
$$= \int_0^{\infty} \alpha^{-\nu - 1 + q} \Bigl[\Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*) - \Theta\bigl(F'(x^*) + \varepsilon E, \alpha\bigr)\bigl(F'(x^*) + \varepsilon E\bigr)\Bigr]$$
$$\times \bigl(E - \Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*)\bigr)^{-1}\bigl(E - \Theta\bigl(F'(x^*), \alpha\bigr) F'(x^*)\bigr)(x^* - \xi)\, d\alpha$$
$$= \int_0^{\infty} \alpha^{-\nu - 1 + q}\, \chi\bigl(F'(x^*); \alpha, \varepsilon\bigr)\, y_\alpha\, d\alpha. \quad (5.19)$$

In (5.19) we have introduced the notation

$$\chi(\lambda; \alpha, \varepsilon) = \frac{\Theta(\lambda, \alpha)\lambda - \Theta(\lambda + \varepsilon, \alpha)(\lambda + \varepsilon)}{1 - \Theta(\lambda, \alpha)\lambda}. \quad (5.20)$$

Due to (5.20) and Assumptions 5.1 and 5.4, the function $\chi(\lambda; \alpha, \varepsilon)$ is continuous in $(\lambda, \alpha)$ on $D(R_0, 0, C_3, \varphi_0)$ for each $\varepsilon \in (0, \varepsilon_0]$. Hence, the operator function $\chi\bigl(F'(x^*); \alpha, \varepsilon\bigr)$ is continuous in $\alpha$ for $\alpha > 0$.
To prove the existence of the Bochner integrals in (5.18) and (5.19), it remains to establish the convergence of the Lebesgue integral on the right-hand side of the estimate

$$\int_0^{\infty} \alpha^{-\nu - 1 + q}\, \bigl\|\chi\bigl(F'(x^*); \alpha, \varepsilon\bigr)\, y_\alpha\bigr\|_X\, d\alpha \le \int_0^{\infty} \alpha^{-\nu - 1 + q}\, \Phi(\alpha)\, \bigl\|\chi\bigl(F'(x^*); \alpha, \varepsilon\bigr)\bigr\|_{L(X)}\, d\alpha. \quad (5.21)$$

By virtue of (4.2),

$$\bigl\|\chi\bigl(F'(x^*); \alpha, \varepsilon\bigr)\bigr\|_{L(X)} \le \frac{1}{2\pi} \int_{\gamma_\alpha} |\chi(\lambda; \alpha, \varepsilon)|\, \bigl\|R\bigl(\lambda, F'(x^*)\bigr)\bigr\|_{L(X)}\, |d\lambda| \le c_{38} \int_{\gamma_\alpha} \frac{|\chi(\lambda; \alpha, \varepsilon)|}{|\lambda|}\, |d\lambda|.$$
With this in mind, let us supplement the previous assumptions on $\Theta(\lambda, \alpha)$ by the following one.

Assumption 5.5. There exist $\varepsilon_1 \in (0, \varepsilon_0]$ and $t_0 > 0$ such that for all $\varepsilon \in (0, \varepsilon_1]$

$$\int_{\gamma_\alpha} \frac{|\chi(\lambda; \alpha, \varepsilon)|}{|\lambda|}\, |d\lambda| = M(\alpha, \varepsilon) < \infty \quad \forall \alpha > 0. \quad (5.22)$$

Additionally, assume that for all $t \in (0, t_0]$

$$\sup_{\alpha \in (0, \alpha_0]} \bigl[\alpha^t M(\alpha, \varepsilon)\bigr] + \sup_{\alpha \in [\alpha_0, \infty)} \bigl[\alpha^{-t} M(\alpha, \varepsilon)\bigr] < \infty \quad (5.23)$$

and

$$\sup_{\alpha \in (0, \alpha_0]} \Bigl[\alpha^t \int_0^{\varepsilon_1} \frac{M(\alpha, \varepsilon)}{\varepsilon}\, d\varepsilon\Bigr] + \sup_{\alpha \in [\alpha_0, \infty)} \Bigl[\alpha^{-t} \int_0^{\varepsilon_1} \frac{M(\alpha, \varepsilon)}{\varepsilon}\, d\varepsilon\Bigr] < \infty. \quad (5.24)$$
From (5.12) -- (5.14), (5.21), and (5.23), for all $q$ with $0 < q < \min\{\kappa/2,\ 2s_0\}$ we get

\int_0^{\infty} \alpha^{-\kappa-1+q}\, \Phi(\alpha)\, \|\psi(F'(x^*),\alpha,\varepsilon)\|_{L(X)}\, d\alpha
  \le c_{38} \Bigl[\int_0^{\alpha_0} \alpha^{-\kappa-1+\frac{q}{2}} \Phi(\alpha) \bigl[\alpha^{\frac{q}{2}} M(\alpha,\varepsilon)\bigr]\, d\alpha + \int_{\alpha_0}^{\infty} \alpha^{-\kappa-1+\frac{3}{2}q} \Phi(\alpha) \bigl[\alpha^{-\frac{q}{2}} M(\alpha,\varepsilon)\bigr]\, d\alpha\Bigr]
  \le c_{39} \Bigl[\int_0^{\alpha_0} \alpha^{-1+\frac{q}{2}}\, d\alpha + \int_{\alpha_0}^{\infty} \alpha^{-\kappa-1+2q} \Bigl[\alpha^{-\frac{q}{2}} \int_{\gamma_\alpha} \frac{|\Theta(\lambda,\alpha)\lambda - 1|}{|\lambda|}\,|d\lambda|\Bigr] d\alpha\Bigr] < \infty.
Since $w_q^{(\varepsilon)} = u_q^{(\varepsilon)} + w_q$, the integrals in (5.17) and (5.18) exist for all $\varepsilon \in (0,\varepsilon_1]$.
Using (5.19), (5.22), and the Fubini theorem, we come to the estimate

\int_0^{\varepsilon_1} \frac{\|w_q^{(\varepsilon)} - w_q\|_X}{\varepsilon}\, d\varepsilon
  \le \int_0^{\varepsilon_1} \varepsilon^{-1} \Bigl[\int_0^{\infty} \alpha^{-\kappa-1+q}\, \Phi(\alpha)\, \|\psi(F'(x^*),\alpha,\varepsilon)\|_{L(X)}\, d\alpha\Bigr] d\varepsilon
  \le c_{40} \Bigl[\int_0^{\alpha_0} \alpha^{-\kappa-1+\frac{q}{2}} \Phi(\alpha) \Bigl[\alpha^{\frac{q}{2}} \int_0^{\varepsilon_1} \frac{M(\alpha,\varepsilon)}{\varepsilon}\, d\varepsilon\Bigr] d\alpha
    + \int_{\alpha_0}^{\infty} \alpha^{-\kappa-1+\frac{3}{2}q} \Phi(\alpha) \Bigl[\alpha^{-\frac{q}{2}} \int_0^{\varepsilon_1} \frac{M(\alpha,\varepsilon)}{\varepsilon}\, d\varepsilon\Bigr] d\alpha\Bigr]
for all $q \in (0,q_0)$, where $q_0 = \min\{\kappa/2,\ 2s_0,\ 2t_0\}$. Therefore, from (5.12) and (5.24) we get

\int_0^{\varepsilon_1} \frac{\|w_q^{(\varepsilon)} - w_q\|_X}{\varepsilon}\, d\varepsilon \le c_{41} \Bigl[\int_0^{\alpha_0} \alpha^{-1+\frac{q}{2}}\, d\alpha + \int_{\alpha_0}^{\infty} \alpha^{-\kappa-1+\frac{3}{2}q}\, d\alpha\Bigr] < \infty.   (5.25)
The following result is a trivial consequence of the foregoing discussion.
Lemma 5.3. Let Assumptions 2.1, 4.1, 4.2, and 5.1 -- 5.5 be fulfilled and $q \in (0,q_0)$. Then there exists a sequence $\{\varepsilon_n\}$ such that $\varepsilon_n > 0$, $\lim_{n\to\infty} \varepsilon_n = 0$, and

\lim_{n\to\infty} \|w_q^{(\varepsilon_n)} - w_q\|_X = 0.
Proof. Suppose, on the contrary, that there exist $\omega_0 > 0$ and $\varepsilon_2 \in (0,\varepsilon_1]$ such that

\|w_q^{(\varepsilon)} - w_q\|_X > \omega_0 > 0 \qquad \forall \varepsilon \in (0,\varepsilon_2].

Then, contrary to (5.25), we obtain

\int_0^{\varepsilon_1} \frac{\|w_q^{(\varepsilon)} - w_q\|_X}{\varepsilon}\, d\varepsilon > \int_0^{\varepsilon_2} \frac{\omega_0}{\varepsilon}\, d\varepsilon = \infty.

This completes the proof.
Denote $m = [\kappa - q]$. Assuming $q > 0$ to be small enough, we get $\kappa - q \in (m, m+1)$. According to (4.8),

\bigl(F'(x^*) + \varepsilon E\bigr)^{\kappa-q} w_q^{(\varepsilon)} = (-1)^m\, \frac{\sin \pi(\kappa-q)}{\pi} \int_0^{\infty} t^{\kappa-q-m-1} \bigl(F'(x^*) + (t+\varepsilon)E\bigr)^{-1} \bigl(F'(x^*) + \varepsilon E\bigr)^{m+1} w_q^{(\varepsilon)}\, dt.
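As a brief aside (ours, not the authors'), the fractional-power representation invoked here can be sanity-checked numerically in the finite-dimensional case. The sketch below assumes that (4.8) is the classical Balakrishnan integral, coded in its $m = 0$ case for a positive definite matrix; the test matrix, the quadrature grid, and the exponent are all illustrative choices.

    import numpy as np

    # Hedged sketch of the Balakrishnan-type integral for a fractional power,
    #   A^mu = (sin(pi*mu)/pi) * int_0^inf t^(mu-1) (A + t I)^(-1) A dt,
    # i.e. the m = 0 case of the representation used in (4.8).
    def frac_power(A, mu, n=6000):
        t = np.logspace(-8, 12, n)          # log grid covers both endpoints
        I = np.eye(A.shape[0])
        vals = np.array([s**(mu - 1.0) * np.linalg.solve(A + s * I, A) for s in t])
        return (np.sin(np.pi * mu) / np.pi) * np.trapz(vals, t, axis=0)

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    A = B @ B.T + 4.0 * np.eye(4)           # positive definite test matrix
    w, V = np.linalg.eigh(A)
    exact = V @ np.diag(w**0.6) @ V.T       # spectral-calculus reference
    print(np.linalg.norm(frac_power(A, 0.6) - exact))   # should be small

We now return to the proof.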
Hence, from (5.17) one gets

\bigl(F'(x^*) + \varepsilon E\bigr)^{\kappa-q} w_q^{(\varepsilon)} = (-1)^m\, \frac{\sin \pi(\kappa-q)}{\pi} \int_0^{\infty}\!\!\int_0^{\infty} t^{\kappa-q-m-1}\, \alpha^{-\kappa-1+q}
  \times \bigl(F'(x^*) + (t+\varepsilon)E\bigr)^{-1} \bigl(F'(x^*) + \varepsilon E\bigr)^{m+1} \bigl[E - \Theta\bigl(F'(x^*)+\varepsilon E,\ \alpha\bigr)\bigl(F'(x^*)+\varepsilon E\bigr)\bigr](x^* - \xi)\, d\alpha\, dt.   (5.26)
Given positive $r, r_1, r_2$ and $z, z_1, z_2 \in \mathbf{C}$ such that $r_1 \le r_2$, $z \ne 0$, $\arg z_1 \le \arg z_2$, we denote

\Gamma_r(z_1,z_2) = \{\zeta \in \mathbf{C}\colon\ |\zeta| = r,\ \arg z_1 \le \arg\zeta \le \arg z_2\},
\Gamma_{(r_1,r_2)}(z) = \{\zeta \in \mathbf{C}\colon\ \arg\zeta = \arg z,\ r_1 \le |\zeta| \le r_2\}.
Let $Z(\lambda,t,\alpha) = (t+\lambda)^{-1} \lambda^{m+1} \bigl(1 - \Theta(\lambda,\alpha)\lambda\bigr)$. To specialize (1.5) for the operator

\bigl(F'(x^*) + (t+\varepsilon)E\bigr)^{-1} \bigl(F'(x^*) + \varepsilon E\bigr)^{m+1} \bigl[E - \Theta\bigl(F'(x^*)+\varepsilon E,\ \alpha\bigr)\bigl(F'(x^*)+\varepsilon E\bigr)\bigr] = Z\bigl(F'(x^*) + \varepsilon E,\ t,\ \alpha\bigr),

we define the contour $\gamma \subset \mathbf{C}$ as $\gamma = \Gamma^{(\varepsilon)}$, where

\Gamma^{(\varepsilon)} = \Gamma_{\varepsilon/2}\bigl(e^{-i\varphi_0}, e^{i\varphi_0}\bigr) \cup \Gamma_{R_0}\bigl(e^{-i\varphi_0}, e^{i\varphi_0}\bigr) \cup \Gamma_{(\varepsilon/2,\,R_0)}\bigl(e^{i\varphi_0}\bigr) \cup \Gamma_{(\varepsilon/2,\,R_0)}\bigl(e^{-i\varphi_0}\bigr).
It is not difficult to see that $\Gamma^{(\varepsilon)}$ surrounds the spectrum $\sigma\bigl(F'(x^*)+\varepsilon E\bigr) = \{\lambda + \varepsilon\colon\ \lambda \in \sigma(F'(x^*))\}$ and, for all $t, \alpha > 0$, lies in a domain where the function $Z(\lambda,t,\alpha)$ is analytic in $\lambda$. Then, by (1.5) and (5.26),

\bigl(F'(x^*) + \varepsilon E\bigr)^{\kappa-q} w_q^{(\varepsilon)} = D(\kappa,q) \int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_{\Gamma^{(\varepsilon)}} \alpha^{-\kappa-1+q}\, t^{\kappa-q-m-1}\, Z(\lambda,t,\alpha)\, R\bigl(\lambda,\ F'(x^*)+\varepsilon E\bigr)(x^* - \xi)\, d\lambda\, d\alpha\, dt,   (5.27)

where

D(\kappa,q) = (-1)^m\, \frac{\sin \pi(\kappa-q)}{2\pi^2 i}.
Since for each $\varepsilon \in (0,\varepsilon_1]$

\sup_{\lambda\in\Gamma^{(\varepsilon)}} \|R(\lambda,\ F'(x^*)+\varepsilon E)(x^* - \xi)\|_X = E(\varepsilon) < \infty,

we have

J = \int_{\Gamma^{(\varepsilon)}} \Bigl[\int_0^{\infty}\!\!\int_0^{\infty} \alpha^{-\kappa-1+q}\, t^{\kappa-q-m-1}\, |Z(\lambda,t,\alpha)|\, \|R\bigl(\lambda,\ F'(x^*)+\varepsilon E\bigr)(x^* - \xi)\|_X\, d\alpha\, dt\Bigr]\, |d\lambda|
  \le E(\varepsilon) \int_{\Gamma^{(\varepsilon)}} |\lambda|^{m+1} \Bigl[\int_0^{\infty} \alpha^{-\kappa-1+q}\, |\Theta(\lambda,\alpha)\lambda - 1|\, d\alpha\Bigr] \Bigl[\int_0^{\infty} t^{\kappa-q-m-1}\, |t+\lambda|^{-1}\, dt\Bigr]\, |d\lambda|.   (5.28)
To estimate the first inner integral in (5.28), we impose on $\Theta(\lambda,\alpha)$ the following restriction.

Assumption 5.6. The function $g(\zeta) = 1 - \Theta(\lambda,\ \lambda\zeta)\,\lambda$ does not depend on $\lambda$ for $\lambda \in K(\varphi_0)\setminus\{0\}$, and $g(\zeta)$ is analytic on an open set $D_0 \supset K(\varphi_0)\setminus\{0\}$. Moreover, for all $\omega \in (0,p)$

\sup_{|\varphi|\le\varphi_0} \int_0^{\infty} \tau^{-\omega-1}\, |g(e^{i\varphi}\tau)|\, d\tau = N(\omega) < \infty   (5.29)

and

\lim_{r\to 0+} r^{-\omega-1} \int_{\Gamma_r(e^{-i\varphi_0},\,e^{i\varphi_0})} |g(\zeta)|\, |d\zeta| = 0,   (5.30)

\lim_{R\to\infty} R^{-\omega-1} \int_{\Gamma_R(e^{-i\varphi_0},\,e^{i\varphi_0})} |g(\zeta)|\, |d\zeta| = 0.   (5.31)
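For orientation, here is a worked example of our own (not taken from the text), based on the generating function $\Theta(\lambda,\alpha) = (\lambda+\alpha)^{-1}$ that produces iteration (6.3) in Section 6. For this $\Theta$,

g(\zeta) = 1 - \Theta(\lambda,\ \lambda\zeta)\,\lambda = 1 - \frac{\lambda}{\lambda + \lambda\zeta} = \frac{\zeta}{1+\zeta},

which indeed does not depend on $\lambda$ and is analytic on $\mathbf{C}\setminus\{-1\} \supset K(\varphi_0)\setminus\{0\}$.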
Using the substitutions $\alpha = |\lambda|\tau$ and $t = |\lambda|\tau$ (in the first and second integrals, respectively) together with (5.29), we estimate the inner integrals in (5.28) as follows:

\int_0^{\infty} \alpha^{-\kappa-1+q}\, |1 - \Theta(\lambda,\alpha)\lambda|\, d\alpha = |\lambda|^{-\kappa+q} \int_0^{\infty} \tau^{-\kappa-1+q}\, |g(e^{-i\arg\lambda}\tau)|\, d\tau \le N(\kappa-q)\, |\lambda|^{-\kappa+q},   (5.32)

\int_0^{\infty} t^{\kappa-q-m-1}\, |t+\lambda|^{-1}\, dt = |\lambda|^{\kappa-q-m-1} \int_0^{\infty} \tau^{\kappa-q-m-1}\, \bigl|\tau + |\lambda|^{-1}\lambda\bigr|^{-1}\, d\tau
  \le |\lambda|^{\kappa-q-m-1} \int_0^{\infty} \tau^{\kappa-q-m-1} \bigl[\max\{\sin\varphi_0,\ |\tau + \cos\varphi_0|\}\bigr]^{-1}\, d\tau =: P(\kappa,q)\, |\lambda|^{\kappa-q-m-1}   (5.33)
for all $\lambda \in \Gamma^{(\varepsilon)}$ and all $q \in (0,q_0)$. Combining (5.28), (5.32), and (5.33), we obtain $J < \infty$. By the Fubini theorem for Bochner integrals ([18, Ch. V, Section 8]), we can change the order of integration in (5.27). Hence, for all $\varepsilon \in (0,\varepsilon_1]$

\bigl(F'(x^*) + \varepsilon E\bigr)^{\kappa-q} w_q^{(\varepsilon)} = D(\kappa,q) \int_{\Gamma^{(\varepsilon)}} \lambda^{m+1} \Bigl[\int_0^{\infty} \alpha^{-\kappa-1+q} \bigl(1 - \Theta(\lambda,\alpha)\lambda\bigr)\, d\alpha\Bigr]
  \times \Bigl[\int_0^{\infty} t^{\kappa-q-m-1}\, (t+\lambda)^{-1}\, dt\Bigr]\, R\bigl(\lambda,\ F'(x^*)+\varepsilon E\bigr)(x^* - \xi)\, d\lambda.   (5.34)
For a complex $z \ne 0$ denote

\Lambda(z) = \{\zeta \in \mathbf{C}\colon\ \zeta = tz,\ t > 0\}.

Setting $\alpha = \lambda\zeta$, we rewrite the first inner integral in (5.34) as

\int_0^{\infty} \alpha^{-\kappa-1+q} \bigl(1 - \Theta(\lambda,\alpha)\lambda\bigr)\, d\alpha = \lambda^{-\kappa+q} \int_{\Lambda(\bar\lambda)} \zeta^{-\kappa-1+q}\, g(\zeta)\, d\zeta   (5.35)

with the integration over $\Lambda(\bar\lambda)$ running from $\zeta = 0$ to $\zeta = \infty$. Let $G(\lambda,\kappa,q) = \int_{\Lambda(\bar\lambda)} \zeta^{-\kappa-1+q} g(\zeta)\, d\zeta$. In fact, the value of $G(\lambda,\kappa,q)$ does not depend on $\lambda \in \Gamma^{(\varepsilon)}$. For the proof, take arbitrary $\lambda_1, \lambda_2 \in \Gamma^{(\varepsilon)}$ such that $\arg\bar\lambda_1 < \arg\bar\lambda_2$ and denote

\Gamma_{(r,R)}(\bar\lambda_1, \bar\lambda_2) = \Gamma_r(\bar\lambda_1, \bar\lambda_2) \cup \Gamma_R(\bar\lambda_1, \bar\lambda_2) \cup \Gamma_{(r,R)}(\bar\lambda_1) \cup \Gamma_{(r,R)}(\bar\lambda_2)

with $0 < r < R$. Since $g(\zeta)$ is analytic on a domain $D_0 \supset \Gamma_{(r,R)}(\bar\lambda_1, \bar\lambda_2)$, we have

\int_{\Gamma_{(r,R)}(\bar\lambda_1,\bar\lambda_2)} \zeta^{-\kappa-1+q} g(\zeta)\, d\zeta = \Bigl(\int_{\Gamma_r(\bar\lambda_1,\bar\lambda_2)} + \int_{\Gamma_R(\bar\lambda_1,\bar\lambda_2)} + \int_{\Gamma_{(r,R)}(\bar\lambda_1)} + \int_{\Gamma_{(r,R)}(\bar\lambda_2)}\Bigr) \zeta^{-\kappa-1+q} g(\zeta)\, d\zeta = 0.

Passing to the limits as $r \to 0+$ and $R \to \infty$ and using (5.30) and (5.31), we get

\int_{\Lambda(\bar\lambda_1)} \zeta^{-\kappa-1+q} g(\zeta)\, d\zeta = \int_{\Lambda(\bar\lambda_2)} \zeta^{-\kappa-1+q} g(\zeta)\, d\zeta,
i.e., $G(\lambda,\kappa,q) \equiv G(\kappa,q)$ does not depend on $\lambda \in \Gamma^{(\varepsilon)}$. Similarly, for the second inner integral in (5.34) we obtain

\int_0^{\infty} t^{\kappa-q-m-1}\, (t+\lambda)^{-1}\, dt = \lambda^{\kappa-q-m-1} \int_{\Lambda(\bar\lambda)} \zeta^{\kappa-q-m-1}\, (1+\zeta)^{-1}\, d\zeta \equiv \lambda^{\kappa-q-m-1}\, H(\kappa,q).   (5.36)
From (5.34) -- (5.36) it follows that

\bigl(F'(x^*) + \varepsilon E\bigr)^{\kappa-q} w_q^{(\varepsilon)} = D(\kappa,q)\, G(\kappa,q)\, H(\kappa,q) \int_{\Gamma^{(\varepsilon)}} R\bigl(\lambda,\ F'(x^*)+\varepsilon E\bigr)(x^* - \xi)\, d\lambda = C(\kappa,q)\,(x^* - \xi)

with $C(\kappa,q) = 2\pi i\, D(\kappa,q)\, G(\kappa,q)\, H(\kappa,q)$. Thus, we have obtained the following lemma.

Lemma 5.4. Let Assumptions 2.1, 4.1, 4.2, and 5.1 -- 5.6 be fulfilled. Then equality (5.16) holds with a nonzero constant $C(\kappa,q)$.
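Although the lemma is stated for general Banach spaces, its conclusion can be sanity-checked numerically in the simplest finite-dimensional setting. The sketch below is ours, not the authors': it assumes $\Theta(\lambda,\alpha) = (\lambda+\alpha)^{-1}$ (the generating function behind iteration (6.3) below), takes $X = \mathbf{R}^4$, $\xi = 0$, and a symmetric positive definite matrix in the role of $F'(x^*)$; for this $\Theta$ the constant $C(\kappa,q)$ is available in closed form, so the $\varepsilon$-independence asserted by (5.16) can be tested directly.

    import numpy as np

    # Hedged check of Lemma 5.4 with Theta(lambda, a) = (lambda + a)^(-1):
    #   w_q^(eps) = int_0^inf a^(q-kappa) (A_eps + a I)^(-1) x* da,
    #   A_eps = A + eps I,
    # and (A_eps)^(kappa-q) w_q^(eps) should equal C(kappa, q) x* for every eps,
    # with C(kappa, q) = pi / sin(pi (1 + q - kappa)) for this Theta.
    rng = np.random.default_rng(1)
    B = rng.standard_normal((4, 4))
    A = B @ B.T + 3.0 * np.eye(4)
    x_star = rng.standard_normal(4)
    kappa, q = 0.7, 0.2                        # kappa - q in (0, 1), so m = 0
    C_expected = np.pi / np.sin(np.pi * (1.0 + q - kappa))

    def lemma54_residual(eps, n=6000):
        A_eps = A + eps * np.eye(4)
        a = np.logspace(-8, 8, n)              # quadrature grid for alpha
        vals = np.array([s**(q - kappa) * np.linalg.solve(A_eps + s * np.eye(4), x_star)
                         for s in a])
        w = np.trapz(vals, a, axis=0)          # approximate w_q^(eps)
        lam, V = np.linalg.eigh(A_eps)
        lhs = V @ np.diag(lam**(kappa - q)) @ V.T @ w
        return np.linalg.norm(lhs - C_expected * x_star) / np.linalg.norm(x_star)

    print(lemma54_residual(1e-2), lemma54_residual(1e-4))  # both should be small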
We can now prove the main result of this section. The next assertion is, in a certain sense, a converse of Theorem 4.1.

Theorem 5.1. Let Assumptions 2.1, 4.1, 4.2, and 5.1 -- 5.6 be fulfilled. Assume that iterations (4.1) generate a sequence $\{x_n\}$ such that estimate (5.1) holds. Then, for each $\varepsilon \in (0,p)$,

x^* - \xi \in R\bigl(F'(x^*)^{p-\varepsilon}\bigr).
Proof. The proof is divided into several steps.

1) First, for all $\kappa \in (0,p-1)$ and $q \in (0,q_1]$, $q_1 \in (0,q_0]$, we establish the equality

\bigl(F'(x^*) + \varepsilon_n E\bigr)^{\kappa-q} w_q^{(\varepsilon_n)} = F'(x^*)^{\kappa-q}\, w_q,   (5.37)

where $w_q^{(\varepsilon)}$ and $w_q$ are taken from (5.17) and (5.15), and $\{\varepsilon_n\}$ is the sequence defined in Lemma 5.3. Assuming $q$ to be small enough, we have $\kappa - q \in (m, m+1)$ and $\mu \in (0,1)$, where $m = [\kappa-q]$ and $\mu = \{\kappa-q\}$. Then,

\|(F'(x^*)+\varepsilon_n E)^{\kappa-q} w_q^{(\varepsilon_n)} - F'(x^*)^{\kappa-q} w_q\|_X = \|(F'(x^*)+\varepsilon_n E)^{m+\mu} w_q^{(\varepsilon_n)} - F'(x^*)^{m+\mu} w_q\|_X
  \le \|(F'(x^*)+\varepsilon_n E)^{m+\mu} - F'(x^*)^{m+\mu}\|_{L(X)}\, \|w_q^{(\varepsilon_n)}\|_X + \|F'(x^*)^{m+\mu}\|_{L(X)}\, \|w_q^{(\varepsilon_n)} - w_q\|_X.   (5.38)
Furthermore,

\|\bigl(F'(x^*)+\varepsilon_n E\bigr)^{m+\mu} - F'(x^*)^{m+\mu}\|_{L(X)}
  \le \|\bigl(F'(x^*)+\varepsilon_n E\bigr)^{m} - F'(x^*)^{m}\|_{L(X)}\, \|F'(x^*)^{\mu}\|_{L(X)}
    + \bigl(\|F'(x^*)\|_{L(X)} + \varepsilon_n\bigr)^{m}\, \|\bigl(F'(x^*)+\varepsilon_n E\bigr)^{\mu} - F'(x^*)^{\mu}\|_{L(X)}.   (5.39)
Note that

\|\bigl(F'(x^*)+\varepsilon_n E\bigr)^{m} - F'(x^*)^{m}\|_{L(X)} \le c_{42}\, \varepsilon_n.   (5.40)

From (4.9) we get

\|\bigl(F'(x^*)+\varepsilon_n E\bigr)^{\mu} - F'(x^*)^{\mu}\|_{L(X)} \le c_{43}\, \varepsilon_n^{\mu}.   (5.41)
Combining inequalities (5.38) -- (5.41) and using Lemma 5.3, we obtain

\lim_{n\to\infty} \|\bigl(F'(x^*)+\varepsilon_n E\bigr)^{\kappa-q} w_q^{(\varepsilon_n)} - F'(x^*)^{\kappa-q} w_q\|_X = 0.   (5.42)

By Lemma 5.4, the element $(F'(x^*)+\varepsilon E)^{\kappa-q} w_q^{(\varepsilon)}$ does not depend on $\varepsilon$; therefore, (5.42) implies (5.37).
2) Given an arbitrary $\kappa \in (0,p-1)$, by Lemma 5.4 for all $q \in (0,q_1)$ we have

x^* - \xi = F'(x^*)^{\kappa-q}\, v_q, \qquad v_q = C(\kappa,q)^{-1} w_q.

Since

F'(x^*)^{\alpha+\beta} = F'(x^*)^{\alpha}\, F'(x^*)^{\beta} \qquad \forall \alpha, \beta > 0,\ \alpha+\beta < \infty

(see [14, Ch. I, Section 5]), for each $\delta_1 \in (0,p-1)$ there exists $v^{(1)} \in X$ such that

x^* - \xi = F'(x^*)^{p_1}\, v^{(1)}, \qquad p_1 = p - 1 - \delta_1.   (5.43)
Then, the above representation can be used again to estimate the third term on the right-hand side of inequality (5.2). Just as in the proof of Theorem 3.1, we analyze the two possibilities.
3) If $p_1 > 1$, then from (4.25) and (4.30)

\|\bigl[\Theta\bigl(F'(x^*),\alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n),\alpha_n\bigr) F'(x_n)\bigr](x^* - \xi)\|_X
  \le \|\bigl[\Theta\bigl(F'(x^*),\alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n),\alpha_n\bigr) F'(x_n)\bigr] F'(x^*)\|_{L(X)}\, \|F'(x^*)^{p_1-1} v^{(1)}\|_X \le c_{44}\, \alpha_n^{p}.

The substitution of this estimate into (5.6) yields (cf. (5.8))

\Phi(\alpha_n) \le c_{45}\, \alpha_n^{p}, \qquad n = 0, 1, \dots
As above, this implies that for each $\delta \in (0,p)$ there exists an element $\tilde v = \tilde v(\delta) \in X$ such that $x^* - \xi = F'(x^*)^{\tilde p}\, \tilde v$, $\tilde p = p - \delta$. Hence, in this case the assertion of the theorem is true.
4) Suppose now that $p_1 \in (0,1)$. Without loss of generality, we may take $\Gamma_n = \gamma_{\alpha_n}$ in (4.3). Similarly to (5.4),

\|\bigl[\Theta\bigl(F'(x^*),\alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n),\alpha_n\bigr) F'(x_n)\bigr](x^* - \xi)\|_X
  = \|\bigl[\Theta\bigl(F'(x^*),\alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n),\alpha_n\bigr) F'(x_n)\bigr] F'(x^*)^{p_1} v^{(1)}\|_X
  \le c_{46} \int_{\gamma_{\alpha_n}} |1 - \Theta(\lambda,\alpha_n)\lambda|\, \|\bigl[R(\lambda, F'(x^*)) - R(\lambda, F'(x_n))\bigr] F'(x^*)^{p_1}\|_{L(X)}\, |d\lambda|.   (5.44)
As in (4.28), for all $\lambda \in \gamma_{\alpha_n}$ we have

\|\bigl[R(\lambda, F'(x^*)) - R(\lambda, F'(x_n))\bigr] F'(x^*)^{p_1}\|_{L(X)} \le c_{47}\, \frac{\|x_n - x^*\|_X}{|\lambda|}\, \|R(\lambda, F'(x^*))\, F'(x^*)^{p_1}\|_{L(X)}.   (5.45)
From (4.8) it follows that

\|R\bigl(\lambda, F'(x^*)\bigr) F'(x^*)^{p_1}\|_{L(X)}
  \le c_{48} \Bigl[\int_0^{2C_3\alpha_n} t^{p_1-1}\, \|R(\lambda, F'(x^*))\, R(-t, F'(x^*))\, F'(x^*)\|_{L(X)}\, dt
    + \int_{2C_3\alpha_n}^{\infty} t^{p_1-1}\, \|R\bigl(\lambda, F'(x^*)\bigr)\, R(-t, F'(x^*))\, F'(x^*)\|_{L(X)}\, dt\Bigr].

By (4.2), (4.29), and the equality

R(\lambda, F'(x^*))\, R(-t, F'(x^*)) = (\lambda + t)^{-1} \bigl(R(-t, F'(x^*)) - R(\lambda, F'(x^*))\bigr)
we get

\|R\bigl(\lambda, F'(x^*)\bigr) F'(x^*)^{p_1}\|_{L(X)}
  \le c_{48} \Bigl[\int_0^{2C_3\alpha_n} t^{p_1-1}\, \|R(\lambda, F'(x^*))\|_{L(X)}\, \|R(-t, F'(x^*))\, F'(x^*)\|_{L(X)}\, dt
    + \int_{2C_3\alpha_n}^{\infty} \frac{t^{p_1-1}}{|\lambda + t|}\, \|\bigl[R(-t, F'(x^*)) - R(\lambda, F'(x^*))\bigr] F'(x^*)\|_{L(X)}\, dt\Bigr]
  \le c_{49} \Bigl[\alpha_n^{-1} \int_0^{2C_3\alpha_n} t^{p_1-1}\, dt + \int_{2C_3\alpha_n}^{\infty} \frac{t^{p_1-1}}{|\lambda + t|}\, dt\Bigr]
  \le c_{50}\, \alpha_n^{p_1-1} \qquad \forall \lambda \in \gamma_{\alpha_n}.   (5.46)
From (5.44) -- (5.46) it follows that

\|\bigl[\Theta\bigl(F'(x^*),\alpha_n\bigr) F'(x^*) - \Theta\bigl(F'(x_n),\alpha_n\bigr) F'(x_n)\bigr](x^* - \xi)\|_X \le c_{51}\, \alpha_n^{p+p_1-1}.
Therefore, by (5.2) and (5.3),

\Phi(\alpha_n) \le c_{52}\, \alpha_n^{p+p_1-1}.   (5.47)
The substitution of (5.47) into (5.6) implies that for each $\delta_2 \in (0,p-1)$ there exists $v^{(2)} \in X$ such that

x^* - \xi = F'(x^*)^{p_2}\, v^{(2)}, \qquad p_2 = p - (1 - p_1) - \delta_2.

Let us emphasize that $p_2 > p_1$. We can now iterate the process of improving estimate (5.8).
5) In general, the process of iteratively improving the estimates for $\Phi(\alpha_n)$ looks as follows. Choose a sequence $\{\delta_k\}$ such that $\delta_k \in (0,p-1)$ and $\lim_{k\to\infty} \delta_k = 0$. Then, at the $k$th step we have

\Phi(\alpha_n) \le c_{53}\, \alpha_n^{p+p_k-1},

which yields the representation

x^* - \xi = F'(x^*)^{p_{k+1}}\, v^{(k+1)}, \qquad v^{(k+1)} \in X,   (5.48)

with

p_{k+1} = p - (1 - p_k) - \delta_{k+1}, \qquad \delta_{k+1} \in \bigl(0,\ p - (1 - p_k)\bigr).   (5.49)

If $p_k > 1$, then due to (5.48) and (5.49)

x^* - \xi = F'(x^*)^{p - \delta_{k+1}}\, \tilde v^{(k+1)}

with $\tilde v^{(k+1)} = F'(x^*)^{p_k-1} v^{(k+1)}$, and the process stops. Since $\delta_{k+1}$ can be chosen arbitrarily small, the assertion of the theorem is valid.
6) To complete the proof, it suffices to show that $p_k > 1$ at some finite step $k = k_0$. This can be done by the same argument as in Theorem 3.1. Indeed, assume the contrary, i.e., $p_k < 1$ for all $k = 1, 2, \dots$. From (5.49) we conclude that $p_k < p_{k+1}$. Therefore, the sequence $\{p_k\}$ has a limit: $\lim_{k\to\infty} p_k = \tilde p \le 1$. Passing to the limit in (5.49), we arrive at the equality $p = 1$, which contradicts the assumption that $p > 1$. This completes the proof.
Remark 5.1. Theorem 5.1 states that sourcewise representation (4.7) is almost necessary for estimate (5.1). The loss of exact equivalence between (4.7) and (5.1) occurs when, starting from (5.7), we deduce equality (5.43) with $\delta_1 > 0$. This raises the question of whether (5.43) remains valid with $\delta_1 = 0$. The answer is, in general, negative (see [7, 19] for counterexamples).
6. Examples of iterative methods in Banach spaces. In this section we present several generating functions $\Theta(\lambda,\alpha)$ for which the above assumptions are fulfilled.

1) We begin with function (2.31). Following [6, Section 4.5], we define the contours $\{\Gamma_\alpha\}_{\alpha\in(0,\infty)}$ as

\Gamma_\alpha = \partial\bigl(S_{R_0}(0) \setminus S_{(1-C_3)\alpha}(-\alpha)\bigr), \qquad \alpha > 0.   (6.1)

It is easy to see that Assumptions 4.2, 5.1, and 5.4 are fulfilled with $D_\alpha = \mathbf{C}\setminus\{-\alpha\}$, provided that

1 - \sin\bigl(\max\{\varphi_0,\ \pi/2\}\bigr) < C_3 < 1.   (6.2)

Straightforward calculations show that the other assumptions are also fulfilled with $\Gamma_\alpha$ defined by (6.1) and $\bar p = 1$ (see Assumption 4.4). In this case, process (4.1) takes the form

\bigl(F'(x_n) + \alpha_n E\bigr)(x_{n+1} - \xi) = F'(x_n)(x_n - \xi) - F(x_n).   (6.3)

Note that linear equation (6.3) is well-posed for all $n = 0, 1, \dots$.
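For illustration only (this sketch is ours, not the authors'), here is a minimal finite-dimensional implementation of iteration (6.3). The test map $F$, its Jacobian, the shift element $\xi$, the starting point, and the geometric rule for $\alpha_n$ are all illustrative assumptions.

    import numpy as np

    # Minimal sketch of iteration (6.3):
    #   (F'(x_n) + a_n I)(x_{n+1} - xi) = F'(x_n)(x_n - xi) - F(x_n),
    # for a smooth toy map F: R^2 -> R^2 (not from the paper).
    def F(x):
        return np.array([x[0] + 0.1 * x[1]**2 - 1.0, 0.1 * x[0]**2 + x[1]])

    def F_prime(x):  # derivative of F at x (here simply the Jacobian matrix)
        return np.array([[1.0, 0.2 * x[1]], [0.2 * x[0], 1.0]])

    xi = np.zeros(2)              # shift element xi, chosen arbitrarily
    x = np.array([0.5, 0.5])      # starting point
    alpha = 1.0                   # regularization parameters a_n
    for n in range(30):
        A = F_prime(x)
        rhs = A @ (x - xi) - F(x)
        x = xi + np.linalg.solve(A + alpha * np.eye(2), rhs)
        alpha *= 0.7              # geometric decay of a_n (illustrative choice)
    print(x, F(x))                # x should approach a zero of F

Each step solves a single well-posed linear system, in agreement with the remark above.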
2) Consider function (2.32). Suppose (6.2) holds. Assumptions 4.2, 5.1, and 5.4 can be verified immediately. Turning to Assumptions 4.3 and 4.4, we define $\{\Gamma_\alpha\}_{\alpha\in(0,\infty)}$ by (6.1). It is not difficult to prove that inequality (4.18) is true and that Assumptions 4.4 and 5.6 are valid with $\bar p = N$ and $p \in (0,N)$. Analyzing the integrals in (5.10) and (5.13), we see that (5.10) holds with $r_0 = 1$ and that (5.13) is true for each $s_0 > 0$. Hence, Assumptions 5.2 and 5.3 are fulfilled. Assumption 5.5 holds with an arbitrary $t_0 > 0$ by the inequalities

M(\alpha,\varepsilon) \le c_{54}\, \frac{\varepsilon}{\alpha + \varepsilon}\, (1 + |\ln\alpha|) \qquad \forall \varepsilon \in (0,\varepsilon_1],\ \alpha \in (0,\infty),

\int_0^{\varepsilon_1} \frac{M(\alpha,\varepsilon)}{\varepsilon}\, d\varepsilon \le c_{55}\, \bigl(1 + |\ln\alpha|^2\bigr) \qquad \forall \alpha \in (0,\infty).
An iteration of method (2.32), (4.1) can be written as a finite iterative process: $x_{n+1} = x_{n+1}^{(N)}$ with $x_{n+1}^{(0)} = \xi$ and $\{x_{n+1}^{(k)}\}$ defined by the linear well-posed equations

\bigl(F'(x_n) + \alpha_n E\bigr)\, x_{n+1}^{(k+1)} = \alpha_n\, x_{n+1}^{(k)} + F'(x_n)\, x_n - F(x_n), \qquad k = 0, 1, \dots, N-1.
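A minimal sketch of this inner process (ours, with illustrative names and a generic caller-supplied F) could look as follows in Python; note that for N = 1 it reduces exactly to one step of (6.3).

    import numpy as np

    # Hedged sketch of the inner N-step process for method (2.32), (4.1):
    #   (F'(x_n) + a_n I) y_{k+1} = a_n y_k + F'(x_n) x_n - F(x_n),
    #   k = 0, ..., N-1,  y_0 = xi,  x_{n+1} = y_N.
    def step_232(F, F_prime, x_n, xi, alpha_n, N):
        A = F_prime(x_n)
        I = np.eye(len(x_n))
        rhs_fixed = A @ x_n - F(x_n)
        y = xi.copy()
        for _ in range(N):
            # each inner equation is well-posed: A + a_n I is invertible
            y = np.linalg.solve(A + alpha_n * I, alpha_n * y + rhs_fixed)
        return y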
We emphasize that inequality (4.21) is violated when $\tau > N$. Hence, Theorem 4.1 does not ensure the convergence of method (2.32), (4.1) with an exponent $p$ in (4.34) greater than $N$, even if $x^* - \xi = F'(x^*)^{q} v$, $v \in X$, with arbitrarily large $q > N$ (the saturation phenomenon). Let us consider an example of iterative procedure (4.1) that is free of this drawback.
3) Consider generating function (2.33), which is analytic on the whole complex plane. Suppose Assumption 4.1 is valid with $\varphi_0 \in (0,\pi/2)$. The verification of Assumptions 4.2, 5.1, and 5.4 is trivial. Assumptions 4.3 and 4.4 are easily verified with $\Gamma_\alpha = \gamma_\alpha$, $\alpha > 0$. Notice that Assumptions 4.4 and 5.6 are now fulfilled without any restrictions on $p > 1$. Hence, method (2.33), (4.1) is in fact nonsaturating. Assumption 5.3 holds with an arbitrary $s_0 > 0$, since

\sup_{\alpha\in(0,\infty)} \int_{\gamma_\alpha} \frac{|1 - \Theta(\lambda,\alpha)\lambda|}{|\lambda|}\, |d\lambda| < \infty.
Direct calculations show that Assumption 5.2 is true with $r_0 = 1$. Finally, Assumption 5.5 is fulfilled by virtue of the estimates

M(\alpha,\varepsilon) \le c_{56}\, \bigl(1 - e^{-\varepsilon/\alpha}\bigr)\bigl(1 + |\ln\alpha|\bigr) \qquad \forall \varepsilon \in (0,\varepsilon_1],\ \alpha \in (0,\infty),

\int_0^{\varepsilon_1} \frac{M(\alpha,\varepsilon)}{\varepsilon}\, d\varepsilon \le c_{57}\, \bigl(1 + |\ln\alpha|^2\bigr) \qquad \forall \alpha \in (0,\infty).
An iteration of method (2.33), (4.1) can be implemented in practice as follows: $x_{n+1} = u(\alpha_n^{-1})$, where $u = u(t)$ is the solution of the Cauchy problem

\frac{du(t)}{dt} + F'(x_n)\, u(t) = F'(x_n)\, x_n - F(x_n), \qquad u(0) = \xi.
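Since the Cauchy problem is linear with a constant operator over the step, it can be integrated by any stiff ODE solver. The sketch below (ours; the solver choice and tolerances are illustrative) uses SciPy's solve_ivp to carry out one step $x_{n+1} = u(1/\alpha_n)$ in the finite-dimensional case.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hedged sketch of one step of method (2.33), (4.1): integrate
    #   u'(t) + F'(x_n) u(t) = F'(x_n) x_n - F(x_n),  u(0) = xi,
    # up to t = 1/alpha_n and set x_{n+1} = u(1/alpha_n).
    def step_233(F, F_prime, x_n, xi, alpha_n):
        A = F_prime(x_n)
        b = A @ x_n - F(x_n)
        sol = solve_ivp(lambda t, u: b - A @ u, (0.0, 1.0 / alpha_n), xi,
                        method="BDF", rtol=1e-8, atol=1e-10)  # stiff-friendly
        return sol.y[:, -1]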
4) We conclude with one more example of a function $\Theta(\lambda,\alpha)$ that generates an iterative method (4.1) free of the saturation phenomenon. Take function (2.34). The regularization parameter $\alpha > 0$ is assumed to take the values $1, 1/2, 1/3, \dots$. Since $\Theta(\lambda,\alpha)$ is analytic on $\mathbf{C}$, Assumption 4.2 is valid. Note that the sequence $\alpha_n = (n+1)^{-1}$ satisfies Assumption 2.1. Let Assumption 4.1 be fulfilled with $\varphi_0 \in (0,\pi/4)$. Then, taking $\Gamma_\alpha = \gamma_\alpha$, $\alpha > 0$, and applying a technique similar to that of [6, Section 4.5], we find that Assumption 4.4 is valid for every $p > 1$, provided that $\mu_0$ is small enough. It is not difficult to prove that the remaining assumptions are also fulfilled, except for Assumption 5.6. Nevertheless, a slight modification of the argument in (5.28) -- (5.34) shows that Theorem 5.1 remains true for function (2.34). An iteration of method (2.34), (4.1) can be written as $x_{n+1} = x_{n+1}^{(n)}$ with $x_{n+1}^{(0)} = \xi$ and $\{x_{n+1}^{(k)}\}$ constructed iteratively as

x_{n+1}^{(k+1)} = x_{n+1}^{(k)} - \mu_0 \bigl(F'(x_n)\, x_{n+1}^{(k)} - F'(x_n)\, x_n + F(x_n)\bigr), \qquad k = 0, 1, \dots, n-1.
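In code, the inner recursion is an explicit (Landweber-like) sweep on the linearized equation; the following sketch is ours, with a generic caller-supplied F and the step size mu0 assumed small enough, as the text requires.

    import numpy as np

    # Hedged sketch of method (2.34), (4.1) with a_n = 1/(n+1):
    # n explicit inner steps of size mu0, started from xi.
    def step_234(F, F_prime, x_n, xi, n, mu0):
        A = F_prime(x_n)
        rhs_fixed = A @ x_n - F(x_n)
        y = xi.copy()
        for _ in range(n):
            # y <- y - mu0 (A y - A x_n + F(x_n)), matching the recursion above
            y = y - mu0 * (A @ y - rhs_fixed)
        return y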
This work was partially supported by the Russian Foundation for Basic Research (project 99--01--00055).
REFERENCES
1. A.B. Bakushinsky and M.Yu. Kokurin, "Sourcewise representation conditions and rate of convergence of solution methods for ill-posed operator equations. Part I", Numerical Methods and Programming, 1, 1: 62--82 (http://num-meth.srcc.msu.su).
2. A.N. Tikhonov and V.Ya. Arsenin, Solutions of Ill-Posed Problems, New York, 1977.
3. A.N. Tikhonov, A.S. Leonov, and A.G. Yagola, Nonlinear Ill-Posed Problems, Volumes 1, 2, London, 1998.
4. V.K. Ivanov, V.V. Vasin, and V.P. Tanana, Theory of Linear Ill-Posed Problems and its Applications (in Russian), Moscow, 1978.
5. A.B. Bakushinsky and A.V. Goncharsky, Iterative Methods of Solving Ill-Posed Problems (in Russian), Moscow, 1989.
6. A. Bakushinsky and A. Goncharsky, Ill-Posed Problems: Theory and Applications (Mathematics and Its Applications), Dordrecht, 1994.
7. H.W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems (Mathematics and Its Applications), Dordrecht, 1996.
8. A.B. Bakushinsky, "Iterative methods without saturation for solving degenerated nonlinear operator equations", Doklady Ross. Akad. Nauk, 344, 1: 7--8, 1995.
9. A.B. Bakushinsky, "Iterative methods for solving nonlinear operator equations without the property of regularity", Fundamentalnaya i Prikladnaya Matematika, 3, 3: 685--692, 1997.
10. A.B. Bakushinsky, "On the rate of convergence of iterative processes for nonlinear operator equations", Zhurnal Vichislitelnoy Matematiki i Matematicheskoy Fiziki, 38, 4: 559--663, 1998.
11. G.M. Vainikko and A.Yu. Veretennikov, Iterative Procedures in Ill-Posed Problems (in Russian), Moscow, 1986.
12. F. Riesz and B. Sz.-Nagy, Lecons d'analyse fonctionnelle (in French), Budapest, 1968.
13. M.A. Krasnoselsky, P.P. Zabreiko, E.I. Pustylnik, and P.E. Sobolevsky, Integral Operators in Spaces of Summable Functions, Leyden, Noordhoff International Publishing, 1976.
14. S.G. Krein, Linear Differential Equations in Banach Spaces, Providence, 1971.
15. N. Dunford and J.T. Schwartz, Linear Operators. Spectral Operators, New York, 1971.
16. Ph. Clement, H.J.A.M. Heijmans, S. Angenent, C.J. van Duijn, and B. de Pagter, One-Parameter Semigroups, Amsterdam, 1987.
17. A.V. Balakrishnan, "Fractional powers of closed operators and the semigroups generated by them", Pacific Journal of Mathematics, 10, 2: 419--437, 1960.
18. N. Bourbaki, Elements de Mathematique. Integration. Chapitre 5: Integration des Mesures (in French), Paris, 1955.
19. M.Yu. Kokurin, Operator Regularization and Investigation of Nonlinear Monotone Problems (in Russian), Yoshkar-Ola, 1998.
20. M.Yu. Kokurin and N.A. Yusoupova, "On sufficient conditions of qualified convergence of methods for solving linear ill-posed problems", Izvestiya Vuzov. Matematika, 1, 2000 (to appear).
21. M.Yu. Kokurin, "Sourcewise representation conditions and estimates for the rate of convergence of regularization methods for linear equations in Banach spaces. I, II", Izvestiya Vuzov. Matematika: I, 12, 2000; II, 2001 (to appear).
22. M.Yu. Kokurin and N.A. Yusoupova, "On nondegenerate estimates for the rate of convergence of iterative methods for solving ill-posed nonlinear operator equations", Zhurnal Vichislitelnoy Matematiki i Matematicheskoy Fiziki, 40, 6: 793--798, 2000.
23. A.B. Bakushinsky, M.Yu. Kokurin, and N.A. Yusoupova, "Necessary conditions of convergence of iterative methods for solving operator equations without the property of regularity", Zhurnal Vichislitelnoy Matematiki i Matematicheskoy Fiziki, 40, 7: 945--954, 2000.
24. A.B. Bakushinsky and M.Yu. Kokurin, "Iterative methods for solving nonlinear irregular operator equations in Banach spaces", Numerical Functional Analysis and Optimization, 21, 3--4: 355--378, 2000.
25. A.B. Bakushinsky and M.Yu. Kokurin, "Sourcewise representation of solutions for nonlinear operator equations in Banach spaces and estimates for the rate of convergence of regularized Newton's method", Journal of Inverse and Ill-Posed Problems, 2000 (to appear).
27 March 2001