The Restoration of HST Images and Spectra II
Space Telescope Science Institute, 1994
R. J. Hanisch and R. L. White, eds.
Compensation for Read-Out Noise in HST Image Restoration
Donald L. Snyder
Washington University, Department of Electrical Engineering, Box 1127, St. Louis, MO
63130
Carl W. Helstrom
University of California, San Diego CA 92037
Aaron D. Lanterman and Mohammad Faisal
Washington University, St. Louis, MO 63130
Richard L. White
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218
Abstract. Data acquired with the charge-coupled device camera on the HST are modeled
as an additive Poisson-Gaussian mixture, with the Poisson component representing cumulative
counts of object-dependent photoelectrons, object-independent photoelectrons, bias
electrons, and thermoelectrons, and the Gaussian component representing read-out noise.
Two methods are examined for compensating for read-out noise. One method is based
upon approximating the Gaussian read-out noise by a Poisson noise and then using the
expectation-maximization (modified Richardson-Lucy) algorithm for Poisson distributed
data to effect the compensation. This method has been used for restoring HST images.
The second method directly uses the expectation-maximization algorithm derived for the
Poisson-Gaussian mixture data. This requires the determination of the conditional-mean
estimate of the Poisson component of the mixture, which is accomplished by the evaluation
of a nonlinear function of the data. The second method requires more computation than the
first, but modest improvements in the quality of the restorations are realized, particularly for
fainter objects.
1. Introduction
Various methods are being used to compensate for the resolution loss that is present in image data
acquired by the Hubble Space Telescope. The common goal of these methods is to invert the excess
point spreading caused by spherical aberration in the primary mirror so as to restore the images to
their original design resolution. Inverse problems of this type are notoriously ill-posed. Not only can
noise amplification cause serious degradation as improved resolution is sought, but the details of the
implementation of any restoration method can also have a significant impact on the quality of the
restoration, with seemingly insignificant design choices having the potential to cause large changes in
the result.
There are a variety of noise sources present in HST image data acquired with a CCD camera.
The photo-conversion process by which object light is converted into photoelectrons introduces
object-dependent noise characterized statistically as a Poisson random process. Nonideal effects
introduce extraneous electrons that are indistinguishable from object-dependent photoelectrons.
Examples of this excess "noise" include object-independent photoelectrons, bias or "fat zero"
electrons, and thermo-electrons. We use the term background counts for the cumulative effect of
these extraneous electrons. Background counts are Poisson distributed. Read-out noise further
contributes to the degradation of images acquired with the charge-coupled device camera aboard
the HST. This noise is characterized as a Gaussian random process.
We use the following mathematical model due to Snyder, Hammoud, and White (1993) for
describing CCD image data acquired by the HST:
r(j) = n_{obj}(j) + n_0(j) + g(j), \qquad j = 0, 1, \ldots, J-1   (1)
where r(j) are the data acquired by reading out pixel j of the CCD-camera array, n_obj(j) is the
number of object-dependent photoelectrons, n_0(j) is the number of background electrons, g(j) is
read-out noise, and J is the number of pixels in the CCD-camera array. The statistical description
of these quantities is as follows. The random variables n_obj(j), n_0(j), and g(j) are statistically
independent of each other and of n_obj(k), n_0(k), and g(k) for j ≠ k. Object-dependent counts
{n_obj(j); j = 0, 1, ..., J-1} form a Poisson process with mean-value function {μ_obj(j); j = 0, 1, ..., J-1}, where
\mu_{obj}(j) = \beta(j) \sum_{i=0}^{I-1} p(j|i)\,\lambda(i) ,   (2)
where {β(j); j = 0, 1, ..., J-1} accounts for nonuniform flat-field response, detector efficiency,
and the spatial extent of the detector array,
{p(j|i); i = 0, 1, ..., I-1; j = 0, 1, ..., J-1}
is the HST point-spread function (PSF), and {λ(i); i = 0, 1, ..., I-1} is the object's intensity
function. The flat-field response function β(·) is assumed to be known, for example, through
a flat-field calibration measurement. The PSF p(·|·) is also assumed to be known, for example,
from a theoretical model of HST optics or through observations of an unresolved star. Background
counts {n_0(j); j = 0, 1, ..., J-1} form a Poisson process with mean-value function
{μ_0(j); j = 0, 1, ..., J-1}; this mean-value function is assumed to be known, for example,
through a dark-field calibration measurement or from HST images taken near but not including the
object of interest. Read-out noise {g(j); j = 0, 1, ..., J-1} forms a sequence of independent,
identically distributed, Gaussian random variables with mean m and standard deviation σ; the
constants m and σ are assumed to be known.
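As a concrete illustration of the model (1)-(2), the following C sketch draws one simulated pixel value r(j) from Poisson object and background counts plus Gaussian read-out noise. It is a minimal example of the data model only, not the simulation code used for the experiments of Section 3; the function names and the choice of samplers (Knuth's product method for the Poisson variates, the Box-Muller transform for the Gaussian) are ours.

#include <math.h>
#include <stdlib.h>

#define PI 3.14159265358979323846

/* Illustrative sketch only: the samplers below are our own choices; the paper
   does not specify how its simulations draw random numbers.                  */

static double uniform01(void)                /* uniform variate in (0,1) */
{
    return (rand() + 1.0) / ((double) RAND_MAX + 2.0);
}

static long poisson_sample(double mean)      /* Knuth's product method; adequate for modest means */
{
    double limit = exp(-mean), product = uniform01();
    long n = 0;
    while (product > limit) {
        product *= uniform01();
        n++;
    }
    return n;
}

static double gaussian_sample(double m, double sigma)   /* Box-Muller transform */
{
    double u1 = uniform01(), u2 = uniform01();
    return m + sigma * sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

/* One read-out of pixel j according to (1): Poisson counts with means
   mu_obj(j) and mu_0(j), plus Gaussian read-out noise with mean m and
   standard deviation sigma.                                              */
double simulate_pixel(double mu_obj, double mu_0, double m, double sigma)
{
    return (double) poisson_sample(mu_obj) + (double) poisson_sample(mu_0)
           + gaussian_sample(m, sigma);
}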
One approach for compensating for blurring while recognizing the statistical properties of noise
encountered in charge-coupled device cameras is based on the method of maximum-likelihood
estimation, as discussed by Snyder, Hammoud, and White (1993). This leads to the following
iteration for producing a maximum-likelihood estimate of the object's intensity function:
\hat\lambda_{new}(i) = \hat\lambda_{old}(i)\, \frac{1}{\bar\beta(i)} \sum_{j=0}^{J-1} \left[ \frac{\beta(j)\, p(j|i)}{\hat\mu_{old}(j)} \right] F\!\left[ r(j); \hat\mu_{old}(j), \sigma \right] ,   (3)
where
\bar\beta(i) = \sum_{j=0}^{J-1} p(j|i)\,\beta(j) ,   (4)
\hat\mu_{old}(j) = \beta(j) \sum_{i=0}^{I-1} p(j|i)\,\hat\lambda_{old}(i) + \mu_0(j) ,   (5)
and where
F\!\left[ r(j); \mu, \sigma \right] = E\!\left[ n_{obj}(j) + n_0(j) \mid r, \hat\lambda_{old} \right] = E\!\left[ r(j) - g(j) \mid r, \hat\lambda_{old} \right]   (6)
is the conditional-mean estimate of n_obj(j) + n_0(j) in terms of the data r and the intensity estimate
λ̂_old to be updated. Evaluation of the conditional mean (6) yields
F[r; \mu, \sigma] = \mu\, \frac{p(r - m - 1 : \mu, \sigma)}{p(r - m : \mu, \sigma)} ,   (7)
where p(x : μ, σ) is the Poisson-Gaussian mixture probability density given by
p(x : \mu, \sigma) = \sum_{n=0}^{\infty} \frac{1}{n!}\, \mu^n e^{-\mu}\, \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ -\frac{1}{2\sigma^2} (x - n)^2 \right] .   (8)
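For reference, the mixture density (8) can be evaluated by direct summation, truncating the series once its terms become negligible; this is the kind of brute-force evaluation against which the saddlepoint results of Section 2.2 are later checked. The sketch below is illustrative only, and the truncation rule (stop once a term past n ≈ μ falls to 10^-12 of the running sum) is our own choice.

#include <math.h>

#define PI 3.14159265358979323846

/* Illustrative sketch: direct summation of the Poisson-Gaussian mixture (8).
   The truncation rule is our own choice, not taken from the paper.           */
double mixture_density(double x, double mu, double sigma)
{
    double sum = 0.0;
    double log_poisson = -mu;                 /* log of mu^n exp(-mu)/n! for n = 0 */
    long n;

    for (n = 0; ; n++) {
        double term = exp(log_poisson)
                      * exp(-0.5 * (x - n) * (x - n) / (sigma * sigma))
                      / (sqrt(2.0 * PI) * sigma);
        sum += term;
        if (n > (long) mu && term <= 1.0e-12 * sum)
            break;                            /* past the Poisson mode and terms are negligible */
        log_poisson += log(mu) - log((double) (n + 1));   /* advance the Poisson weight recursively */
    }
    return sum;
}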
This iteration is used as follows:
STEP 0. Initialization: select λ̂_old(·).
STEP 1. Compute λ̂_new(·) using (3).
STEP 2. If done, display λ̂_new(·); else set λ̂_old(·) ← λ̂_new(·) and perform STEP 1.
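A minimal serial sketch of one pass through STEP 1, i.e. one application of Eqs. (3)-(5), is given below. The array and argument names are hypothetical: the PSF is stored as a dense matrix psf[j*I + i] = p(j|i), beta_bar is assumed to hold the sums (4) computed once beforehand, and the callback F is whichever evaluation of (7) is being used (the Poisson approximation of Section 2.1 or the saddlepoint evaluation of Section 2.2), in the v = r - m argument convention of the Appendix. The direct double loop is O(I·J) and is written for clarity; the implementations described in Section 2.3 use FFT-based convolutions instead.

#include <stdlib.h>

/* Sketch of one EM update: lambda_new <- update of lambda_old via (3)-(5).
   Names and interface are illustrative assumptions, not the authors' code.   */
void em_update(double *lambda_new, const double *lambda_old,
               const double *beta, const double *beta_bar,
               const double *mu_0, const double *psf,      /* psf[j*I + i] = p(j|i) */
               const double *r, double m, double sigma,
               int I, int J,
               double (*F)(double v, double mu, double sigmasquared))
{
    double *mu_old = (double *) malloc(J * sizeof(double));
    int i, j;

    for (j = 0; j < J; j++) {                 /* Eq. (5): predicted mean at each pixel */
        double conv = 0.0;
        for (i = 0; i < I; i++)
            conv += psf[j * I + i] * lambda_old[i];
        mu_old[j] = beta[j] * conv + mu_0[j];
    }
    for (i = 0; i < I; i++) {                 /* Eq. (3): multiplicative update of the intensity */
        double sum = 0.0;
        for (j = 0; j < J; j++)
            sum += beta[j] * psf[j * I + i] / mu_old[j]
                   * F(r[j] - m, mu_old[j], sigma * sigma);
        lambda_new[i] = lambda_old[i] * sum / beta_bar[i];
    }
    free(mu_old);
}

STEP 2 then amounts to swapping the roles of lambda_new and lambda_old and calling em_update again until the desired number of iterations has been performed; a small double-precision wrapper would be needed to use the float-valued Appendix routine the_function through this interface.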
Special cases of the iteration (3) have appeared previously in the literature. If β(j) = 1 and
μ_0(j) = 0 for all j, then (3) is the same as that given by Llacer and Núñez (1990). As σ tends
to zero, the Gaussian exponentials in Eq. (7) become very concentrated about n = r - m, and
F[r; μ, σ] tends towards r - m, in which case the iteration (3) is that of Politte and Snyder (1991)
for image recovery from Poisson-distributed data in the presence of nonuniform detector response
and background counts. If, in addition to σ = 0, we have β(j) = 1 and μ_0(j) = 0 for all j,
then (3) becomes the Richardson-Lucy iteration (Richardson 1972, Lucy 1974) and the Shepp-Vardi
(1982) expectation-maximization algorithm for Poisson-distributed data.
2. Implementation
The computational burden of implementations of the iteration (3) has motivated the use of
approximations for the function (7). However, as we have commented, there is a potential for errors
introduced by approximations to be magnified when realizing solutions to ill-posed stochastic
inverse problems. For this reason, we have studied two implementations of (3). The first, which is
already in use for restoring HST images, relies on an approximation to F[r; μ, σ] obtained by
substituting a Poisson variate for the read-out noise g(j) in Eq. (1). The second relies on an evaluation
of F[r; μ, σ] via a saddlepoint approximation.
We describe these two approaches in the remainder of this section. Results we have obtained
using them are presented and compared in the next section.
2.1. Implementation via Poisson Approximation
For this approximation, we follow the discussion given by Snyder, Hammoud, and White (1993,
p. 1017). The read-out noise g(j) from the jth pixel is a Gaussian-distributed random variable
with mean m and standard deviation σ. If the data r(j) are modified to form r(j) - m + σ²,
thereby converting the read-out noise to g(j) - m + σ², which is Gaussian distributed with mean
and variance equal to σ², and if σ² is large, then according to Feller (1968), g(j) - m + σ² is
approximately equal in distribution to a Poisson-distributed random variable with mean σ². With
this approximation, the CCD data (1) become
r(j) - m + \sigma^2 = n_{obj}(j) + n_0(j) + n_{read-out}(j), \qquad j = 0, 1, \ldots, J-1   (9)
where n_read-out(j) is Poisson distributed with mean σ². Under this approximation, the data
{r(j) - m + σ²; j = 0, 1, ..., J-1} form a Poisson process with mean-value function
{μ_obj(j) + μ_0(j) + σ²; j = 0, 1, ..., J-1}.
As there is now no Gaussian read-out noise, the iteration (3) becomes
\hat\lambda_{new}(i) = \hat\lambda_{old}(i)\, \frac{1}{\bar\beta(i)} \sum_{j=0}^{J-1} \left[ \frac{\beta(j)\, p(j|i)}{\beta(j) \sum_{i'=0}^{I-1} p(j|i')\, \hat\lambda_{old}(i') + \mu_0(j) + \sigma^2} \right] \left[ r(j) - m + \sigma^2 \right] ,   (10)
which is the Richardson-Lucy iteration modified to accommodate nonuniformity of flat-field
detector response and background noise. This is equivalent to making the following approximation
to F[r; μ, σ] in Eq. (7) when the standard deviation of the read-out noise is large:
F[r; \mu, \sigma] \approx \frac{\mu}{\mu + \sigma^2} \left( r - m + \sigma^2 \right) .   (11)
We note also from Eq. (7) that as σ tends to zero, F[r; μ, σ] tends towards r - m, suggesting that
the approximation (11), derived for large values of σ, will be a good choice for small values of σ as well.
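In code, the approximation (11) is a one-line function; the sketch below uses the same v = r - m argument convention as the Appendix subroutine. Substituting it for the saddlepoint evaluation of F in the update (3) gives exactly the modified Richardson-Lucy iteration (10).

/* Sketch of the Poisson approximation (11) to F, with v = r - m.
   The function name is our own; it is not part of the Appendix code.  */
double F_poisson(double v, double mu, double sigmasquared)
{
    return mu * (v + sigmasquared) / (mu + sigmasquared);
}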
2.2. Implementation via Saddlepoint Approximation
An alternative to the Poisson approximation is to evaluate the function (7) as needed for use in
Eq. (3). However, this function depends on r, μ, and σ in a complicated way, so some numerical
method of evaluation is necessary. We have used the saddlepoint integration and saddlepoint
approximation methods introduced by Helstrom (1978) for calculating tail probabilities.
The Laplace transform of the Poisson-Gaussian mixture density (8) is
L[p(\cdot : \mu, \sigma)](s) = \int_{-\infty}^{\infty} p(x : \mu, \sigma)\, e^{-sx}\, dx = \exp\!\left[ \mu (e^{-s} - 1) + \tfrac{1}{2} \sigma^2 s^2 \right] ,   (12)
and, by the inverse Laplace transform,
p(x : \mu, \sigma) = \frac{1}{2\pi j} \int_{c - j\infty}^{c + j\infty} \exp[\Phi(s)]\, ds ,   (13)
where
\Phi(s) = x s + \tfrac{1}{2} \sigma^2 s^2 + \mu (e^{-s} - 1) .   (14)
The saddlepoint integration and saddlepoint approximation methods can be used with (13) to
produce accurate evaluations of the function F[r; μ, σ] in (7). The saddlepoint s̃, which is real, is
the root of
\Phi'(\tilde{s}) = \left. \frac{d\Phi(s)}{ds} \right|_{s = \tilde{s}} = x + \sigma^2 \tilde{s} - \mu e^{-\tilde{s}} = 0 .   (15)
This root can be determined numerically using a Newton iteration,
\tilde{s}_{new} = \tilde{s}_{old} - \frac{\Phi'(\tilde{s}_{old})}{\Phi''(\tilde{s}_{old})} ,   (16)
where Φ''(s̃) = σ² + μ exp(-s̃). We iterate Eq. (16), from starting values described below, until
|(s̃_new - s̃_old)/s̃_new| < ε, where we have used ε = 10^-6 as the tolerance parameter. The saddlepoint
approximation to Eq. (13) is
p(x : \mu, \sigma) \approx \frac{1}{\sqrt{2\pi\, \Phi''(\tilde{s})}}\, e^{\Phi(\tilde{s})} .   (17)
Starting Values   Useful starting values for the Newton iteration (16) can be obtained by making
approximations to the exponential in Eq. (15). For example, replacing the exponential by two terms
of its Taylor series,
e^{-\tilde{s}} \approx 1 - \tilde{s} ,   (18)
and then solving x + σ² s̃ - μ(1 - s̃) = 0 suggests using
\tilde{s}_{old} = \frac{\mu - x}{\mu + \sigma^2}   (19)
as a starting value. Although this is adequate for some values of x and μ, for low values of x
combined with large values of μ it resulted in starting values so far from the saddlepoint that
hundreds of Newton iterations were required for convergence (see Table 1). We have developed
two alternative approaches for establishing more efficient starting values. One of these is obtained
via the Padé approximation (Newton, Gould, and Kaiser 1961):
e^{-\tilde{s}} \approx \frac{1 - \tfrac{1}{2}\tilde{s}}{1 + \tfrac{1}{2}\tilde{s}} .   (20)
Solving this for s̃ in terms of exp(-s̃) and substituting the result into Eq. (15) suggests using
\tilde{s}_{old} = -\ln\!\left[ \frac{\left( x - 2\sigma^2 - \mu \right) + \sqrt{\left( x - 2\sigma^2 - \mu \right)^2 + 4\mu\left( x + 2\sigma^2 \right)}}{2\mu} \right]   (21)
as the starting value, where the sign of the radical is selected to make the argument of the logarithm
positive. Although the Padé approximation (20) is inaccurate for |s̃| > 2, we have found that the
Newton iterations (16) started with (21) converge, to within a tolerance parameter ε = 10^-6, in no
more than five iterations for x and μ ranging from 10^-1 to 10^5 for both σ = 3 and σ = 13 (see
Table 1). For this reason, we have generally used starting values based on this approximation in all
of our studies for a read-out noise level of σ = 13 electrons. Another alternative is to use a small
number of Picard iterations,
\tilde{s}_{new} = \ln\!\left[ \frac{\mu}{x + \sigma^2 \tilde{s}_{old}} \right] ,   (22)
to solve Eq. (15). We start these iterations using (19) and perform only a small number of them,
typically three, before converting to Newton iterations. Also, if the argument of the logarithm in
(22) becomes negative during the Picard iterations, we revert to using (19) to start the Newton
iterations. With this approach, no more than five Newton iterations were required for convergence
to within a tolerance parameter of ε = 10^-6 in evaluating either the numerator or denominator of
(7), and often one or two sufficed.
Shown in Table 1 are the starting values and the numbers of Newton iterations required for
convergence to a tolerance parameter of ε = 10^-6 for each of the Taylor, Padé, and Picard methods
for determining starting values, for σ = 13 and a wide range of values of μ and r - m. While a
variety of starting values and numbers of Newton iterations are seen, all three methods converged
to the same saddlepoints to the degree of numerical precision shown in the table. Both the Padé and
Picard methods required no more than 5 Newton iterations, while the Taylor method often required
many more.
Accommodation of Wide Dynamic Range   Because of the wide dynamic range encountered in
HST images, with values of μ ranging over several orders of magnitude, it is necessary to modify
Eq. (7) to avoid overflow and underflow, writing it as
F[r; \mu, \sigma] = \mu \exp\!\left[ \ln p(r - m - 1 : \mu, \sigma) - \ln p(r - m : \mu, \sigma) \right] ,   (23)
Table 1. Comparison of starting values for Newton iterations.
log10 μ  log10(r-m)   Taylor:              Padé:                Picard:              saddlepoint
                      Starting   Newtons   Starting   Newtons   Starting   Newtons
0 0 0.000e+00 1 -0.000e+00 1 0.000e+00 1 0.000e+00
0 1 -5.294e-02 2 -5.295e-02 2 -5.294e-02 2 -5.293e-02
0 2 -5.824e-01 2 -5.982e-01 3 -5.824e-01 2 -5.811e-01
0 3 -5.876e+00 5 -6.497e+00 5 -5.876e+00 5 -5.021e+00
0 4 -5.882e+01 54 -9.176e+00 4 -9.043e+00 2 -9.044e+00
0 5 -5.882e+02 581 -1.151e+01 3 -1.149e+01 2 -1.149e+01
1 0 5.028e-02 2 5.036e-02 2 5.396e-01 4 5.035e-02
1 1 0.000e+00 1 -0.000e+00 1 0.000e+00 1 0.000e+00
1 2 -5.028e-01 3 -5.042e-01 3 -5.028e-01 3 -4.947e-01
1 3 -5.531e+00 6 -4.208e+00 5 -3.354e+00 4 -3.647e+00
1 4 -5.581e+01 54 -6.873e+00 3 -6.785e+00 2 -6.786e+00
1 5 -5.586e+02 554 -9.207e+00 3 -9.195e+00 1 -9.195e+00
2 0 3.680e-01 3 3.970e-01 3 8.725e-01 4 3.934e-01
2 1 3.346e-01 3 3.582e-01 3 6.895e-01 4 3.555e-01
2 2 0.000e+00 1 -0.000e+00 1 0.000e+00 1 0.000e+00
2 3 -3.346e+00 6 -2.005e+00 4 -1.886e+00 3 -1.912e+00
2 4 -3.680e+01 37 -4.572e+00 3 -4.525e+00 2 -4.526e+00
2 5 -3.714e+02 369 -6.904e+00 3 -6.896e+00 1 -6.896e+00
3 0 8.546e-01 5 1.523e+00 4 1.661e+00 4 1.422e+00
3 1 8.469e-01 5 1.495e+00 4 1.615e+00 4 1.400e+00
3 2 7.699e-01 4 1.248e+00 3 1.279e+00 4 1.197e+00
3 3 0.000e+00 1 -0.000e+00 1 0.000e+00 1 0.000e+00
3 4 -7.699e+00 10 -2.275e+00 3 -2.264e+00 2 -2.264e+00
3 5 -8.469e+01 85 -4.602e+00 3 -4.597e+00 1 -4.597e+00
4 0 9.833e-01 6 3.448e+00 4 3.096e+00 4 2.985e+00
4 1 9.824e-01 6 3.422e+00 4 3.077e+00 4 2.972e+00
4 2 9.735e-01 6 3.191e+00 4 2.908e+00 3 2.846e+00
4 3 8.850e-01 6 2.070e+00 3 2.013e+00 3 2.010e+00
4 4 0.000e+00 1 -0.000e+00 1 0.000e+00 1 0.000e+00
4 5 -8.850e+00 11 -2.300e+00 2 -2.299e+00 1 -2.299e+00
5 0 9.983e-01 8 5.694e+00 5 4.871e+00 3 4.811e+00
5 1 9.982e-01 8 5.667e+00 5 4.860e+00 3 4.802e+00
5 2 9.973e-01 8 5.437e+00 5 4.754e+00 3 4.714e+00
5 3 9.883e-01 8 4.321e+00 4 4.084e+00 3 4.081e+00
5 4 8.985e-01 6 2.275e+00 3 2.265e+00 2 2.265e+00
5 5 0.000e+00 1 -0.000e+00 1 0.000e+00 1 0.000e+00
where, in the saddlepoint approximation,
\ln p(x : \mu, \sigma) \approx \Phi(\tilde{s}) - \tfrac{1}{2} \ln\!\left[ 2\pi\, \Phi''(\tilde{s}) \right] .   (24)
Since the factors of 2π cancel in determining (7), we omit them in our computations.
Accuracy of the Approximations   The saddlepoint integration of Eq. (13) is performed by integrating
along the vertical line s = s̃ + jy, -∞ < y < ∞, through the saddlepoint to obtain
p(x : \mu, \sigma) = \frac{1}{\pi}\, e^{-\mu} \int_0^{\infty} \mathrm{Re}\left\{ \exp\!\left[ x(\tilde{s} + jy) + \tfrac{1}{2} \sigma^2 (\tilde{s} + jy)^2 + \mu e^{-(\tilde{s} + jy)} \right] \right\} dy .   (25)
The trapezoidal rule was used, taking the step size as Δy = [Φ''(s̃)]^{-1/2} and stopping the summation
when the absolute value of the integrand fell below δ Δy times the accumulated sum, with δ = 10^-5.
Smaller step sizes did not noticeably alter the results, which were also checked by evaluating Eq. (7)
with the summation in Eq. (8); the results agreed to at least six significant figures. The remaining
discrepancies may be attributable to round-off error in the summations in Eq. (8).
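The following sketch shows one way to carry out the trapezoidal saddlepoint integration just described. It assumes the saddlepoint s̃ has already been found (for example, with the Newton iteration of the Appendix), writes out the complex exponent of (25) in real and imaginary parts, and folds the e^{-μ} prefactor into the exponent so that large values of μ do not overflow; the function name and these implementation details are our own choices, not the authors' code.

#include <math.h>

#define PI    3.14159265358979323846
#define DELTA 1.0e-5                          /* stopping parameter delta of the text */

/* Illustrative sketch: trapezoidal-rule evaluation of the saddlepoint integral
   (25), given the (real) saddlepoint s_tilde.  Step size is [Phi''(s~)]^(-1/2). */
double saddlepoint_integration(double x, double mu, double sigma, double s_tilde)
{
    double sigma2 = sigma * sigma;
    double mu_exp = mu * exp(-s_tilde);                   /* mu e^(-s~) */
    double dy     = 1.0 / sqrt(sigma2 + mu_exp);          /* [Phi''(s~)]^(-1/2) */
    double sum    = 0.0, y = 0.0;
    long   k;

    for (k = 0; ; k++) {
        /* exponent of (25) at s = s~ + j y, split into real and imaginary parts,
           with the e^(-mu) prefactor absorbed into the real part               */
        double re = x * s_tilde + 0.5 * sigma2 * (s_tilde * s_tilde - y * y)
                    + mu_exp * cos(y) - mu;
        double im = x * y + sigma2 * s_tilde * y - mu_exp * sin(y);
        double integrand = exp(re) * cos(im);
        double weight = (k == 0) ? 0.5 : 1.0;             /* trapezoidal end point */

        sum += weight * integrand;
        if (k > 0 && fabs(integrand) < DELTA * dy * fabs(sum))
            break;
        y += dy;
    }
    return sum * dy / PI;
}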
Shown in Table 2 are values of F[r; μ, σ] obtained with the methods of saddlepoint integration
(Eq. 25), saddlepoint approximation (Eq. 17), and Poisson approximation (Eq. 11) for σ² = 200
and several values of r - m and μ. One purpose of this table is to compare "exact" values obtained
with saddlepoint integration to values obtained with the saddlepoint and Poisson approximations.
Starting values for the Newton iteration were determined by the Picard method. The saddlepoint
approximation is seen to agree closely with the results of saddlepoint integration. If we take the
values of F[r; μ, σ] obtained in Table 2 by saddlepoint integration as "exact," then the sums of
absolute errors for the saddlepoint and Poisson approximations differ by several orders of magnitude,
being 0.0004 and 19.81, respectively.
Table 2. Evaluations of F(·).
σ²  μ  r - m  sp-int  sp-approx  P-approx
200 20 -55.0 14.122700 14.122702 13.181818
200 20 -25.0 16.236131 16.236133 15.909091
200 20 5.0 18.638897 18.638900 18.636364
200 20 35.0 21.362958 21.362960 21.363636
200 20 65.0 24.441810 24.441813 24.090909
200 20 95.0 27.910075 27.910078 26.818182
200 200 100.0 153.126879 153.126854 150.000000
200 200 140.0 171.028212 171.028184 170.000000
200 200 180.0 190.001082 190.001050 190.000000
200 200 220.0 209.998996 209.998963 210.000000
200 200 260.0 230.972066 230.972031 230.000000
200 200 300.0 252.868626 252.868590 250.000000
200 500 350.0 396.313041 396.313010 392.857143
200 500 400.0 430.003993 430.003964 428.571429
200 500 450.0 464.555108 464.555081 464.285714
200 500 500.0 499.897950 499.897926 500.000000
200 500 550.0 535.970081 535.970060 535.714286
200 500 600.0 572.714680 572.714662 571.428571
200 500 650.0 610.080121 610.080105 607.142857
Shown in Table 3 are values of F[r; μ, σ] obtained with the saddlepoint approximation,
using Eqs. (23) and (24), and the Poisson approximation (Eq. 11) with σ = 13, corresponding
to the level of read-out noise present with the HST's CCD camera, and σ = 3, corresponding to
possible values of read-out noise present with improved CCD cameras, for a wide range of values of
r and μ. The purpose of this table is to compare values obtained with the saddlepoint and Poisson
approximations over a dynamic range encountered in HST image data. In this table, |ΔF| is the
magnitude of the difference between the values of F determined by the saddlepoint and Poisson
approximations.
The saddlepoint and Poisson approximations are seen from these tables to produce practically
identical results for μ ≈ r - m. Elsewhere, the Poisson result is consistently less than the saddlepoint
result, with the difference worsening as r - m strays further from μ. For r - m < μ, the
error tends to level off at some value, whereas for r - m > μ, the error rises rapidly and roughly
in proportion to r - m. The percentage error can be quite large, especially when r - m is much
larger than μ. For instance, for μ = 1, σ = 13, and r - m = 10^5, |ΔF|/(sp-approx) = 99.4%.
In performing Newton iterations to evaluate F[r; μ, σ] in Eq. (7), the saddlepoint for the
denominator was evaluated first, and the saddlepoint so determined was used as the starting value
for Newton iterations in evaluating the numerator. Except when x was close to μ, the number of
Newton iterations for the numerator was then no larger than for the denominator and was usually
smaller.
2.3. Computer Implementation
Serial Implementation   A C subroutine to compute F[r; μ, σ] using the saddlepoint approximation,
with a Newton iteration to determine the saddlepoint, is given in the Appendix for starting values
obtained with either the Padé or Picard methods. With μ = 10² and r - m = 10³, which are typical
values, 2^16 calls of these subroutines take approximately 16 seconds on a SUN Sparcstation IPC
and 8 seconds on a SUN Sparcstation IPX. These times can probably be lowered substantially, while
maintaining nearly the same accuracy, by using a larger value for the tolerance parameter ε.
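A hypothetical timing harness along the lines of the measurement quoted above is sketched below; only the_function and its v = r - m argument convention come from the Appendix, while the harness itself (the use of clock(), the loop, the variable names) is ours. It performs 2^16 evaluations at μ = 10² and r - m = 10³ with σ = 13 (σ² = 169) and reports the elapsed time.

#include <stdio.h>
#include <time.h>

/* Hypothetical harness: only the_function comes from the Appendix. */
float the_function();          /* saddlepoint evaluation of F (old-style declaration) */

int main(void)
{
    double v = 1.0e3, mu = 1.0e2, sigmasquared = 169.0;   /* r - m, mu, sigma^2 */
    volatile double result = 0.0;     /* volatile so the loop is not optimized away */
    clock_t start = clock();
    long i;

    for (i = 0; i < 65536L; i++)      /* 2^16 calls */
        result = the_function(v, mu, sigmasquared);

    printf("last F = %g, elapsed = %.2f s\n", (double) result,
           (double) (clock() - start) / CLOCKS_PER_SEC);
    return 0;
}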
Table 3. More evaluations of F(·).
log10 μ  log10(r-m)  sp-approx  P-approx   |ΔF|       sp-approx  P-approx   |ΔF|
                     (σ = 13)   (σ = 13)   (σ = 13)   (σ = 3)    (σ = 3)    (σ = 3)
0 0 9.97e-01 1.00e+00 2.92e-03 9.56e-01 1.00e+00 4.43e-02
0 1 1.05e+00 1.05e+00 1.66e-03 2.26e+00 1.90e+00 3.60e-01
0 2 1.78e+00 1.58e+00 2.01e-01 6.27e+01 1.09e+01 5.18e+01
0 3 1.51e+02 6.88e+00 1.45e+02 9.38e+02 1.01e+02 8.37e+02
0 4 8.47e+03 5.98e+01 8.41e+03 9.92e+03 1.00e+03 8.92e+03
0 5 9.81e+04 5.89e+02 9.75e+04 9.99e+04 1.00e+04 8.99e+04
1 0 9.48e+00 9.50e+00 1.34e-02 5.72e+00 5.26e+00 4.57e-01
1 1 9.97e+00 1.00e+01 2.63e-02 9.87e+00 1.00e+01 1.26e-01
1 2 1.64e+01 1.50e+01 1.33e+00 8.11e+01 5.74e+01 2.37e+01
1 3 3.84e+02 6.53e+01 3.18e+02 9.59e+02 5.31e+02 4.28e+02
1 4 8.85e+03 5.68e+02 8.29e+03 9.94e+03 5.27e+03 4.67e+03
1 5 9.84e+04 5.60e+03 9.29e+04 9.99e+04 5.26e+04 4.73e+04
2 0 6.74e+01 6.32e+01 4.18e+00 1.69e+01 9.17e+00 7.68e+00
2 1 7.00e+01 6.65e+01 3.44e+00 2.31e+01 1.74e+01 5.63e+00
2 2 9.99e+01 1.00e+02 1.17e-01 1.00e+02 1.00e+02 3.75e-02
2 3 6.77e+02 4.35e+02 2.42e+02 9.79e+02 9.26e+02 5.38e+01
2 4 9.24e+03 3.78e+03 5.45e+03 9.96e+03 9.18e+03 7.76e+02
2 5 9.88e+04 3.72e+04 6.16e+04 9.99e+04 9.18e+04 8.19e+03
3 0 2.41e+02 1.45e+02 9.57e+01 3.19e+01 9.91e+00 2.20e+01
3 1 2.46e+02 1.53e+02 9.34e+01 3.91e+01 1.88e+01 2.03e+01
3 2 3.02e+02 2.30e+02 7.20e+01 1.19e+02 1.08e+02 1.11e+01
3 3 1.00e+03 1.00e+03 6.18e-02 1.00e+03 1.00e+03 4.33e-03
3 4 9.62e+03 8.70e+03 9.19e+02 9.98e+03 9.92e+03 5.96e+01
3 5 9.92e+04 8.57e+04 1.35e+04 1.00e+05 9.91e+04 8.42e+02
4 0 5.05e+02 1.67e+02 3.38e+02 4.88e+01 9.99e+00 3.88e+01
4 1 5.12e+02 1.76e+02 3.36e+02 5.65e+01 1.90e+01 3.75e+01
4 2 5.81e+02 2.65e+02 3.16e+02 1.38e+02 1.09e+02 2.96e+01
4 3 1.34e+03 1.15e+03 1.90e+02 1.02e+03 1.01e+03 1.24e+01
4 4 1.00e+04 1.00e+04 7.81e-03 1.00e+04 1.00e+04 0.00e+00
4 5 9.96e+04 9.85e+04 1.11e+03 1.00e+05 9.99e+04 6.02e+01
5 0 8.14e+02 1.70e+02 6.44e+02 6.67e+01 1.00e+01 5.67e+01
5 1 8.21e+02 1.79e+02 6.43e+02 7.47e+01 1.90e+01 5.57e+01
5 2 8.97e+02 2.69e+02 6.28e+02 1.58e+02 1.09e+02 4.90e+01
5 3 1.69e+03 1.17e+03 5.23e+02 1.04e+03 1.01e+03 3.22e+01
5 4 1.04e+04 1.02e+04 2.31e+02 1.00e+04 1.00e+04 1.26e+01
5 5 1.00e+05 1.00e+05 0.00e+00 1.00e+05 1.00e+05 0.00e+00
Parallel Implementation   We have implemented the EM iteration (Eq. 3) on a DECmpp 12000Sx /
Model 200 single-instruction multiple-data (SIMD) computer with a 64×64 processor array. Ideally,
images would be mapped directly with one pixel per processor. Since, in these simulations, we
perform computation on images of dimension 256×256 pixels, zero-padded to 512×512, 64 pixels
are stored per processor. Convolutions are computed with FFT algorithms tailored to take advantage
of the SIMD architecture. The additions, multiplications, and division in (3) are performed for 64²
pixels in parallel. Both the Poisson and saddlepoint approximations (using the Padé approximation)
have been implemented. Each processor computes values of F for its pixels independently of the
other processors, which is ideal for SIMD architectures. For a 512×512 image (the data must
be zero-padded so that the point spread does not wrap around), we can compute 50 EM-Poisson
iterations in approximately 61 seconds and 50 EM-saddlepoint iterations in approximately 81
seconds. The exact times for the EM-saddlepoint iterations depend upon the number of Newton
iterations required, which depends upon the data. If machine precision or other factors cause μ to
be zero for some pixels, the denominator in Eq. (21) will be zero. In that case, we take the ratio
F/μ in the EM algorithm (3) to be zero.
3. Comparisons of Implementations
In this section, we describe the results of experiments that we have performed to examine the effect
on restored images of the Poisson and saddlepoint approximations to the function F[r; μ, σ].
Figure 1. Simulated star cluster (left) and simulated HST image of star cluster (right).
3.1. Experiment 1: Restoration of Star Cluster Data Using 50 EM Iterations
We performed restorations for the simulated star cluster (file "truth.fit") provided by the Space
Telescope Science Institute. The star cluster, shown in the left panel of Fig. 1, consists of a field
of unresolved stars at random locations and having randomly selected brightnesses. We simulated
an HST image of the star cluster using a space-invariant PSF (file "mpsf12.fit", provided by ST ScI)
representative of the HST's PSF, Poisson-distributed photo-counts, and a read-out noise level of
σ = 13 electrons r.m.s. This is shown in the right panel of Fig. 1. In these and the following images,
gray-scale values are shown on a logarithmic scale so that low-level noise and distortions can be
seen.
The restorations obtained from the simulated data by performing 50 EM iterations (Eq. 3)
based on the Poisson (Eq. 11) and saddlepoint (Eq. 23) approximations are shown in Fig. 2. It is
difficult to compare these two restorations directly, so we have summarized one feature of them
in Fig. 3, which shows the ratio of estimated to true brightness versus true brightness for the
restorations in Fig. 2. These scatter diagrams were obtained by placing a point at location (tb, eb/tb)
for each star in the star cluster for restorations based on the Poisson (left) and saddlepoint (right)
approximations. The solid lines were obtained by averaging the logarithmic values in the scatter
diagrams with a rectangular window of width 0.1 moved in steps of 0.05.
The two solid lines in Fig. 3 are redrawn together in Fig. 4 so they can be compared more readily.
It is evident from this graph that, for 50 EM iterations, the saddlepoint approximation tends to yield
more accurate brightness estimates, especially for fainter stars. The difference is not enormous, but
it is present, with the saddlepoint approximation yielding brightness estimates closer to the truth
than the Poisson approximation. Recognizing that these are logarithmic plots, the accuracy of the
brightness estimates improves by a factor of about 1.5 for fainter objects, which can be significant
when accurate photometry is sought.
3.2. Experiment 2: Averages for 25 Repeated Star Cluster Simulations, 50 Iterations Each
For this experiment, we simulated 25 independent realizations of the HST images that would be
acquired for the star cluster field. These images were produced in the same manner as the image
in the right-hand panel of Fig. 1. We then restored each of the 25 images using 50 EM iterations for
both the Poisson and saddlepoint approximations. Shown in Fig. 5 is the average ratio of estimated
to true brightness versus true brightness for the star cluster data; this graph was obtained by
averaging the 25 versions of Fig. 4 obtained in the multiple simulations.
Figure 2. Restorations of simulated HST star cluster data using the Poisson (left) and
saddlepoint (right) approximations and 50 EM iterations.
Figure 3. Ratio of estimated to true brightness vs. true brightness for simulated star
cluster data. Obtained using 50 EM iterations and the Poisson (left) and saddlepoint
(right) approximations.
Figure 4. Smoothed ratio of estimated to true brightness vs. true brightness for star
cluster data. Obtained from one restoration of a simulated HST star cluster image
using 50 EM iterations and the Poisson (dashed line) and saddlepoint (continuous line)
approximations.
Figure 5. Smoothed ratio of estimated to true brightness vs. true brightness for star
cluster data. Obtained by averaging 25 restorations of simulated HST star cluster images
using 50 EM iterations each and the Poisson (dashed line) and saddlepoint (continuous
line) approximations.
Figure 6. Relative bias in estimating brightness with the Poisson and saddlepoint
approximations for a simulated star cluster field. Obtained by averaging errors in 25
simulations with restorations performed using 50 EM iterations each.
Figure 7. Relative standard deviation of error in estimating brightness with the Poisson
and saddlepoint approximations for a simulated star cluster field. Obtained by averaging
errors in 25 simulations with restorations performed using 50 EM iterations each.
Shown in Fig. 6 is the ratio of the mean error to true brightness versus true brightness for the two
approximations. These scatter plots were obtained by averaging the error (estimated brightness
minus true brightness) across the 25 restorations, dividing by the true brightness, and creating a
point at (tb, error/tb) for each star in the star cluster. Shown in Fig. 7 is a corresponding scatter plot
showing the ratio of the standard deviation of the error to true brightness versus true brightness.
The solid lines in Figs. 6 and 7 were obtained by averaging the values in the scatter diagrams with
a rectangular window of width 0.2 moved in steps of 0.01. These lines are replotted in Fig. 8 to
facilitate comparisons.
3.3. Experiment 3: Restoration of Star Cluster Data Using 500 EM Iterations
Shown in Fig. 9 are the results for the same conditions as for Fig. 3 but with 500 EM iterations
having been performed. When these extended numbers of iterations are performed, we find less
scatter for fainter stars compared to restorations performed with 50 EM iterations, and there is less
bias at all brightness levels. We also find at 500 EM iterations that the Poisson and saddlepoint
approximations yield virtually identical results as far as these scatter diagrams are concerned.
However, these scatter diagrams only reflect the restored images at the true star locations. Restored
images of the star cluster for 500 iterations are shown in Fig. 10, which gives the result of performing
500 EM iterations on one simulated HST star cluster image. These images are shown on a logarithmic
scale, so low-level noise is also evident, but this noise is small, typically at a level of about 50
compared to stars that are at a level several orders of magnitude larger (about 10^4). The low-level
Figure 8. Mean and standard deviation of error for the saddlepoint (solid line) and
Poisson (dashed line) approximations. Obtained by averaging restorations of 25 simulated
HST images of the star cluster, each restored with 50 EM iterations.
Figure 9. Ratio of estimated to true brightness vs. true brightness for simulated star
cluster data. Obtained using 500 EM iterations and the Poisson (left) and saddlepoint
(right) approximations.
noise is reduced substantially in Fig. 11, which is the average of 25 restorations of HST star cluster
images, each being the result of performing 500 EM iterations. The average results shown for the
Poisson and saddlepoint approximations are virtually identical at 500 EM iterations and low level
noise is greatly reduced.
4. Conclusions
Use of the Poisson approximation of the function F[r; μ, σ] that accounts for read-out noise does
not introduce significant errors in the restoration of HST images. However, some improvement in
photometric performance can be obtained, at the expense of computation time, by a more accurate
evaluation of this function. The improvement is seen most for faint objects when 50 iterations
of the expectation-maximization algorithm are used to perform the restorations. For star cluster
fields, performing more iterations of the expectation-maximization algorithm yields significantly
improved restorations, and restorations obtained with the Poisson approximation are then not
noticeably improved by a more accurate evaluation of the function.
Figure 10. Restorations of one simulated HST star cluster image obtained using 500
EM iterations and the Poisson (left) and saddlepoint (right) approximations.
Figure 11. Average of restorations of 25 simulated HST star cluster images obtained
using 500 EM iterations each and the Poisson (left) and saddlepoint (right) approximations.
Appendix: C Subroutine for Saddlepoint Method
#include <stdio.h>   /* the original header names were lost in extraction */
#include <math.h>    /* needed for exp(), log(), sqrt(), fabs() */
#define EPSILON 0.000001
/* Number of Picard iterations to use */
#define NUM_PICARD 3
float the_function();
double sp_approx(), newton_iteration(), pade(), picard();
/* As listed below, the_function uses the Pade approximation. To use */
/* the Picard iteration, change the "pade" to "picard" in the code */
/* for the_function */
/* Function: the_function */
/* Evaluates F given v, mu, and sigmasquared, where v = r - m */
/* Returns evaluation of F */
/* Remember the normalizing 2*PI factor has been left out */
float the_function(v, mu, sigmasquared)
float v, mu, sigmasquared;
{
double log_q0, log_q1, saddlepoint;
float v_minus_1;
v_minus_1 = v - 1.0;
saddlepoint = pade(v,mu,sigmasquared);
saddlepoint = newton_iteration(v,mu,sigmasquared,saddlepoint);
log_q0 = sp_approx(v,mu,sigmasquared,saddlepoint);
saddlepoint = newton_iteration(v_minus_1,mu,sigmasquared,saddlepoint);
log_q1 = sp_approx(v_minus_1,mu,sigmasquared,saddlepoint);
return((float) (mu * exp(log_q1 - log_q0)));
} /* end the_function */
/* Function: sp_approx */
/* Returns the saddlepoint approximation to the log of */
/* p(x,mu,sigmasquared) given the saddlepoint found by the */
/* Newton iteration. Remember 2*PI factor has been left out. */
double sp_approx(x,mu,sigmasquared,saddlepoint)
float x, mu, sigmasquared;
double saddlepoint;
{
double mu_exp_minus_s, phi2;
mu_exp_minus_s = mu * exp(-saddlepoint);
phi2 = sigmasquared + mu_exp_minus_s;
return(-mu + (saddlepoint * (x + 0.5 * sigmasquared * saddlepoint))
+ mu_exp_minus_s - 0.5 * log(phi2));
} /* end sp_approx */
/* Function: newton_iteration */
/* Returns the saddlepoint found by Newton iteration for a given */
/* x, mu, sigmasquared and an initial estimate of the saddlepoint */
/* (found with either the Pade or Picard approaches) */
double newton_iteration(x, mu, sigmasquared, initial_saddlepoint)
float x, mu, sigmasquared;
double initial_saddlepoint;
{
double mu_exp_minus_s, saddlepoint, change;
saddlepoint = initial_saddlepoint;
do {
mu_exp_minus_s = mu * exp(-saddlepoint);
change = (x + sigmasquared * saddlepoint - mu_exp_minus_s)
/ (sigmasquared + mu_exp_minus_s);
saddlepoint -= change;
} while((fabs(change) > EPSILON * fabs(saddlepoint)));
return(saddlepoint);
} /* end newton_iteration */
/* Function: pade */
/* Returns the initial saddlepoint estimated by the Pade approx. */
double pade(x, mu, sigmasquared)
float x, mu, sigmasquared;
{
double bterm;
bterm = x - 2*sigmasquared - mu;
return(-log(0.5 * (bterm
+ sqrt(bterm*bterm + 4*mu*(2*sigmasquared + x))) / mu));
} /* end pade */
/* Function: picard */
/* Returns the initial saddlepoint estimated by the Picard iter. */
double picard(x, mu, sigmasquared)
float x, mu, sigmasquared;
{
int i;
double argument_to_log, saddlepoint, taylor;
/* Use Taylor approx. to get starting point for Picard iteration. */
saddlepoint = taylor = (mu - x) / (mu + sigmasquared);
for (i = 0; i < NUM_PICARD; i++)
{
argument_to_log = mu / (x + sigmasquared * saddlepoint);
if (argument_to_log <= 0.0) /* Break out of loop if */
return(taylor); /* arg. to log goes neg.*/
else saddlepoint = log(argument_to_log);
}
return(saddlepoint);
} /* end picard */
Acknowledgments. This work was supported in part by the National Science Foundation under
grant number MIP-9101991.
References
Feller, W. 1968, Introduction to Probability Theory and Its Applications, Wiley, New York, pp. 190,
245
Helstrom, C. 1978, IEEE Trans. Aero. Elec. Sys., AES-14, 630
Llacer, J. & Núñez 1990, in The Restoration of Hubble Space Telescope Images, R. L. White & R.
J. Allen, eds., Space Telescope Science Institute, Baltimore, 62
Lucy, L. 1974, AJ, 79, 745
Newton, G. C., Gould, L. A., & Kaiser, J. F. 1961, Analytical Design of Linear Feedback Controls,
Wiley, New York, 239
Politte, D. G. & Snyder, D. L. 1991, IEEE Trans. Med. Imaging, 10, 82
Richardson, W. H. 1972, J. Opt. Soc. Am., 62, 55
Shepp, L. A. & Vardi, Y. 1982, IEEE Trans. Med. Imaging, MI-1, 113
Snyder, D. L., Hammoud, A. M., & White, R. L. 1993, J. Opt. Soc. Am. A, 10, 1014