
Willett, R. M., Jermyn, I., Nowak, R. D., & Zerubia, J. 2003, in ASP Conf. Ser., Vol. 314 Astronomical Data Analysis Software and Systems XIII, eds. F. Ochsenbein, M. Allen, & D. Egret (San Francisco: ASP), 107

Wavelet-Based Superresolution in Astronomy

Rebecca Willett, Robert Nowak
Rice University Department of Electrical and Computer Engineering, Houston, TX and University of Wisconsin Department of Electrical and Computer Engineering, Madison, WI

Ian Jermyn, Josiane Zerubia
INRIA, Project Ariana, Sophia-Antipolis, France

Abstract:

High-resolution astronomical images can be reconstructed from several blurred and noisy low-resolution images using a computational process known as superresolution reconstruction. Superresolution reconstruction is closely related to image deconvolution, except that the low-resolution images are not registered and their relative translations and rotations must be estimated in the process. The novelty of our approach to the superresolution problem is the use of wavelets and related multiresolution methods within an expectation-maximization reconstruction process to improve the accuracy and visual quality of the reconstructed image. Simulations demonstrate the effectiveness of the proposed method, including its ability to distinguish between tightly grouped stars with a small set of observations.

1. Introduction

The physical resolution of astronomical imaging devices such as space telescopes is limited by system parameters such as lens aperture and CCD array properties and by physical effects such as the atmosphere and the interstellar/intergalactic medium. However, such systems typically do not take a single snapshot of a celestial body, but rather collect a series of images. Due to mechanical vibrations of the instrument and movement of the satellite relative to the body being imaged, the images collected are all slightly different observations of the same scene. Superresolution image reconstruction refers to the process of reconstructing a new image with a higher resolution using this collection of low-resolution, shifted, rotated, and often noisy observations. This allows users to see image detail and structures which are difficult, if not impossible, to detect in the raw data.

Superresolution is a useful technique in a variety of applications (Schultz & Stevenson, 1996; Hardie, Barnard, & Armstrong, 1997), and recently, researchers have begun to investigate the use of wavelets for superresolution image reconstruction (Nguyen, Milanfar, & Golub, 2001). We present a new method for superresolution image reconstruction based on the wavelet transform in the presence of Gaussian noise and on an analogous multiscale approach in the presence of Poisson noise. To construct the superresolution image, we use an approach based on maximum penalized likelihood estimation (MPLE). The reconstructed image is the argument (in our case the superresolution image) that maximizes the sum of a log-likelihood function and a penalizing function. The penalizing function can be specified by an ad hoc smoothness measure, a Bayesian prior distribution for the image (Hebert & Leahy, 1989; Green, 1990), or a complexity measure (Liu & Moulin, 2001). Smoothness measures include simple quadratic functions that measure the similarity between the intensity values of neighboring pixels, as well as non-quadratic measures that better preserve edges. Similar penalty functions result from Markov Random Field (MRF) priors. Complexity measures are usually associated with an expansion of the intensity image with respect to a set of basis functions (e.g., Fourier or wavelet) and count the number of terms retained in a truncated or pruned series (Saito, 1994; Krim & Schick, 1999); the more terms (basis functions) used to represent the image, the higher the complexity measure. Many algorithms (e.g., expectation-maximization algorithms, the Richardson-Lucy algorithm, or close relatives) have been developed to compute MPLEs under various observation models and penalization schemes.
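In symbols (a schematic form of the estimator described above, with notation of our choosing), the estimate is

\begin{displaymath}
\widehat{\bm{x}} = \arg\max_{\bm{x}} \left[ \log p(\bm{y} \,\vert\, \bm{x}) + \mbox{pen}(\bm{x}) \right],
\end{displaymath}

where $\log p(\bm{y}\,\vert\,\bm{x})$ is the log-likelihood of the observations and $\mbox{pen}(\bm{x})$ is the penalizing function (e.g., the log of a prior distribution, which is smaller for rough or complex images).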

Wavelets and multiresolution analysis are especially well-suited for astronomical image processing because they are adept at providing accurate, sparse representations of images consisting of smooth regions with isolated abrupt changes or singularities (e.g., stars against a dark sky). Many investigators have considered the use of wavelet representations for image denoising, deblurring, and image reconstruction; for examples, see Mallat, 1998, and Starck, Murtagh, & Bijaoui, 1998. The proposed approach uses an EM algorithm for superresolution image reconstruction based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations.

The EM algorithm proposed here extends the work of Nowak & Kolaczyk, 2000, and Figueiredo & Nowak, 2002, which addressed image deconvolution with a method that combines the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative process requiring $O(N \log N)$ operations per iteration, where $N$ is the number of pixels in the superresolution image.

2. Problem Formulation

In the proposed method, each observation, $y_{k}$, is modeled as a shifted, rotated, blurred, downsampled, and noisy version of the superresolution image $\bm{x}$. The shift and rotation are caused by the movement of the instrument, and the blur is caused by the point spread function (PSF) of the instrument optics and the integration performed by the CCD array. The downsampling (subsampling) operator models the change in resolution between the observations and the desired superresolution image. If the noise can be modeled as additive white Gaussian noise, then we have the observation model

\begin{displaymath}y_{k} = DBS_{k}\bm{x} + n_{k}, \;\; k = 1, \ldots, n\end{displaymath}

where $D$ is the downsampling operator, $B$ is the blurring operator, $S_{k}$ is the shift and rotation operator for the $k^{th}$ observation, and $n_{k}$ is additive white Gaussian noise with variance $\sigma^{2}$. Collecting the series of observations into one array, $\bm{y}$, and the noise terms into another array, $\bm{n}$, and letting $H$ be the block diagonal matrix composed of the $n$ matrices $DBS_{k}$ for $k = 1, \ldots, n$, we have the model
\begin{displaymath}
\bm{y} = H\bm{x} + \bm{n}.
\end{displaymath} (1)

Similarly, if the noise is better modeled as Poisson, then we have the model
\begin{displaymath}
\bm{y} \sim \mbox{Poisson}(H\bm{x}).
\end{displaymath} (2)
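For concreteness, the following sketch (ours, not the paper's) simulates one low-resolution observation from a known high-resolution image under either noise model. The Gaussian PSF, the integer decimation, and the function name are illustrative assumptions; numpy and scipy.ndimage provide the standard image operations.

import numpy as np
from scipy import ndimage

def make_observation(x, sx, sy, r, psf_sigma=1.0, factor=4,
                     noise='gaussian', sigma=0.01, rng=None):
    """Simulate y_k = D B S_k x + n_k (Gaussian case) or y_k ~ Poisson(D B S_k x).

    x         : high-resolution image (2-D array; non-negative for the Poisson case)
    sx, sy, r : shift in pixels and rotation in degrees applied by S_k
    psf_sigma : width of an assumed Gaussian PSF modeling the blur B
    factor    : integer downsampling factor for D
    """
    rng = np.random.default_rng() if rng is None else rng
    z = ndimage.rotate(x, r, reshape=False, order=1)        # S_k: rotate ...
    z = ndimage.shift(z, (sy, sx), order=1)                  # ... and shift
    z = ndimage.gaussian_filter(z, psf_sigma)                # B: blur by the PSF
    z = z[::factor, ::factor]                                # D: downsample
    if noise == 'gaussian':
        return z + sigma * rng.standard_normal(z.shape)      # model (1)
    return rng.poisson(np.clip(z, 0, None)).astype(float)    # model (2)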

From the formulation above, it is clear that superresolution image reconstruction is a type of inverse problem in which the operator to be inverted, $H$, is partially unknown due to the unknown shifts and rotations of the observations. The first step of our approach is to estimate these parameters by registering the low-resolution observations to one another. Using these estimates, we reconstruct an initial superresolution image estimate $\widehat{\bm{x}}$. This estimate is used in the third step, where we re-estimate the shift and rotation parameters by registering each of the low-resolution observations to the initial superresolution estimate. Finally, we use a wavelet-based EM algorithm to solve for $\widehat{\bm{x}}$ using the registration parameter estimates. We begin by describing a wavelet-based method for the Gaussian noise model, and follow that with a discussion of a multiscale technique for Poisson data. Each of these steps is detailed below.

3. Registration of the Observations

The first step in the proposed method is to register the observed low-resolution images to one another using a Taylor series expansion, as proposed by Irani & Peleg (1991). In particular, let $f_{1}$ and $f_{2}$ be the continuous images underlying the sampled images $y_{1}$ and $y_{2}$, respectively. If $f_{2}$ is equal to a shifted, rotated version of $f_{1}$, then we have the relation

\begin{displaymath}f_{2}(t_{x},t_{y}) = f_{1}(t_{x}\cos r - t_{y} \sin r + s_{x}, t_{y}\cos r + t_{x}\sin r + s_{y}),\end{displaymath}

where $(s_{x}, s_{y})$ is the shift and $r$ is the rotation. A first order Taylor series approximation of $f_{2}$ is then

\begin{displaymath}\widehat{f}_{2}(t_{x},t_{y}) = f_{1}(t_{x},t_{y}) + \left( s_{x} - t_{y}r - t_{x}r^{2}/2 \right) \frac{\partial f_{1}}{\partial t_{x}} + \left( s_{y} + t_{x}r - t_{y}r^{2}/2 \right) \frac{\partial f_{1}}{\partial t_{y}}. \end{displaymath}

Let $\widehat{y}_{2}$ be a sampled version of $\widehat{f}_{2}$; then $y_{1}$ and $y_{2}$ can be registered by finding the $s_{x}$, $s_{y}$, and $r$ which minimize $\Vert y_{1}-\widehat{y}_{2}\Vert^{2}_{2}$, where $\Vert x\Vert^{2}_{2} = \sum_{i} x_{i}^{2}$. This minimization is performed with an iterative procedure which ensures that the motion being estimated at each iteration is small enough for the Taylor series approximation to be accurate; see (Irani & Peleg, 1991) for details. This method was applied to the Earth image data in Figure 2. The sixteen true shifts and rotations are displayed in black in Figure 1, and the results of this registration method are displayed as hollow circles and bars.

Figure 1: Image registration results. (a) True shifts (black), initial shift estimates (hollow), and final shift estimates (gray). (b) True rotations (black), initial rotation estimates (hollow), and final rotation estimates (gray).
 
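The registration step can be sketched as follows. This is a minimal illustration of the linearized Taylor-series estimator rather than the authors' code: coordinates are measured from the image centre, the incremental motions are composed additively (adequate for the small motions the expansion assumes), and the sign conventions may need adapting to a particular data set.

import numpy as np
from scipy import ndimage

def register_pair(y1, y2, n_iter=8):
    """Estimate the shift (sx, sy) in pixels and rotation r in radians such that
    y2(t) is approximately y1(R_r t + s), by repeatedly solving the linearized
    least-squares problem of Irani & Peleg (1991)."""
    ny, nx = y1.shape
    ty, tx = np.mgrid[0:ny, 0:nx].astype(float)
    tx -= (nx - 1) / 2.0
    ty -= (ny - 1) / 2.0
    sx = sy = r = 0.0
    for _ in range(n_iter):
        # Warp y1 by the current estimate: sample y1 at R_r t + s.
        cx = tx * np.cos(r) - ty * np.sin(r) + sx
        cy = ty * np.cos(r) + tx * np.sin(r) + sy
        y1w = ndimage.map_coordinates(
            y1, [cy + (ny - 1) / 2.0, cx + (nx - 1) / 2.0], order=1, mode='nearest')
        # Linearized residual model:
        #   y2 - y1w ~= dsx*gx + dsy*gy + dr*(tx*gy - ty*gx)
        gy, gx = np.gradient(y1w)
        A = np.column_stack([gx.ravel(), gy.ravel(), (tx * gy - ty * gx).ravel()])
        b = (y2 - y1w).ravel()
        dsx, dsy, dr = np.linalg.lstsq(A, b, rcond=None)[0]
        sx, sy, r = sx + dsx, sy + dsy, r + dr
    return sx, sy, r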

After the registration parameters have been initially estimated using the above method, we use these estimates to calculate an initial superresolution image as described in Section 4. This initial image estimate is then used to refine the registration parameter estimates. The method is the same as above, but instead of registering a low-resolution observation, $y_{2}$, to another low-resolution observation, $y_{1}$, we register it to $DBS_{1}\widehat{\bm{x}}$. The results of this refinement are displayed in gray in Figure 1. From these plots, it is clear that the Taylor series based approach can produce highly accurate results. However, in low SNR scenarios, where confidence in registration parameter estimates may be low, the estimates can be further refined at each iteration of the proposed EM algorithm, as discussed in the following sections.

Note that the motion model considered here encompasses only shift and rotation movement. When recording still or relatively still objects distant from the imaging device, this model is sufficient. More sophisticated models are an area of open research.


4. Multiscale Expectation-Maximization

Maximization is facilitated within the EM framework through the introduction of a particular ``unobservable'' or ``missing'' data space. The key idea in the EM algorithm is that the indirect (inverse) problem can be broken into two subproblems; one which involves computing the expectation of unobservable data (as though no blurring or downsampling took place) and one which entails estimating the underlying image from this expectation. By carefully defining the unobservable data for the superresolution problem, we derive an EM algorithm which consists of linear filtering in the E-step and image denoising in the M-step.

The Gaussian observation model in (1) can be written with respect to the DWT coefficients $\bm{\theta}$, where $\bm{x} = W\bm{\theta}$ and $W$ denotes the inverse DWT operator (Mallat, 1998):

\begin{displaymath}\bm{y} = H W \bm{\theta} + \bm{n}.\end{displaymath}

Clearly, if we had $\bm{y} = W\bm{\theta} + \bm{n}$ (i.e., if no subsampling or blurring had occurred), we would have a pure image denoising problem with white noise, for which wavelet-based denoising techniques are very fast and nearly optimal (Mallat, 1998). Next note that the noise in the observation model can be decomposed into two different Gaussian noises (one of which is non-white):

\begin{displaymath}\bm{n} = \alpha H \bm{n}_{1} + \bm{n}_{2},\end{displaymath}

where $\alpha$ is a positive parameter, and $\bm{n}_{1}$ and $\bm{n}_{2}$ are independent zero-mean Gaussian noises with covariances $\Sigma_{1} = I$ and $\Sigma_{2} = \sigma^{2}I-\alpha^{2}HH^{T}$, respectively. Using $\bm{n}_{1}$ and $\bm{n}_{2}$, we can rewrite the Gaussian observation model as

\begin{displaymath}\bm{y} = H\underbrace{(W \bm{\theta} + \alpha \bm{n}_{1})}_{\bm{z}} + \bm{n}_{2}.\end{displaymath}

This observation is the key to our approach since it suggests treating $\bm{z}$ as missing data and using the EM algorithm to estimate $\bm{\theta}$.
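A minimal sketch of the resulting iteration is given below. It is illustrative rather than the authors' implementation: it assumes the Landweber-type E-step $\widehat{\bm{z}}^{(t)} = W\widehat{\bm{\theta}}^{(t)} + (\alpha^{2}/\sigma^{2})H^{T}(\bm{y} - HW\widehat{\bm{\theta}}^{(t)})$ used by Figueiredo & Nowak (2002), a generic soft-threshold wavelet denoiser for the M-step, and user-supplied functions applying $H$ and $H^{T}$.

import numpy as np
import pywt

def em_superresolution(y, H, Ht, shape, sigma, alpha,
                       n_iter=50, wavelet='haar', level=3):
    """Wavelet-domain EM sketch for the Gaussian model y = H W theta + n.

    y     : stacked low-resolution observations (1-D array)
    H, Ht : callables applying the superresolution operator and its transpose;
            H maps an image of size `shape` to the observation vector
    sigma : standard deviation of the observation noise
    alpha : noise-splitting parameter (chosen so that sigma**2 I - alpha**2 H H^T >= 0)
    """
    x = Ht(y).reshape(shape)                          # crude initial estimate of W theta
    for _ in range(n_iter):
        # E-step: Landweber-type update of the missing data z = W theta + alpha n_1.
        z = x + (alpha**2 / sigma**2) * Ht(y - H(x)).reshape(shape)
        # M-step: denoise z by soft-thresholding its detail wavelet coefficients,
        # whose noise level is alpha (W is orthonormal).
        coeffs = pywt.wavedec2(z, wavelet, level=level)
        thresh = alpha * np.sqrt(2.0 * np.log(z.size))
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode='soft') for c in detail)
            for detail in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, wavelet)[:shape[0], :shape[1]]
    return x

When $H$ and $H^{T}$ are applied with FFT-based convolutions, the per-iteration cost of such a scheme is dominated by the FFTs and one forward/inverse DWT, consistent with the $O(N \log N)$ figure quoted in the Introduction.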

An analogous formulation is possible for the Poisson noise model. In this case, photon projections can be described statistically as follows. Photons are emitted (from the emission space) according to a high-resolution intensity $\bm{x}$. Those photons emitted from location $m$ are detected (in the detection space) at position $n$ with transition probability $H_{m,n}$, where $H$ is the superresolution operator from (2). In such cases, the measured data are distributed according to

\begin{displaymath}
\bm{y}_{n} \sim \mbox{Poisson}\left( \sum_{m} H_{m,n}\, x_{m}\right).
\end{displaymath} (3)

In this formulation of the EM algorithm, the missing data are defined as $\bm{z} = \{z_{m,n}\}$, where $z_{m,n}$ denotes the number of photons emitted from $m$ and detected at $n$. The complete data are Poisson distributed according to

\begin{displaymath}z_{m,n} \sim \mbox{Poisson}(H_{m,n}\, x_{m}).\end{displaymath}

Hence the observed data $\bm{y}$ in (3) are given by $\bm{y}_{n} = \sum_{m} z_{m,n}$. Additionally, were we able to observe $\bm{z} = \{z_{m,n}\}$, the direct emission data for each location $m$ would be given by sums of the form $\sum_{n} z_{m,n} \sim \mbox{Poisson}(x_{m})$. Therefore, if $\bm{z}$ were known, we could avoid the inverse problem altogether and simply deal with the issue of estimating a Poisson intensity given direct observations.
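Although not spelled out above, the conditional expectation required in the E-step has a simple closed form in the Poisson case: since independent Poisson counts conditioned on their sum are multinomially distributed, given the current intensity estimate $\widehat{\bm{x}}$ we have

\begin{displaymath}
E\left[ z_{m,n} \,\big\vert\, \bm{y}, \widehat{\bm{x}} \right] = \frac{H_{m,n}\,\widehat{x}_{m}}{\sum_{m'} H_{m',n}\,\widehat{x}_{m'}}\; \bm{y}_{n},
\end{displaymath}

a standard identity in EM algorithms for emission data; summing over $n$ recovers the quantities that drive classical Richardson-Lucy-type updates.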

From these formulations of the problem for Gaussian and Poisson data, the EM algorithm produces a sequence of estimates $\{ \bm{x}^{(t)},\ t = 0, 1, 2, \ldots \}$ by alternately applying two steps:

E-Step:
Updates the estimate of the missing data using the relation:

\begin{displaymath}\widehat{\bm{z}}^{(t)} = E\left[\bm{z} \,\big\vert\, \bm{y}, \widehat{\bm{\theta}}^{(t)}\right].\end{displaymath}

In the Gaussian case this can be reduced to a Landweber iteration (Landweber, 1951):