The dirty maps were made with uniform weighting. The slight increase in
resolution from superuniform weighting was not deemed important enough to
offset the increase in noise and vulnerability to striping. A cell size of
0."1 was selected as adequate to oversample the minimum effective
resolution, and most maps were of size . Iterative algorithms were
started with a flat default, since more sophisticated choices of starting
point made little difference. The final maps were verified to be
insensitive to the precise values of these and other parameters.
The deconvolution algorithms described below were used for the initial mapping.
Some algorithms diverged unless a tight support constraint was used to
confine the emission to the immediate region of the supernova. The more
robust algorithms were used to verify that this was a reasonable assumption.
All algorithms appeared to work better with tight support constraints, although
occasionally artifacts were produced at the edge of the box. All algorithms
produced similar results at the diffraction limit.
All of these algorithms operate in the image plane, and all use a model of
the sky which is measured in Jy/pixel. This model is then convolved with
a restoring beam representing the desired final resolution, and the
residuals (the dirty map minus the dirty-beam convolution of the model) are added
back to produce the final image normally interpreted by the astronomer.
For the purposes of the superresolution study, it is often more productive to
examine the sky model directly.
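As a concrete sketch of this restoration step, the following assumes a circular
Gaussian restoring beam and square pixels; the function and parameter names are
illustrative, not those of the software actually used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def restore(model, residual, beam_fwhm_pix):
    """Restore a deconvolved sky model (Jy/pixel): smooth it with a
    Gaussian restoring beam and add back the residual map (the dirty
    map minus the dirty-beam convolution of the model)."""
    # Convert the beam FWHM (in pixels) to a Gaussian sigma.
    sigma = beam_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    # gaussian_filter uses a unit-sum kernel; a production restoration
    # would also rescale from Jy/pixel to Jy/beam (omitted here).
    smoothed = gaussian_filter(model, sigma)
    return smoothed + residual
```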
MEM (the Cornwell & Evans algorithm). This algorithm
produced what appears to be the best reconstruction found. It agrees
with the other algorithms on the major features, produces a plausible
physical image, performs well on simulated data with a high degree of
superresolution, and is robust to support constraints. For this project
it is the algorithm of choice.
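The toy sketch below conveys the flavor of an image-plane maximum entropy
iteration; it uses plain gradient ascent rather than the Newton-type scheme of
the actual Cornwell & Evans implementation, and the step size and Lagrange
multiplier values are ad hoc assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def mem_deconvolve(dirty, psf, n_iter=200, alpha=0.1, step=0.1, default=1e-3):
    """Toy image-plane MEM iteration: ascend J = H - alpha*chi^2, where
    H = -sum I*ln(I/M) is the entropy against a flat default M."""
    model = np.full_like(dirty, default)     # flat default, as in the text
    for _ in range(n_iter):
        # Residual between the dirty map and the dirty-beam convolution
        # of the current model.
        resid = dirty - fftconvolve(model, psf, mode="same")
        # Gradient of chi^2 = sum(resid^2): -2 * (psf correlated with resid).
        grad_chi2 = -2.0 * fftconvolve(resid, psf[::-1, ::-1], mode="same")
        # Gradient of the entropy term against the flat default M.
        grad_h = -(np.log(model / default) + 1.0)
        model += step * (grad_h - alpha * grad_chi2)
        model = np.clip(model, 1e-8, None)   # enforce positivity
    return model
```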
Both the original Högbom CLEAN algorithm and a positivity-constrained
version were used. While quite acceptable at the diffraction
limit, CLEAN is not recommended for superresolution purposes.
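For reference, a minimal Högbom CLEAN loop is sketched below; it assumes the
dirty beam is a 2-D array with unit peak, and the positive_only flag (our
naming) mimics the positivity-constrained variant.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=500, threshold=0.0,
                 positive_only=False):
    """Minimal Högbom CLEAN: repeatedly subtract a scaled, shifted dirty
    beam at the residual peak, accumulating the CLEAN components."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)      # beam peak
    for _ in range(n_iter):
        search = residual if positive_only else np.abs(residual)
        py, px = np.unravel_index(np.argmax(search), residual.shape)
        peak = residual[py, px]
        if (peak if positive_only else abs(peak)) <= threshold:
            break
        model[py, px] += gain * peak                          # CLEAN component
        # Subtract the beam, shifted so its peak sits at (py, px),
        # clipping where it runs off the edge of the residual map.
        y0, x0 = py - cy, px - cx
        ys = slice(max(0, y0), min(residual.shape[0], y0 + psf.shape[0]))
        xs = slice(max(0, x0), min(residual.shape[1], x0 + psf.shape[1]))
        bys = slice(ys.start - y0, ys.stop - y0)
        bxs = slice(xs.start - x0, xs.stop - x0)
        residual[ys, xs] -= gain * peak * psf[bys, bxs]
    return model, residual
```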
Maximum Emptiness behaved similarly to MEM, but the emptiness constraint
appears somewhat less powerful than the entropy term of MEM. It converges
slowly, and the degree of superresolution is less than with MEM.
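The emptiness criterion is commonly written as a sum of ln cosh terms, which
pushes pixels toward zero while tolerating small negatives. A sketch of such a
regularizer follows; the scale parameter, and tying it to the noise level, are
assumptions on our part.

```python
import numpy as np

def emptiness(model, scale):
    """'Emptiness' regularizer: sum of ln cosh(I/scale).  Unlike entropy
    it is defined for negative pixel values.  The scale is a tuning
    parameter, of order the noise level (an assumption here)."""
    return np.sum(np.log(np.cosh(model / scale)))

def emptiness_gradient(model, scale):
    # d/dI ln cosh(I/scale) = tanh(I/scale) / scale
    return np.tanh(model / scale) / scale
```

Substituting -emptiness_gradient(model, scale) for grad_h in the MEM sketch
above (and dropping the positivity clip) gives a toy Maximum Emptiness
iteration.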
GSP produced a final result very similar to MEM, but diverged without a
support constraint.
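Schematically, a Gerchberg-Saxton-Papoulis style iteration alternates between
enforcing the measured visibilities in the Fourier plane and the support and
positivity constraints in the image plane. The sketch below works on an
already-gridded uv plane and glosses over gridding and weighting; the array
names are ours. It is the support mask that supplies the extrapolating power.

```python
import numpy as np

def gsp_deconvolve(dirty_vis, sampled, support, n_iter=200):
    """Gerchberg-Saxton-Papoulis style iteration on a gridded uv plane.
    dirty_vis -- complex array of gridded, calibrated visibilities
    sampled   -- boolean mask of uv cells that were actually measured
    support   -- boolean image-plane mask confining the emission
    """
    image = np.zeros(dirty_vis.shape)
    for _ in range(n_iter):
        vis = np.fft.fft2(image)
        vis[sampled] = dirty_vis[sampled]     # enforce the data where measured
        image = np.fft.ifft2(vis).real
        image[~support] = 0.0                 # enforce the support constraint
        np.clip(image, 0.0, None, out=image)  # enforce positivity
    return image
```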
Another algorithm tested was comparable to Maximum Emptiness in that it
produced a physically reasonable model, but at a lower resolution than MEM.
Direct fit of pixel values to the
data
is subject to the singular nature of the dirty beam. Constraining the pixel
values to be non-negative did regularize the problem, but still produced
a very non-physical model.
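The non-negative direct fit can be sketched with an explicit (dense) beam
matrix and a standard non-negative least-squares solver; this is practical
only for small images, and the helper names are ours.

```python
import numpy as np
from scipy.optimize import nnls

def beam_matrix(psf, shape):
    """Dense matrix B such that B @ model.ravel() is the dirty-beam
    convolution of the model.  O(N^2) in memory: small images only."""
    ny, nx = shape
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    B = np.zeros((ny * nx, ny * nx))
    for y in range(ny):
        for x in range(nx):
            col = np.zeros(shape)
            for j in range(psf.shape[0]):      # place the beam, peak at (y, x)
                for i in range(psf.shape[1]):
                    yy, xx = y + j - cy, x + i - cx
                    if 0 <= yy < ny and 0 <= xx < nx:
                        col[yy, xx] = psf[j, i]
            B[:, y * nx + x] = col.ravel()
    return B

def direct_fit_nonneg(dirty, psf):
    """Non-negative least-squares fit of pixel values to the dirty map."""
    B = beam_matrix(psf, dirty.shape)
    model, _ = nnls(B, dirty.ravel())
    return model.reshape(dirty.shape)
```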
Direct inversion of the deconvolution equation can also be regularized with
a singular value decomposition (SVD) of the beam matrix. With a heuristic
choice of cutoff in singular value, a model was obtained that resembled the
MEM model, but it still contained negative flux at an unacceptably high
level.
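A sketch of the truncated-SVD inversion, reusing the beam_matrix helper above;
the relative cutoff is the heuristic knob referred to in the text, and the
value given here is arbitrary.

```python
import numpy as np

def svd_deconvolve(dirty, B, rel_cutoff=1e-2):
    """Direct inversion of dirty = B @ model, regularized by truncating
    the SVD of the beam matrix B (e.g. from beam_matrix above)."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    keep = s > rel_cutoff * s[0]      # discard small singular values
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    # Apply the truncated pseudoinverse: V diag(1/s) U^T d.
    model = Vt.T @ (s_inv * (U.T @ dirty.ravel()))
    return model.reshape(dirty.shape)
```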
There are a number of consistency checks that can be performed on the
observational data itself:
-
The result must not depend grossly on minor perturbations of incidental
imaging parameters. In particular, check the image as a function of cell
size over reasonable ranges of oversampling. The result must not vary too
sharply (or oscillate!) with iteration, though many algorithms have a long,
slow convergent tail. If the algorithm converges, the result must not
depend too strongly on the details of the support constraints.
-
Cross validate different subsets of the data. In this case, different
epochs were imaged separately and compared. Also, random
partitions of the composite data set were compared, which smoothed the
calibration differences between epochs.
-
Sum the deconvolutions from different subsets of the data. If the sum is
similar to the global deconvolution of the combined data, algorithm
non-linearity is not the driving force of the deconvolution and the solution
is presumably more robust than otherwise.
-
Understand the noise insofar as is possible. In this case, there is a problem
with confusion, as diagnosed by rejecting short-spacing visibilities and
measuring the change in off-source RMS noise (a sketch of this check follows
the list). Similarly, median filtering of the data did not reduce the RMS
noise by the factor expected if the noise were purely thermal. Fortunately,
this confusion noise appeared to be benign for our purposes.
-
Compare the results carefully with other images at other wavelengths. Here
we present our image superimposed on the [O III] image of Jakobsen
et al. (1991).
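A sketch of the confusion diagnostic from the fourth check: compare the
off-source RMS of maps made with and without the short-spacing visibilities.
The function names and the interpretation in the comments are ours.

```python
import numpy as np

def off_source_rms(image, source_mask):
    """RMS over pixels away from the source (source_mask is True on source)."""
    return np.sqrt(np.mean(image[~source_mask] ** 2))

def confusion_check(image_all, image_no_short, source_mask):
    """Compare off-source RMS with and without short-spacing visibilities.
    For purely thermal noise, dropping data should *raise* the RMS
    (roughly as 1/sqrt(N)); if the RMS instead falls when short spacings
    are rejected, extended background emission (confusion) is indicated."""
    rms_all = off_source_rms(image_all, source_mask)
    rms_cut = off_source_rms(image_no_short, source_mask)
    return rms_all, rms_cut, rms_cut / rms_all
```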