Deconvolution

The dirty maps were made with uniform weighting. The slight increase in resolution from superuniform weighting was not deemed important enough to offset the increase in noise and the vulnerability to striping. A cell size of 0."1 was selected as adequate to oversample the minimum effective resolution, and most maps were of size . Iterative algorithms were started from a flat default, as more sophisticated choices of starting point made little difference. The final maps were verified to be insensitive to the precise values of these and other parameters.
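
As an illustration of the weighting choice, the following is a minimal sketch of uniform density weighting of visibilities, assuming numpy and illustrative arrays u, v of visibility coordinates (none of these names are from the paper); superuniform weighting differs only in the size of the region over which the sample density is counted.

    import numpy as np

    def uniform_weights(u, v, cell_uv, grid_size):
        """Uniform weighting: down-weight each visibility by the number
        of samples falling in its uv grid cell (the local density).
        Assumes |u|, |v| fit on the grid."""
        iu = np.floor(u / cell_uv).astype(int) + grid_size // 2
        iv = np.floor(v / cell_uv).astype(int) + grid_size // 2
        counts = np.zeros((grid_size, grid_size))
        np.add.at(counts, (iu, iv), 1.0)    # occupancy of each uv cell
        return 1.0 / counts[iu, iv]         # w = 1/density (natural: w = 1)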

The deconvolution algorithms described below were used for the initial mapping. Some algorithms diverged unless a tight support constraint was used to confine the emission to the immediate region of the supernova. The more robust algorithms were used to verify that this was a reasonable assumption. All algorithms appeared to work better with tight support constraints, although occasionally artifacts were produced at the edge of the box. All algorithms produced similar results at the diffraction limit.

All of these algorithms operate in the image plane, and all produce a model of the sky measured in Jy/pixel. This model is convolved with a restoring beam representing the desired final resolution, and the residuals (the dirty map minus the model convolved with the dirty beam) are added back to produce the final image normally interpreted by the astronomer. For the purposes of the superresolution study, it is often more productive to examine the sky model directly.
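
A minimal sketch of this restoration step, assuming numpy/scipy and same-sized map and beam arrays (the names are illustrative, not from the paper):

    import numpy as np
    from scipy.signal import fftconvolve

    def restore(model, dirty, dirty_beam, clean_beam):
        """Smooth the Jy/pixel model to the target resolution and add
        back the residuals left unexplained by the model."""
        residual = dirty - fftconvolve(model, dirty_beam, mode='same')
        return fftconvolve(model, clean_beam, mode='same') + residual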

Maximum Entropy

(Cornwell & Evans algorithm.) This algorithm produced what appears to be the best reconstruction found. It agrees with the other algorithms on the major features, produces a plausible physical image, performs well on simulated data with a high degree of superresolution, and is robust to support constraints. For this project it is the algorithm of choice.
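
The Cornwell & Evans solver is a Newton-type method; the toy gradient step below only illustrates the objective it optimizes (entropy relative to a default image, balanced against the data misfit by a multiplier alpha). The names and the simple exponential update are assumptions for illustration, not the published scheme.

    import numpy as np
    from scipy.signal import fftconvolve

    def mem_step(model, dirty, beam, default, alpha, gain=0.05):
        """One ascent step on J = H - alpha*chi^2, with entropy
        H = -sum I ln(I/default), so dH/dI = -(ln(I/default) + 1)."""
        residual = dirty - fftconvolve(model, beam, mode='same')
        grad = -(np.log(model / default) + 1.0) \
               + 2.0 * alpha * fftconvolve(residual, beam[::-1, ::-1], mode='same')
        return model * np.exp(gain * grad)   # exponential update keeps I > 0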

CLEAN

Both the original Högbom algorithm and a positivity-constrained version were used. While quite acceptable at the diffraction limit, CLEAN is not recommended for superresolution purposes.
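
A sketch of the Högbom loop, assuming an (N, N) dirty map and a (2N-1, 2N-1) dirty beam with unit peak at its centre so every shift stays in bounds; searching the maximum rather than the maximum absolute value gives the positivity-constrained variant mentioned above.

    import numpy as np

    def hogbom_clean(dirty, beam, gain=0.1, niter=200, threshold=0.0):
        """Subtract a scaled, shifted dirty beam at the residual peak;
        the accumulated delta components form the sky model."""
        n = dirty.shape[0]
        res, model = dirty.copy(), np.zeros_like(dirty)
        for _ in range(niter):
            i, j = np.unravel_index(np.argmax(res), res.shape)
            if res[i, j] <= threshold:
                break
            flux = gain * res[i, j]
            model[i, j] += flux
            res -= flux * beam[n - 1 - i:2 * n - 1 - i,
                               n - 1 - j:2 * n - 1 - j]
        return model, res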

Maximum Emptiness

This algorithm behaved similarly to MEM, but the emptiness constraint appears somewhat weaker than the entropy term. It converged slowly, and the degree of superresolution achieved was less than with MEM.
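
A sketch of the gradient such a solver descends, assuming the usual ln cosh form of the emptiness penalty (the scale a sets how aggressively pixels are pushed toward zero; all names here are illustrative):

    import numpy as np
    from scipy.signal import fftconvolve

    def emptiness_grad(model, dirty, beam, a, alpha):
        """Gradient of sum ln cosh(I/a) + alpha*chi^2; the ln cosh term
        acts as a smoothed L1 penalty favouring empty (zero) pixels."""
        residual = dirty - fftconvolve(model, beam, mode='same')
        return (np.tanh(model / a) / a
                - 2.0 * alpha * fftconvolve(residual, beam[::-1, ::-1], mode='same'))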

Gerchberg-Saxton-Papoulis

GSP produced a final result very similar to that of MEM, but diverged without a support constraint.
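
A sketch of one GSP cycle as alternating projections, assuming FFT-gridded data: enforce the measured visibilities on the sampled Fourier cells, then positivity and support in the image plane (the boolean masks sampled and support are illustrative).

    import numpy as np

    def gsp_iteration(image, vis_measured, sampled, support):
        """Data constraint in the Fourier plane, then image-plane
        constraints; diverges in practice without a tight support."""
        vis = np.fft.fft2(image)
        vis[sampled] = vis_measured[sampled]   # keep measured visibilities
        image = np.fft.ifft2(vis).real
        image[~support] = 0.0                  # support constraint
        return np.clip(image, 0.0, None)       # positivity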

Richardson-Lucy

This algorithm was comparable to Maximum Emptiness in that it produced a physically reasonable model, but at a lower resolution than MEM.
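
The standard Richardson-Lucy update is multiplicative and therefore preserves positivity; a minimal sketch follows (applying it to interferometer maps, which contain negative sidelobe residuals, requires care beyond this toy version):

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(dirty, psf, niter=50):
        """model <- model * [(data / (model conv psf)) corr psf];
        psf is assumed normalised to unit sum."""
        model = np.full_like(dirty, max(dirty.mean(), 1e-6))  # flat start
        psf_mirror = psf[::-1, ::-1]
        for _ in range(niter):
            conv = fftconvolve(model, psf, mode='same')
            ratio = dirty / np.maximum(conv, 1e-12)
            model *= fftconvolve(ratio, psf_mirror, mode='same')
        return model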

Non-Negative Least Squares

Direct fitting of pixel values to the data is hampered by the singular nature of the dirty beam. Constraining the pixel values to be non-negative did regularize the problem, but still produced a very unphysical model.
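
A sketch using scipy's NNLS solver, with the dirty beam written as an explicit matrix B whose column j holds the beam centred on pixel j; this is only feasible for small maps, since B has N^2 x N^2 elements.

    import numpy as np
    from scipy.optimize import nnls

    def nnls_deconvolve(dirty, beam_matrix):
        """Solve min ||B x - d||_2 subject to x >= 0; the positivity
        bound is what regularises the otherwise singular system."""
        x, _ = nnls(beam_matrix, dirty.ravel())
        return x.reshape(dirty.shape)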

Direct Algebraic Inversion

Direct inversion of the deconvolution equation can also be regularized with a singular value decomposition (SVD) of the beam matrix. With a heuristically chosen cutoff in singular value, a model was obtained that resembled the MEM model, but it still contained negative flux at an unacceptably high level.
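
A sketch of the SVD-regularized inversion, with the cutoff expressed as a fraction of the largest singular value (that fraction is the heuristic choice mentioned above):

    import numpy as np

    def svd_deconvolve(dirty, beam_matrix, cutoff=0.1):
        """Truncated pseudo-inverse: discard singular values below
        cutoff * s_max, then back-substitute."""
        U, s, Vt = np.linalg.svd(beam_matrix)
        keep = s > cutoff * s[0]
        x = Vt[keep].T @ ((U[:, keep].T @ dirty.ravel()) / s[keep])
        return x.reshape(dirty.shape)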

There are a number of consistency checks that can be performed on the observational data itself.


