Deconvolution

At the risk of offending by over-simplifying, the deconvolution algorithms presented here can be classified in the following way:

Classic inverse filter.
The inverse filter doesn't work, of course, but one can sneak up on it by using various iterative schemes that eventually converge (or, more appropriately, diverge) to the inverse filter. Sneaking up is not quite the right phrase; perhaps knowing when to slam on the brakes expresses the danger of this approach more accurately. One subtle point is that if the cue to hit the brakes comes from the data, then the method is no longer linear and hence loses most of its attraction (Coggins). (A sketch of such an early-stopped iteration is given after this list.)
Richardson-Lucy algorithm.
RL is known to be a maximum-likelihood estimator for photon noise. What else do we know about it? Well, it also needs the brakes slammed on at the right moment (White). Also, we don't actually understand the nature of the extra information used by RL to generate a unique solution. This is very similar to the situation with the CLEAN algorithm used in radio interferometry. CLEAN is quite mysterious: it seems to work well, but no one can describe the properties of the solution it generates. The same is true of RL. Both RL and CLEAN solve maximum-likelihood equations via an iterative scheme that must impose some bias towards a particular type of solution. The problem is that we cannot quantify that bias or say what the preferred type of solution is. (The bare RL iteration is sketched after this list.)
Regularized deconvolution.
It's been my experience that regularization is loved by mathematicians and Bayesian methods are loved by physicists. This split is somewhat meaningless since the equations to be solved are often much the same and therefore the results are often similar. (The standard identification between the two is written out after this list.)
Bayesian methods.
There were two interesting developments in Bayesian methods presented at this meeting. First, we saw the marriage of Bayesian priors with photon noise to yield an improved RL algorithm (thus addressing my objection to RL above) (Molina); one way a prior can enter the RL update is sketched after this list. Second, we heard about the Pixon (Puetter). The idea behind the Pixon is quite simple and beautiful: since there is nothing magical about the basis functions used to express an image, one should allow them to be dictated by the data. This has the self-evident quality of many smart ideas. However, I think we need to know more about how exactly a Pixon basis is to be chosen. This seems to be another place where heuristics can creep in, which is ironic if one considers a goal of Bayesian methods to be the elimination of ad hoc tricks.
Blind deconvolution.
I think that, for me, one of the most interesting and satisfying topics at this conference was blind deconvolution: recovery of the PSF as well as an image of the sky (Christou, Schulz). This is personally satisfying because I have worked on more or less the same problem in radio interferometry. There we know some constraints on the PSF (mainly its known spatial frequency content, and the requirement that the closure relations be obeyed), and we can add sufficient prior information about the sky (e.g., positivity and emptiness) to allow recovery of both the PSF and the sky brightness. In the application to the HST, blind deconvolution allows recovery of the PSF in cases where the calculated PSF is insufficiently accurate (an illustrative alternating-update scheme is sketched after this list). There remain many issues to be settled, though.
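
To make the first item above concrete, here is a minimal illustrative sketch of what "sneaking up" on the inverse filter can look like: a Landweber-type iteration whose fixed point is the inverse-filter solution, with the iteration count acting as the brakes. The sketch is not anything presented at the meeting; the function name, the step size, and the assumption that the PSF array has the same shape as the image and is wrapped so that its centre lies at pixel (0, 0) are illustrative choices.

import numpy as np

def landweber_deconvolve(image, psf, n_iter=50, step=1.0):
    # Iterate  x <- x + step * H^T (d - H x)  in the Fourier domain, where H
    # is circular convolution with the PSF.  Run forever and this tends to the
    # inverse filter; stopping early keeps the noise amplification in check.
    shape = image.shape
    psf_ft = np.fft.rfft2(psf, s=shape)    # PSF assumed unit-sum, wrapped to origin
    data_ft = np.fft.rfft2(image)
    est_ft = np.zeros_like(data_ft)
    for _ in range(n_iter):
        residual_ft = data_ft - psf_ft * est_ft
        est_ft = est_ft + step * np.conj(psf_ft) * residual_ft
    return np.fft.irfft2(est_ft, s=shape)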
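
The bare Richardson-Lucy iteration itself, for comparison, looks like the following sketch (again the choice of stopping point is left entirely to the user; the flat starting image and the unit-sum PSF are assumptions of the sketch):

import numpy as np

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    # Multiplicative RL update  x <- x * H^T( d / (H x) ), which preserves
    # positivity and, run to convergence, maximizes the Poisson likelihood.
    shape = image.shape
    psf_ft = np.fft.rfft2(psf, s=shape)      # PSF assumed to sum to one
    est = np.full(shape, image.mean())       # flat, strictly positive start
    for _ in range(n_iter):
        blurred = np.fft.irfft2(np.fft.rfft2(est) * psf_ft, s=shape)
        ratio = image / np.maximum(blurred, eps)
        # conj(psf_ft) implements correlation, i.e. convolution with the
        # flipped PSF (the H^T above)
        est = est * np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(psf_ft), s=shape)
    return est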
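
The identification behind the remark on regularized versus Bayesian methods is, in the simplest Gaussian-noise case, just that a regularizer is the negative logarithm of a prior. With H the blurring operator, d the data, \sigma the noise level, R a regularizing functional and \lambda its weight:

\[
\hat{x} \;=\; \arg\min_x \left[ \frac{\|Hx - d\|^2}{2\sigma^2} + \lambda R(x) \right]
        \;=\; \arg\max_x \; p(d \,|\, x)\, p(x),
\qquad p(x) \propto e^{-\lambda R(x)} .
\]

Both camps end up solving the same optimization problem; the differences lie mainly in how \lambda and R are justified.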
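
As for the marriage of Bayesian priors with photon noise, one standard way a prior can be folded into the RL update is through a "one-step-late" correction in the denominator. The sketch below is only an illustration of that general idea, not the particular scheme presented by Molina; the quadratic smoothness prior and the weight lam are purely illustrative choices.

import numpy as np

def map_rl_step(est, image, psf_ft, lam=0.01, eps=1e-12):
    # One RL-style update with the prior entering "one step late":
    #   x <- x * H^T( d / (H x) ) / (1 + lam * dR/dx)
    # Here R(x) = sum |grad x|^2, whose gradient is (up to a constant factor)
    # minus the discrete Laplacian of the current estimate.
    shape = image.shape
    blurred = np.fft.irfft2(np.fft.rfft2(est) * psf_ft, s=shape)
    ratio = image / np.maximum(blurred, eps)
    correction = np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(psf_ft), s=shape)
    laplacian = (np.roll(est, 1, 0) + np.roll(est, -1, 0) +
                 np.roll(est, 1, 1) + np.roll(est, -1, 1) - 4.0 * est)
    return est * correction / np.maximum(1.0 + lam * (-laplacian), eps)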
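
Finally, blind deconvolution suggests an alternating scheme of the following flavour: hold the PSF fixed and update the sky with RL-style steps, then hold the sky fixed and update the PSF the same way, re-imposing positivity and unit normalization on the PSF at each pass. This is only an illustration of the idea, not the algorithm of Christou or Schulz, and it carries none of the interferometric closure constraints mentioned above; image and psf_guess are assumed to be arrays of the same shape, with the PSF wrapped to the origin.

import numpy as np

def _rl_update(est, other_ft, image, eps=1e-12):
    # One multiplicative RL correction of `est`, treating `other_ft` (the FFT
    # of the other, currently fixed, unknown) as the convolution kernel.
    shape = image.shape
    blurred = np.fft.irfft2(np.fft.rfft2(est, s=shape) * other_ft, s=shape)
    ratio = image / np.maximum(blurred, eps)
    return est * np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(other_ft), s=shape)

def blind_deconvolve(image, psf_guess, n_outer=10, n_inner=5):
    shape = image.shape
    sky = np.full(shape, image.mean())           # flat, positive starting sky
    psf = np.clip(psf_guess, 0.0, None)
    psf = psf / psf.sum()
    for _ in range(n_outer):
        psf_ft = np.fft.rfft2(psf, s=shape)
        for _ in range(n_inner):                 # sky steps, PSF held fixed
            sky = _rl_update(sky, psf_ft, image)
        sky_ft = np.fft.rfft2(sky, s=shape)
        for _ in range(n_inner):                 # PSF steps, sky held fixed
            psf = _rl_update(psf, sky_ft, image)
        psf = np.clip(psf, 0.0, None)            # positivity ...
        psf = psf / psf.sum()                    # ... and unit normalization
    return sky, psf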


