
Model Updating

The entropy expression generally accepted in image restoration is (in 1-D notation)

\[
  S = -\sum_i f_i \ln\frac{f_i}{m_i}, \qquad (1)
\]

where $f_i$ represents the image to be restored and $m_i$ is the model, or prior estimate, of the image.
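For concreteness, the entropy of Eq. (1) can be evaluated with a few lines of NumPy; the function name and array conventions below are illustrative and not part of the original program.

import numpy as np

def entropy(f, m):
    """Entropy S = -sum_i f_i * ln(f_i / m_i) of image f relative to model m.

    f and m are 1-D arrays of strictly positive pixel values (1-D notation,
    as in Eq. 1).
    """
    f = np.asarray(f, dtype=float)
    m = np.asarray(m, dtype=float)
    return -np.sum(f * np.log(f / m))

Note that entropy(m, m) evaluates to zero, since every logarithm vanishes.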

The model in the entropy expression plays a special role in MEM image restoration. In most cases, e.g., when the data constraint below is used, the MEM solution is unique for a given model: once the model is specified, the solution is determined. However, any change of the model changes the solution, even if the data constraints remain the same. This will be shown explicitly in the next section.

Let us consider the following specific formulation of the MEM algorithm. The entropy of an image, defined as in Eq. (1), is maximized subject to the data-fitting constraint in the image domain,

\[
  \chi^2 = \frac{1}{\sigma^2}\sum_j \Bigl(d_j - \sum_i P_{ji} f_i\Bigr)^2 = \chi^2_c,
\]

and the total power constraint,

\[
  F = \sum_i f_i = F_{ob},
\]

where $d_j$ represents the data, i.e., the degraded image, $\sigma^2$ the noise variance, and $P_{ji}$ the PSF (point spread function) combined with the ICF (intrinsic correlation function); the subscript $c$ stands for critical value and $ob$ for observational value.
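Under the same assumed notation, the two constraint statistics could be computed as follows; here P is the combined PSF/ICF matrix acting on the 1-D image, and all names are illustrative only.

import numpy as np

def chi2(f, d, P, sigma2):
    """Data-fitting statistic chi^2 = sum_j (d_j - (P f)_j)^2 / sigma^2."""
    r = d - P @ f          # residual between the data and the blurred trial image
    return np.sum(r * r) / sigma2

def total_power(f):
    """Total power F = sum_i f_i."""
    return np.sum(f)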

The method is implemented by forming the objective function

\[
  J = S - \alpha\,\chi^2 - \beta\,F
\]

and using the Newton method to find ME solutions, i.e., to maximize $J$ for particular values of the Lagrange multipliers $\alpha$ and $\beta$ (inner iteration). The desired values $\chi^2_c$ and $F_{ob}$ are achieved by choosing appropriate $\alpha$ and $\beta$ (outer iteration). In what follows we concentrate on $\alpha$ and pay little attention to $\beta$, since the total power constraint is of little interest in this problem.
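Combining the helper functions sketched above, the objective maximized in each inner iteration can be assembled in one line; the sign convention for the multipliers is an assumption, chosen to match the gradient and Hessian in Eqs. (2) and (3) below.

def objective(f, m, d, P, sigma2, alpha, beta):
    """Objective J = S - alpha * chi^2 - beta * F, maximized for fixed alpha and beta."""
    return entropy(f, m) - alpha * chi2(f, d, P, sigma2) - beta * total_power(f)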

The iterates are updated according to

\[
  f^{(n+1)} = f^{(n)} + \eta\,\Delta f,
\]

where $\eta$ is an optimal step length determined by a one-dimensional search. In order to calculate the change $\Delta f$, we need to solve a set of linear equations of very large size,

\[
  \sum_k \frac{\partial^2 J}{\partial f_i\,\partial f_k}\,\Delta f_k = -\frac{\partial J}{\partial f_i},
\]

where the gradient and matrix elements are calculated from $f^{(n)}$, i.e., the current $f$, and are given, respectively, by

\[
  \frac{\partial J}{\partial f_i} = -\Bigl(\ln\frac{f_i}{m_i} + 1\Bigr)
    + \frac{2\alpha}{\sigma^2}\sum_j P_{ji}\Bigl(d_j - \sum_k P_{jk} f_k\Bigr) - \beta \qquad (2)
\]

and

\[
  \frac{\partial^2 J}{\partial f_i\,\partial f_k} = -\frac{\delta_{ik}}{f_i}
    - \frac{2\alpha}{\sigma^2}\sum_j P_{ji} P_{jk}. \qquad (3)
\]

This is a time-consuming operation.
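A direct NumPy transcription of Eqs. (2) and (3) makes the cost visible: the Hessian couples every pair of pixels through the PSF, so forming and solving the full system scales badly with image size. The function names are again illustrative.

import numpy as np

def gradient(f, m, d, P, sigma2, alpha, beta):
    """Gradient dJ/df_i of Eq. (2): entropy term + data term - beta."""
    return (-(np.log(f / m) + 1.0)
            + (2.0 * alpha / sigma2) * (P.T @ (d - P @ f))
            - beta)

def hessian(f, P, sigma2, alpha):
    """Full Hessian of Eq. (3): -delta_ik / f_i - (2 alpha / sigma^2) sum_j P_ji P_jk."""
    return -np.diag(1.0 / f) - (2.0 * alpha / sigma2) * (P.T @ P)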

In the underlying algorithm for the program, the non-diagonal elements of the PSF matrix are simply ignored in calculating the Hessian, under the assumption that the diagonal elements dominate, so that the matrix becomes a diagonal one:

\[
  \frac{\partial^2 J}{\partial f_i\,\partial f_k} \approx -\,\delta_{ik}\, h_i,
\]

where

\[
  h_i = \frac{1}{f_i} + \frac{2\alpha}{\sigma^2}\sum_j P_{ji}^2.
\]

In this way the solution of the equations becomes a simple operation, specifically

\[
  \Delta f_i = \frac{1}{h_i}\,\frac{\partial J}{\partial f_i}.
\]
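With the off-diagonal terms dropped, one inner-iteration Newton step reduces to an element-wise division, as in the sketch below (reusing gradient from above). The one-dimensional step search is replaced here by a fixed damping factor and a positivity clip, which is a simplification rather than what the actual program does.

import numpy as np

def inner_step(f, m, d, P, sigma2, alpha, beta):
    """One Newton step with the diagonal Hessian approximation.

    Solving H df = -g with H_ii = -h_i gives df_i = g_i / h_i.
    """
    g = gradient(f, m, d, P, sigma2, alpha, beta)
    h = 1.0 / f + (2.0 * alpha / sigma2) * np.sum(P * P, axis=0)  # diagonal h_i only
    df = g / h
    # crude stand-in for the one-dimensional step search: damp and keep f positive
    return np.clip(f + 0.5 * df, 1e-12, None)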

The scheme described above is attractive because of its simplicity. However, it has the following drawback. At the beginning of the iteration, $\alpha$ is zero and $f = m$; normally the model is a flat image. In order to reduce the value of $\chi^2$, $\alpha$ gradually steps up in the course of the iteration, as depicted schematically by the lines located in the upper part of Fig. 1. It is therefore fairly easy to find the ME solutions for the particular $\alpha$'s at the early stage of the iteration, since the ignored non-diagonal elements in Eq. (3) are relatively insignificant. However, as the iteration advances, it becomes more and more difficult to find the ME solutions for the particular $\alpha$'s, because the ignored non-diagonal elements (proportional to $\alpha$) become larger and larger relative to the diagonal elements (which depend partly on $1/f_i$). (Of course, this kind of difficulty will be encountered by any algorithm: generally speaking, convergence slows down at the later stages of iteration.)

From the above analysis we see that if the values of $\alpha$ can be kept small, the convergence will be faster. Can we do this? To achieve the desired value $\chi^2_c$, we need to increase $\alpha$ to a certain value, and $\alpha$ and $\chi^2$ are in a one-to-one relationship. At first glance, therefore, it seems to be impossible.

The key to the new technique is to update the model during the iteration. At the beginning of the iteration, as usual, $\alpha = 0$ and $f = m$. $\alpha$ is increased to some value to start the first outer iteration, and an ME solution is obtained at the end. This ME solution is then used as the model to start the second outer iteration. Accordingly, $\alpha$ is again increased to some value that is different from zero but not far from the value used in the first outer iteration. The later outer iterations proceed in the same manner. In this way $\alpha$ jumps up and down, rather than progressively stepping up, in the course of the iterations, as depicted by the thick lines located in the lower part of Fig. 1. As a consequence, the values of $\alpha$ are kept small, a better approximation is achieved in the solution of the equations, and the convergence is therefore speeded up.

In the program cmem, a parameter called m_update is used to control the frequency of model updating: the model is updated every m_update'th outer iteration. Setting m_update to 1 realizes the model-updating procedure described above and achieves the most rapid convergence.
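A schematic outer loop controlled by an m_update-style parameter might look like the sketch below, reusing the helpers defined earlier. Only the parameter name m_update and the idea of replacing the model with the current ME solution come from the text; the way $\alpha$ is adjusted toward $\chi^2_c$, the flat starting model, the fixed iteration counts, and the neglect of $\beta$ are simplifying assumptions, not the actual logic of cmem.

import numpy as np

def mem_restore(d, P, sigma2, chi2_c, m_update=1, n_outer=20, n_inner=30):
    """Toy MEM driver: the outer loop adjusts alpha, the inner loop runs
    diagonal-Hessian Newton steps, and the model is replaced by the current
    solution every m_update'th outer iteration."""
    n = P.shape[1]
    m = np.full(n, d.sum() / n)      # flat model carrying the observed total power
    f = m.copy()
    alpha, beta = 0.0, 0.0           # start as usual: alpha = 0 and f = m; beta ignored
    for outer in range(1, n_outer + 1):
        # illustrative outer update: raise alpha while chi^2 exceeds its critical value
        alpha += 0.1 * max(chi2(f, d, P, sigma2) - chi2_c, 0.0) / chi2_c
        for _ in range(n_inner):     # inner iterations: maximize J for this alpha
            f = inner_step(f, m, d, P, sigma2, alpha, beta)
        if outer % m_update == 0:    # model updating every m_update'th outer iteration
            m = f.copy()
            alpha = 0.0              # alpha is raised again from a small value next pass
    return f

With m_update set to 1 the model tracks the solution and alpha never has to grow large, which is the behaviour argued for above; setting m_update larger than n_outer leaves the model fixed and recovers the original scheme.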


