
NICMOS Pipeline

The NICMOS calibration task is divided into two stages: calnica, which is used for every individual exposure, and calnicb, which is used after calnica on those exposures which comprise an association.
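The two-stage organization can be pictured with a small driver loop: every exposure goes through the first stage on its own, and the results for exposures that belong to an association are then handed to the second stage together. The sketch below is purely illustrative; run_calnica, run_calnicb, and the file names are invented stand-ins, not the actual task interfaces.

# Illustrative sketch only: run_calnica and run_calnicb are invented
# stand-ins for the real STSDAS tasks, and the file names are made up.

def run_calnica(raw_file):
    """Stage 1 (hypothetical wrapper): calibrate one exposure."""
    cal_file = raw_file.replace("_raw", "_cal")
    print("calnica:", raw_file, "->", cal_file)
    return cal_file

def run_calnicb(calibrated_members):
    """Stage 2 (hypothetical wrapper): merge an associated set."""
    print("calnicb: combining", len(calibrated_members), "members into a mosaic")
    return "mosaic.fits"

exposures = ["exp1_raw.fits", "exp2_raw.fits", "exp3_raw.fits"]  # an association
calibrated = [run_calnica(f) for f in exposures]   # every exposure individually
run_calnicb(calibrated)                            # then the association as a whole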

Static Calibrations: calnica

The first calibration stage, calnica (Figure 13.1), performs those calibrations that can be applied to a single exposure using the configuration information from its telemetry and reference files from the Calibration Data Base. These reference files are derived from the calibration program (see Chapter 15) and typically change on time scales of months. This stage is analogous to the WFPC2 calibration process (calwp2). Calnica performs the steps shown conceptually in Figure 13.1 and in detail in Figure 13.2.

Figure 13.1: Conceptual calnica Pipeline

Observers will receive both the uncalibrated (raw) data and the processed data for each exposure. For MULTIACCUM observations, partially calibrated data (excluding the cosmic ray and saturation corrections) will be generated for each readout, in addition to a final single image.
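For MULTIACCUM exposures the final single image is, in essence, a count-rate image derived from the sequence of non-destructive readouts. The sketch below shows one simplified way such a slope could be fitted for a single pixel while excising a cosmic-ray jump; it assumes a simple median-based jump test and uniform weighting, and is not the algorithm calnica actually uses.

import numpy as np

def fit_ramp(counts, times, jump_sigma=5.0):
    """Fit a count rate (slope, counts/s) to one pixel's MULTIACCUM ramp.

    counts : accumulated counts at each non-destructive readout
    times  : exposure time of each readout (seconds)

    Sketch only: a readout-to-readout rate that deviates strongly from the
    median rate is treated as a cosmic-ray jump, its excess counts are
    subtracted from the later readouts, and a single slope is fitted.
    The real pipeline uses the full noise model and data-quality flags.
    """
    counts = np.asarray(counts, dtype=float)
    times = np.asarray(times, dtype=float)

    rates = np.diff(counts) / np.diff(times)
    med = np.median(rates)
    mad = np.median(np.abs(rates - med))
    limit = jump_sigma * max(1.4826 * mad, 0.01 * abs(med))

    corrected = counts.copy()
    for i, rate in enumerate(rates):
        if abs(rate - med) > limit:          # cosmic-ray-like jump
            excess = (counts[i + 1] - counts[i]) - med * (times[i + 1] - times[i])
            corrected[i + 1:] -= excess      # remove the jump from later reads

    slope, _ = np.polyfit(times, corrected, 1)
    return slope

# Example: a 10 counts/s ramp with a cosmic-ray hit after the 4th readout.
t = np.arange(1, 9) * 32.0
c = 10.0 * t
c[4:] += 500.0
print(round(fit_ramp(c, t), 2))   # -> 10.0 once the jump is excised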

To recalibrate NICMOS data, you will need the calnica software (soon to be included in the STSDAS distribution) and the necessary calibration reference files (available from the HST Data Archive using StarView).
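The reference files that calnica will use are named by keywords in the header of the raw data file, so those keywords tell you what to request from the Archive. The sketch below reads them with astropy.io.fits (a modern convenience, not part of the software described here); the keyword names listed are assumptions for illustration and should be checked against your own headers.

from astropy.io import fits

# Keyword names below are assumptions for illustration; inspect the primary
# header of your own raw file to see which are actually present.
REF_KEYWORDS = ["MASKFILE", "NOISFILE", "DARKFILE", "NLINFILE",
                "FLATFILE", "PHOTTAB"]

def list_reference_files(raw_file):
    """Print the calibration reference files named in a raw NICMOS header."""
    with fits.open(raw_file) as hdul:
        header = hdul[0].header
        for key in REF_KEYWORDS:
            print(f"{key:9s} = {header.get(key, '(not present)')}")

# list_reference_files("example_raw.fits")   # hypothetical file name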

The data processing flow chart for normal imaging and spectroscopic images is shown in Figure 13.2.

Figure 13.2: Calibration Steps of the calnica Pipeline
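Figure 13.2 enumerates the individual calibration steps. As a rough illustration of the kind of single-exposure arithmetic involved, the sketch below applies two representative corrections, dark subtraction and flat fielding; it is a simplified stand-in and does not reproduce the actual calnica step sequence or its reference-file conventions.

import numpy as np

def basic_calibrate(raw, dark, flat, exptime):
    """Apply two representative single-exposure corrections.

    raw     : raw counts for one readout (2-D array)
    dark    : dark-current reference image in counts/s
    flat    : normalized flat-field reference image
    exptime : exposure time in seconds

    Simplified illustration only; the real pipeline also handles bias,
    nonlinearity, bad-pixel masks, noise, and photometric calibration.
    """
    dark_subtracted = raw - dark * exptime      # remove dark current
    flat_fielded = dark_subtracted / flat       # correct pixel-to-pixel response
    return flat_fielded / exptime               # convert to count rate

# Tiny synthetic example.
raw = np.full((3, 3), 1000.0)
dark = np.full((3, 3), 0.5)          # counts/s of dark current
flat = np.full((3, 3), 1.0)
flat[1, 1] = 0.8                     # a less-sensitive pixel
print(basic_calibrate(raw, dark, flat, exptime=100.0))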

Contemporaneous Observations: calnicb

While it has previously been possible to obtain multiple exposures from a single proposal logsheet line (e.g., the WFPC2 CR-SPLIT and NEXP=n constructs), this capability has been significantly expanded to support the new requirements of the second-generation science instruments. Typical examples include the removal of cosmic rays, the construction of a mosaic image, and the subtraction of the sky background from a sequence of on-target and off-target observations. These observations are distinguished by the fact that their calibration and processing depend upon other observations obtained at the same time.

The calnicb part of the pipeline carries out the calibration and merging of associated data frames, each of which has first been processed by calnica. In the case shown in Figure 13.3, the associated set has 3 individual datasets that are combined into one merged and calibrated dataset.

Figure 13.3: Conceptual calnicb Pipeline
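On disk, an associated set is described by a small FITS table listing the member exposures and their roles. The sketch below reads such a table with astropy; the column names MEMNAME, MEMTYPE, and MEMPRSNT follow the usual HST association-table convention but are stated here as an assumption rather than a guarantee.

from astropy.io import fits

def list_members(asn_file):
    """Print the members of an association table."""
    with fits.open(asn_file) as hdul:
        table = hdul[1].data          # member list lives in the first extension
        for row in table:
            # Column names assumed: MEMNAME (rootname), MEMTYPE (role),
            # MEMPRSNT (whether the member dataset is present).
            print(row["MEMNAME"], row["MEMTYPE"], row["MEMPRSNT"])

# list_members("example_asn.fits")   # hypothetical association file name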

We refer to these sets of exposures as associations. The calnicb task operates on an entire association and produces one or more products from that set (Figure 13.3).

In the case of dither patterns, calnicb reads the commanded telescope offsets from the headers of the individual exposure files. It then identifies sources in the images and, using the telescope pointing information from the headers as an initial guess, determines the actual pointings. It then combines the images into a final mosaic, rejecting from the output any cosmic rays that were not detected when the individual exposures were processed by calnica. The calnicb code uses all of the data quality information generated by calnica to avoid propagating identified cosmic rays, bad pixels, or saturated pixels into the output mosaic.

In the case of chopped images, if multiple images were obtained at each chop position, calnicb generates and outputs a mosaic for each background position. It then combines the background images to estimate the background at the target position, removes this background from the mosaic generated for the target position, and outputs the result as the final, background-subtracted mosaic of the target.

When calnicb calculates the offsets between images, it starts with the telescope pointing information as its first guess. If the images cannot be matched without adjusting this pointing information by more than some limit, the code reverts to using the pointing information alone, on the assumption that there are no sources bright enough to detect in the individual images, that instrumental artifacts are confusing the offset calculation, or that the telescope lost guide star lock during one or more of the exposures, introducing a potentially very large offset. To date, we have found that all of these situations occur very rarely.
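A much-simplified picture of the combination step for dithered exposures: each calibrated image is placed into the output grid at its pixel offset (taken as an integer here for simplicity), pixels flagged in the data-quality array are skipped, and overlapping good pixels are averaged. This is only a conceptual sketch of the masking-and-averaging logic, not the calnicb algorithm; the real task handles fractional offsets, error propagation, and residual cosmic-ray rejection.

import numpy as np

def combine_dithered(images, dq_arrays, offsets):
    """Average dithered images into a mosaic, skipping flagged pixels.

    images    : list of 2-D arrays (calibrated exposures)
    dq_arrays : matching data-quality arrays (non-zero = bad pixel)
    offsets   : (dy, dx) integer pixel offsets of each image in the mosaic
    """
    ny, nx = images[0].shape
    max_dy = max(dy for dy, dx in offsets)
    max_dx = max(dx for dy, dx in offsets)
    total = np.zeros((ny + max_dy, nx + max_dx))
    nused = np.zeros_like(total)

    for img, dq, (dy, dx) in zip(images, dq_arrays, offsets):
        good = (dq == 0)
        total[dy:dy + ny, dx:dx + nx] += np.where(good, img, 0.0)
        nused[dy:dy + ny, dx:dx + nx] += good

    return np.where(nused > 0, total / np.maximum(nused, 1), 0.0)

# Tiny example: two 4x4 frames offset by two pixels in x, one flagged pixel.
a = np.full((4, 4), 5.0)
b = np.full((4, 4), 7.0)
dq_a = np.zeros((4, 4), dtype=int)
dq_b = np.zeros((4, 4), dtype=int)
dq_b[1, 1] = 1                      # flag one pixel as bad in the second frame
mosaic = combine_dithered([a, b], [dq_a, dq_b], [(0, 0), (0, 2)])
print(mosaic.shape)                 # (4, 6); overlapping good pixels are averaged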



