
6.3 Reasons to Recalibrate

FOC data files retrieved from the Archive were calibrated with the best calibration reference files available at the time the data were taken. You can use StarView, as described in Chapter 1, to determine both the reference files used in the original calibration and the reference files now considered the best for calibrating that observation (see FOC ISR 082 for a complete listing of calibration reference files). However, discrepancies between these lists do not always mean that it is necessary to recalibrate, because the effect on the data might be merely to redistribute the noise slightly rather than to add anything significant to the signal. It is worth emphasizing that there are very few situations where recalibration will significantly improve FOC science data. FOC calibration files do not change frequently, and the changes that do occur tend to be minor.

The five reasons why a user might want to recalibrate FOC data relate to: absolute sensitivity keywords, flatfields, geometric correction files, improved pipeline algorithms, and user calibrations. Each is discussed in turn below.

6.3.1 Absolute Sensitivity Keywords

You can account for changes in sensitivity information without recalibrating the data. Instead you can run tasks in the synphot package using the PHOTMODE relevant to the data. For example, suppose you want to redetermine the absolute sensitivity of exposure x28t0203t, a 256 x 256 f/96 image taken in February 1994, shortly after COSTAR was inserted. At that time, the COSTAR keyword was not correctly inserted into the PHOTMODE string, nor was the format-dependent sensitivity correctly recorded. The PHOTMODE for this particular observation is "FOC F/96 F2ND F1ND F346M", whereas it should read "FOC F/96 COSTAR F2ND F1ND F346M X96N256". Also, the HISTORY records show that the pre-COSTAR DQE file was used (foc_96_dqe_003.tab) rather than the in-flight calibrated foc_96_dqe_004.tab. The resulting inverse sensitivity in the header was

PHOTFLAM =         7.635416E-17 / Inverse Sensitivity

Recalculating using the bandpar task in the synphot package with the correct PHOTMODE and the most recent DQE file gives:

PHOTFLAM =         7.811949E-17

(Note that the URESP parameter that bandpar calculates is identical to PHOTFLAM.) The difference is not large, but it is the net result of a 25% increase, due to the inclusion of the format-dependent sensitivity for the 256 x 256 format, and a 23% decrease, due to the inclusion of the COSTAR mirror reflectivities. The effect of the updated DQE curve is negligible at that wavelength.
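
As a sketch of how such a recalculation can be run from the IRAF cl (the obsmode string is simply the corrected PHOTMODE given above, written in synphot's comma-separated form; the package load path and prompt may differ between STSDAS versions), a single bandpar call reports URESP along with the other photometric parameters of the bandpass:

# load the packages first:  stsdas; hst_calib; synphot
bandpar "foc,f/96,costar,f2nd,f1nd,f346m,x96n256"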

Recalibrating the absolute sensitivity keywords is slightly trickier for pre-COSTAR data, because you must then tell synphot both that COSTAR is not in the beam and that the pre-COSTAR absolute sensitivity file should be used. The first item is simple to deal with: just insert the value "nocostar" in the PHOTMODE string, e.g.:

band(foc,f/96,nocostar,f486n,x96n256)

The second item is more difficult to address: you must edit the HST component table available through the calibration reference file screens in StarView (see "Identifying Calibration Reference Files" on page 1-19). The most straightforward way to proceed is to tcopy the component table to a local working directory, tedit the file so that the COMPNAME foc_96_dqe (on line 605 or so) has the FILENAME crfoccomp$foc_96_dqe_003.tab, and then write the edited file out under a new name. The refdata parameter set can then be edited so that it uses the new component table, which now refers to the pre-COSTAR FOC sensitivity file, and calcphot can subsequently be run with that refdata in effect.
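
A minimal sketch of this sequence in the IRAF cl follows. The source table name crcomp$hstcomp.tab is illustrative (the name and location of the HST component table depend on your installation), mycomp.tab is an arbitrary local name, the refdata parameter name cmptbl should be checked against your version of synphot, and the bb(10000) spectrum is only a placeholder:

tcopy crcomp$hstcomp.tab mycomp.tab    # copy the HST component table to a local file
tedit mycomp.tab                       # set FILENAME for COMPNAME foc_96_dqe to crfoccomp$foc_96_dqe_003.tab
refdata.cmptbl = "mycomp.tab"          # make synphot tasks use the edited component table
calcphot "foc,f/96,nocostar,f486n,x96n256" "bb(10000)" counts    # example spectrum only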

6.3.2 Flatfields

When new flatfields based on new flatfield data are delivered, it might be profitable to recalibrate by reapplying the flatfield. However, the only new flatfield deliveries were those derived in the ultraviolet using the Orion nebula as a target and those constructed in 1990 using internal flatfields taken during the Science Verification phase immediately after the launch of HST. The new flats from March 1995 were basically the same as the old flats except geometrically corrected using the new geometric correction files.

6.3.3 Geometric Correction Files

Delivery of new geometric correction files often lures users into thinking that they need to recalibrate their data using the most up-to-date reference files. In fact, such recalibration is rarely necessary, because its main effect is to improve the astrometric accuracy of the data. The photometric quality barely changes, because the geometric correction algorithm rigorously conserves flux, so the new correction merely redistributes the noise. Users who need the utmost astrometric accuracy (e.g., for proper-motion studies) will want to take advantage of improved geometric calibration files. However, they will still be left with some time-dependent positional uncertainty (see page 7-9) unless they take their own internal flatfields and calibrate out the time dependence of the geometric distortion themselves.

6.3.4 Improved Pipeline Algorithms

The fourth item is a catch-all for those situations where STScI staff are able to improve on the pipeline correction algorithm. Such a situation occurred in November 1991, when the order of processing changed so that geometric correction is performed before flatfielding. A more thorough discussion of this change and the rationale behind it is described in FOC ISR 051. Note that all FOC files in the Archive reflect this change because the entire Archive has been reprocessed in the meantime.

6.3.5 User Calibrations

The last item is for those users who have decided that the pipeline calibration is not sufficient for their needs or has compromised the quality of the data. For example, 8-bit overflows in 512 x 1024 data can often be corrected by adding integral multiples of 256 to the pixel values in the .d0h file until the intensity distribution is correct. You cannot repair the pipeline-corrected data in this way because the geometric correction algorithm smooths the overflowed pixels and mixes them with their neighbors. In that case, you must repair the .d0h file first and then recalibrate.
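
As an illustration of the kind of repair meant here (a sketch only, not a recipe): if the wrapped-around pixels are confined to a known region of the raw frame, you could copy the .d0h file and add a single wrap of 256 counts to that region with ordinary IRAF image arithmetic. The rootname, the pixel section, and the assumption that imarith will write into a section of the existing output image are all illustrative; repeat with 512, 768, and so on if the counts overflowed more than once.

imcopy myraw.d0h myraw_fix.hhh         # work on a copy of the raw frame (names illustrative)
imarith myraw_fix.hhh[101:140,201:260] + 256 myraw_fix.hhh[101:140,201:260]    # hypothetical overflowed region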

Alternatively, you might need to flatfield using an unsmoothed flatfield. In that case, the images must be lined up very accurately so that features on the photocathode (reseau marks, blemishes, etc.; see pages 4-7 through 4-10) divide out properly. Extreme care is required in order to avoid misalignment artifacts.
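
For instance (again only a sketch, with illustrative file names and shift values), you could register the unsmoothed flat to the science frame with imshift and then divide it out with imarith; the measured offsets and the choice of interpolant require exactly the care described above:

imshift unsmoothed_flat.hhh flat_aligned.hhh xshift=0.35 yshift=-0.20 interp_type="spline3"   # shifts illustrative
imarith science.hhh / flat_aligned.hhh science_flat.hhh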


