The five reasons why a user might want to recalibrate FOC data relate to:

- Absolute sensitivity keywords
- Flatfields
- Geometric correction files
- Improved pipeline algorithms
- User calibrations
For example, an FOC image header might contain:

PHOTFLAM = 7.635416E-17 / Inverse Sensitivity

Recalculating with the bandpar task in the synphot package, using the correct PHOTMODE and the most recent DQE file, gives:

PHOTFLAM = 7.811949E-17

(Note that the URESP parameter that bandpar calculates is identical to PHOTFLAM.) The difference is not large, but it comprises a 25% increase, due to the inclusion of the format-dependent sensitivity for the 256 x 256 format, and a 23% decrease, due to the inclusion of the COSTAR mirror reflectivities. The effect of the updated DQE curve is negligible at that wavelength.
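As a quick sanity check (not part of the handbook's procedure), the net effect of these offsetting factors can be computed directly from the two keyword values quoted above; this minimal Python sketch simply copies those values from the header excerpts:

```python
# Net effect of recalibration on PHOTFLAM, using the two values
# quoted in the text above (the breakdown into separate factors
# is discussed there; here we only compute the overall change).
old_photflam = 7.635416e-17  # original header keyword
new_photflam = 7.811949e-17  # recomputed with bandpar

net_change_pct = (new_photflam / old_photflam - 1.0) * 100.0
print(f"net change in PHOTFLAM: {net_change_pct:+.1f}%")  # roughly +2.3%
```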
Recalibrating the absolute sensitivity keywords is slightly trickier for pre-COSTAR data, because you must then tell synphot both that COSTAR is not in the beam and to use the pre-COSTAR absolute sensitivity file. The first item is simple to deal with: just insert the value "nocostar" in the PHOTMODE string, e.g.:
band(foc,f/96,nocostar,f486n,x96n256)

The second item is more difficult to address: you must edit the HST component table, available through the calibration reference file screens in StarView (see "Identifying Calibration Reference Files" on page 1-19). The most straightforward way to proceed is to tcopy the component table to a local working directory, tedit the file so that the COMPNAME foc_96_dqe (on line 605 or so) has the FILENAME crfoccomp$foc_96_dqe_003.tab, and then write the edited file to a new version with a different name. The refdata task can then be used to make a parameter file whose component table refers to the pre-COSTAR FOC sensitivity file, and calcphot can subsequently be run with refdata pointing to that new parameter file.
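Under the assumptions above, an IRAF session implementing these steps might look like the following sketch. The copied table name and the local directory are placeholders, not actual reference file names, and the exact refdata parameter names should be checked interactively with epar:

```
cl> tcopy hstcomp.tab home$hstcomp_pre.tab       # local copy; input name illustrative
cl> tedit home$hstcomp_pre.tab                   # change FILENAME for COMPNAME
                                                 # foc_96_dqe (line 605 or so) to
                                                 # crfoccomp$foc_96_dqe_003.tab
cl> epar refdata                                 # point the component table at
                                                 # home$hstcomp_pre.tab
cl> calcphot "band(foc,f/96,nocostar,f486n,x96n256)" ...
```

The key point is that refdata must reference the edited copy, not the delivered component table, before calcphot is run.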
6.3.2 Flatfields
When new flatfield reference files based on new flatfield data are delivered, it may be worthwhile to recalibrate by reapplying the flatfield. However, the only new flatfield deliveries were those derived in the ultraviolet using the Orion Nebula as a target and those constructed in 1990 from internal flatfields taken during the Science Verification phase immediately after the launch of HST. The new flats from March 1995 were essentially the same as the old flats, except geometrically corrected using the new geometric correction files.

6.3.3 Geometric Correction Files
Delivery of new geometric correction files often lures users into thinking that they need to recalibrate their data using the most up-to-date reference files. In fact, this correction is rarely necessary, because the main effect is in improving the astrometric accuracy of the data. The photometric quality barely changes, because the geometric correction algorithm rigorously conserves flux, so the new correction merely redistributes the noise. Users who need the utmost astrometric accuracy (e.g., for proper-motion studies) will want to take advantage of improved geometric calibration files. However, they will still be left with some time-dependent positional uncertainty (see page 7-9) unless they take their own internal flatfields and calibrate out the time dependence of the geometric distortion themselves.
6.3.4 Improved Pipeline Algorithms
The fourth item is a catch-all for those situations where STScI staff are able to improve on the pipeline correction algorithm. Such a situation occurred in November 1991, when the order of processing was changed so that the geometric correction is performed before flatfielding. A more thorough discussion of this change and the rationale behind it can be found in FOC ISR 051. Note that all FOC files in the Archive reflect this change, because the entire Archive has since been reprocessed.
6.3.5 User Calibrations
The last item is for those users who have decided that the pipeline calibration is not sufficient for their needs or has compromised the quality of the data. For example, 8-bit overflows in 512 x 1024 data can often be corrected by adding integral multiples of 256 to the pixel values in the .d0h file until the intensity distribution is correct. You cannot repair the pipeline-corrected data in this way because the geometric correction algorithm smooths the overflowed pixels and mixes them with their neighbors. In that case, you must repair the .d0h file first and then recalibrate.
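The wraparound repair described above can be sketched in a few lines. This is a hypothetical numpy example, not pipeline code: the function name, the user-supplied mask of wrapped pixels, and the toy values are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of repairing 8-bit overflows in raw (.d0h) 512 x 1024 data.
# Pixels whose true counts exceeded 255 wrapped around, so pixels in a
# bright source that read implausibly low need integer multiples of 256
# added back until the intensity distribution looks correct.
def repair_overflow(raw, wrapped_mask, n_wraps=1):
    """Add n_wraps * 256 to the pixels flagged in wrapped_mask.

    wrapped_mask must be identified by the user, e.g. by inspecting the
    expected intensity profile of a bright source (an assumption here).
    """
    fixed = raw.astype(np.int32)          # widen beyond 8 bits first
    fixed[wrapped_mask] += 256 * n_wraps
    return fixed

# Toy usage: a 4-pixel cut through a bright source whose peak wrapped once.
raw = np.array([40, 250, 10, 45], dtype=np.uint8)   # the 10 should be 266
mask = np.array([False, False, True, False])
print(repair_overflow(raw, mask))   # [ 40 250 266  45]
```

Note that the repair is applied to the raw .d0h pixel values before recalibration, for the reason given above: once the geometric correction has mixed the wrapped pixels with their neighbors, integer offsets can no longer be recovered cleanly.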