B. Pirenne and A. Micol
Space Telescope - European Coordinating Facility, ESO, D-85748 Garching, Germany
D. Durand and S. Gaudet
Canadian Astronomy Data Centre, DAO, Victoria, BC
HST observations are normally calibrated and stored in the archive immediately after reception by the ground system. That calibration can only use the calibration reference files available at the time: some may be missing, or may not be the most appropriate ones, since new calibration observations may be obtained later on. Moreover, the calibration software for these instruments keeps evolving. In other words, the longer one waits before calibrating an observation, the better the results should be.
This is the concept we decided to implement. The recipe is simple in principle: recalibrate only at the time of delivery to the user. This is ``Just-in-time'' HST data!
The implementation is best explained by considering the data flow model presented in Figure 1.
In this model, users perform the traditional catalogue browsing activities: selection of data according to search criteria, examination of the results, refinement of the selection, and so on, until the proper set of observations has been identified in the science database, perhaps helped by quick-look samples of the data. Users then mark those records for retrieval and will typically select on-the-fly reprocessing of the datasets.
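By way of illustration only, the query-and-mark cycle could look like the Python sketch below; the database file, table, and column names are hypothetical stand-ins, not the actual archive schema:

    import sqlite3

    # Local stand-in for the science catalogue; the schema is invented.
    db = sqlite3.connect("catalogue.db")

    # Refine the search criteria until the proper set of observations is found.
    rows = db.execute(
        "SELECT dataset_name FROM observations "
        "WHERE target_name = ? AND instrument = ?",
        ("NGC 4151", "WFPC2"),
    ).fetchall()

    # Mark the chosen datasets for retrieval with on-the-fly reprocessing.
    for (dataset_name,) in rows:
        db.execute(
            "INSERT INTO retrieval_requests (dataset_name, otf) VALUES (?, 1)",
            (dataset_name,),
        )
    db.commit()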
After proper identification, selection of the output media, etc., the request is registered in the archive database. A first automatic process then reads the required files from the data repository, while a second process starts the actual re-calibration.
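A minimal sketch of the hand-off between these two automatic processes, assuming a simple queue-based design (the paths, dataset names, and the run_pipeline stub are all hypothetical):

    import multiprocessing as mp
    import shutil

    def run_pipeline(path):
        # Placeholder for the actual calibration pipeline invocation.
        print("re-calibrating", path)

    def stage_files(requests, staged):
        # First process: copy the raw files for each requested dataset
        # from the data repository to a working area.
        for dataset in iter(requests.get, None):
            shutil.copy(f"/repository/raw/{dataset}.fits", "/work/")
            staged.put(dataset)
        staged.put(None)

    def recalibrate(staged):
        # Second process: re-calibrate each staged dataset.
        for dataset in iter(staged.get, None):
            run_pipeline(f"/work/{dataset}.fits")

    if __name__ == "__main__":
        requests, staged = mp.Queue(), mp.Queue()
        mp.Process(target=stage_files, args=(requests, staged)).start()
        mp.Process(target=recalibrate, args=(staged,)).start()
        for dataset in ("u2ab0101t", "u2ab0102t"):  # datasets of a registered request
            requests.put(dataset)
        requests.put(None)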
Currently, this processing step involves selecting the best calibration reference files using a specialized database, which relates the current best calibration reference files to any given HST observation. Using this information, the actual calibration reference files (flat, bias, dark, etc.) are retrieved from magnetic disk and applied to the science data being re-calibrated.
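In outline, and with hypothetical names throughout, the lookup-and-apply step amounts to the following; the best_refs dictionary stands in for the specialized database, and the arithmetic is the generic bias/dark/flat correction rather than the exact instrument pipeline:

    import numpy as np

    # Stand-in for the specialized database: maps each observation to the
    # reference files currently considered best for it.
    best_refs = {
        "u2ab0101t": {"bias": "bias_0497.fits",
                      "dark": "dark_0497.fits",
                      "flat": "flat_f555w.fits"},
    }

    def calibrate(science, bias, dark, flat, exptime):
        # Generic CCD reduction: remove the bias and dark signal,
        # then divide by the flat field.
        return (science - bias - dark * exptime) / flat

    # Illustrative arrays standing in for pixel data read from the files.
    science = np.full((800, 800), 120.0)
    bias = np.full((800, 800), 10.0)
    dark = np.full((800, 800), 0.01)   # counts per second
    flat = np.full((800, 800), 1.02)

    calibrated = calibrate(science, bias, dark, flat, exptime=600.0)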
As soon as all exposures belonging to a request have been reprocessed, the data are made available to the user.
On-the-fly re-calibration has another advantage for our archive sites: only the raw data and the calibration reference files need to be kept on-line for the process to be automatic. The original calibration files, engineering data, and other auxiliary data are not needed by the re-calibration process and can remain on secondary storage.
The ST-ECF and CADC are currently working towards improving this system through two major new facilities.
The mere fact that the service is heavily used is for us both proof of the usefulness of the OTF concept and an encouragement to develop it further: the integration of more advanced processing steps (cosmic-ray removal, co-addition of frames) pushes OTF even further by allowing users to concentrate on data analysis and leave the reduction tasks to automated procedures.
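To indicate what such a step involves (a sketch only, not the production algorithm), cosmic rays can be rejected during co-addition by median-combining registered exposures of the same field, since a ray rarely hits the same pixel in more than one frame:

    import numpy as np

    def coadd_with_cr_rejection(frames):
        # Median-combine registered exposures; a cosmic-ray hit appears
        # in only one frame and is therefore rejected by the median.
        return np.median(np.stack(frames), axis=0)

    # Three simulated exposures of the same field; one carries a hit.
    frames = [np.full((100, 100), 50.0) for _ in range(3)]
    frames[1][40, 40] += 5000.0          # cosmic ray on the second frame
    combined = coadd_with_cr_rejection(frames)
    assert combined[40, 40] == 50.0      # the hit does not survive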