Chandra data are processed, archived, and distributed by the Chandra X-ray Center (CXC). Standard Data Processing is accomplished by dozens of ``pipelines'' designed to process specific instrument data and/or generate a particular data product. Pipelines are organized into levels and generally require as input the output products from earlier levels. Some pipelines process data by observation, while others process according to a set time interval or other criteria. Thus, the processing requirements and pipeline data dependencies are very complex. This complexity is captured in an ASCII processing registry, which contains information about every data product and pipeline. The Automatic Processing system (AP) polls its input directories for raw telemetry and ephemeris data, pre-processes the telemetry, kicks off the processing pipelines at the appropriate times, provides the required input, and archives the output data products.
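As an illustration only, the polling step at the front of AP might look like the following Python sketch; the directory names, poll interval, and function name are assumptions rather than the actual AP configuration.

    import glob
    import os
    import time

    # Hypothetical input locations and poll interval; the real AP setup is
    # not described in the text.
    TELEMETRY_DIR = "/ap/input/telemetry"
    EPHEMERIS_DIR = "/ap/input/ephemeris"
    POLL_SECONDS = 60

    def poll_inputs():
        """Yield newly arrived raw telemetry and ephemeris files."""
        seen = set()
        while True:
            candidates = (glob.glob(os.path.join(TELEMETRY_DIR, "*")) +
                          glob.glob(os.path.join(EPHEMERIS_DIR, "*")))
            for path in candidates:
                if path not in seen:
                    seen.add(path)
                    yield path   # hand off to pre-processing and pipeline kickoff
            time.sleep(POLL_SECONDS)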
A CXC pipeline is defined by an ASCII profile template that contains a list of tools to run and the associated run-time parameters (e.g., input/output directory, root-names, etc.). When a pipeline is ready to run, a pipeline run-time profile is generated by the profile builder tool, pbuilder. The run-time profile is executed by the Pipeline Controller, pctr. The pipeline profiles and pctr support conditional execution of tools, branching and converging of threads, and logfile output containing the profile, list of run-time tools, arguments, exit status, parameter files, and run-time output. This process is summarized in Figure 1.
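To make the profile idea concrete, the toy sketch below builds a run-time profile from a template by substituting run-time parameters, then runs each tool in order, honoring a per-step condition and logging exit status. It only mimics what pbuilder and pctr do; the profile structure, tool names, and fields are invented for illustration.

    import subprocess

    def build_profile(template, params):
        # Fill the {placeholders} in each step's arguments with run-time values.
        return [{"tool": step["tool"],
                 "args": [a.format(**params) for a in step["args"]],
                 "condition": step.get("condition", lambda p: True)}
                for step in template]

    def run_profile(profile, params, logfile):
        # Run each tool, skipping steps whose condition is false, and record
        # the command line and exit status in the pipeline log.
        with open(logfile, "w") as log:
            for step in profile:
                if not step["condition"](params):
                    continue                  # conditional execution of a tool
                cmd = [step["tool"]] + step["args"]
                status = subprocess.call(cmd)
                log.write("%s -> exit %d\n" % (" ".join(cmd), status))
                if status != 0:
                    break                     # stop this thread on failure

    # Invented template: two tools, the second run only for grating observations.
    template = [{"tool": "tool_a", "args": ["indir={indir}", "outdir={outdir}", "root={root}"]},
                {"tool": "tool_b", "args": ["root={root}"],
                 "condition": lambda p: p["grating"] != "NONE"}]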
CXC pipeline processing is organized into levels according to the extent of the processing; higher levels take the output of lower levels as input. The first stage of processing is Level 0, which de-commutates telemetry and processes ancillary data. Level 0.5 processing determines the start and stop times of each observation interval and also generates data products needed for Level 1 processing. Level 1 processing includes aspect determination, science observation event processing, and calibration. Level 1.5 processing assigns grating coordinates to the transmission grating data. Level 2 processing includes standard event filtering, source detection, and grating data spectral extraction. Level 3 processing generates catalogs spanning multiple observations.
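Condensed into a small structure for reference, the same level organization might be recorded as follows (purely illustrative; the wording paraphrases the description above):

    PROCESSING_LEVELS = {
        "0":   ("raw telemetry",        "de-commutation, ancillary data processing"),
        "0.5": ("Level 0 products",     "observation interval start/stop times"),
        "1":   ("Level 0/0.5 products", "aspect, event processing, calibration"),
        "1.5": ("Level 1 products",     "grating coordinates for grating data"),
        "2":   ("Level 1/1.5 products", "event filtering, source detection, grating spectra"),
        "3":   ("Level 2 products",     "catalogs spanning multiple observations"),
    }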
Figure 2 represents the series of pipelines that are run to process the Chandra data. Each circle represents a different pipeline (or related set of pipelines). Level 0 processing (De-commutation) produces several data products that correspond to the different spacecraft components, and data from the various components follow different threads through the system. The arrows represent the flow of data as the output products of one pipeline are used as inputs to a pipe (or pipes) in the next level. Some pipelines are run on arbitrary time boundaries (as data become available), while others must be run on time boundaries based on observation interval start and stop times (which are determined in the Level 0.5 pipe, OBI_DET).
The complete pipeline processing requirements for Chandra are very complex, with many inter-dependencies (as can be seen in Figure 2). To run the pipelines efficiently in a flexible and automated fashion, we configure the Automatic Processing system with a pipeline processing registry. We first register all of the Chandra input and output data products; we can then capture the processing requirements and inter-dependencies by registering all of the pipelines. Data products are registered with a File_ID, a file name convention (expressed as a regular expression), a method for extracting start/stop times, and archive ingest keywords (detector, level, etc.). Pipelines are registered with a Pipe_ID, a pipeline profile name, pbuilder arguments, kickoff criteria (detector in focal plane, gratings in/out, etc.), input and output data products (by File_ID), and a method for generating the ``root'' part of output file names.
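A minimal sketch of what the two kinds of registry entries might carry, based only on the fields listed above, is shown below; the actual ASCII registry format, the field names, and the example File_ID and pattern are assumptions.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class DataProductEntry:
        file_id: str                       # File_ID
        name_pattern: str                  # file name convention (regular expression)
        time_extractor: str                # method for extracting start/stop times
        ingest_keywords: Dict[str, str]    # archive ingest keywords (detector, level, ...)

    @dataclass
    class PipelineEntry:
        pipe_id: str                       # Pipe_ID
        profile_name: str                  # pipeline profile template
        pbuilder_args: List[str]           # arguments passed to pbuilder
        kickoff_criteria: Dict[str, str]   # e.g. detector in focal plane, gratings in/out
        input_file_ids: List[str]          # input data products, by File_ID
        output_file_ids: List[str]         # output data products, by File_ID
        root_rule: str                     # how the "root" part of output names is built

    # Illustrative entry only; the ID, pattern, and keyword values are made up.
    evt0 = DataProductEntry("ACIS_EVT0", r"acisf\d+_evt0\.fits",
                            "FITS header TSTART/TSTOP",
                            {"detector": "ACIS", "level": "0"})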
With a processing registry, the Automatic Processing system is able to recognize data products, extract start and stop times, initiate pipeline processing, and ingest products into the archive. Figure 3 illustrates the flow of data through the AP system.
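The flow in Figure 3 can be summarized in a hedged sketch that ties the registry entries above together: recognize an incoming product, extract its times, kick off every registered pipeline that takes it as input and whose criteria are met, and archive the outputs. The helper functions are placeholders standing in for AP components, not real interfaces.

    import re

    # Placeholder hooks; the real AP components would supply these.
    def extract_times(path, method): return (0.0, 0.0)
    def criteria_met(criteria, path): return True
    def make_root(rule, start, stop): return "root"
    def pbuilder(profile_name, args, root): return {"profile": profile_name, "root": root}
    def pctr(profile): return []
    def ingest(product_file, keywords): pass

    def process_new_file(path, product_registry, pipeline_registry):
        # Recognize the file against the registered name conventions.
        product = next((p for p in product_registry
                        if re.search(p.name_pattern, path)), None)
        if product is None:
            return                                   # not a registered product
        start, stop = extract_times(path, product.time_extractor)
        for pipe in pipeline_registry:
            if product.file_id not in pipe.input_file_ids:
                continue
            if not criteria_met(pipe.kickoff_criteria, path):
                continue
            profile = pbuilder(pipe.profile_name, pipe.pbuilder_args,
                               root=make_root(pipe.root_rule, start, stop))
            for output in pctr(profile):             # run the pipeline
                entry = next((p for p in product_registry
                              if re.search(p.name_pattern, output)), None)
                ingest(output, entry.ingest_keywords if entry else {})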
Here is a brief description of each of the AP components in Figure 3: