HyperLEDA documentation
Frequently Asked Questions (about the FITS archive)


What is a dataset?
A dataset is a set of frames taken with the same instrumental setup, i.e., the same telescope, CCD, grating, slit width, and so on.
Some variations in the setup are accepted within a given dataset, depending on the context: in broad-band imaging, observations taken through different filters (U, B, V, R, and I) belong to the same dataset; in long-slit spectroscopy, the position angle of the slit may change within a dataset.
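As an illustration only (this is not part of HyperLeda), here is a minimal Python sketch of how frames could be grouped into datasets by their instrumental setup. TELESCOP and INSTRUME are standard FITS keywords; GRATING and SLITWID are assumed names for illustration:

    from collections import defaultdict
    from astropy.io import fits

    def dataset_key(path):
        # Build a grouping key from the setup keywords. The filter is
        # deliberately excluded: frames taken through different filters
        # belong to the same imaging dataset.
        hdr = fits.getheader(path)
        return (hdr.get("TELESCOP"), hdr.get("INSTRUME"),
                hdr.get("GRATING"), hdr.get("SLITWID"))

    def group_into_datasets(paths):
        datasets = defaultdict(list)
        for path in paths:
            datasets[dataset_key(path)].append(path)
        return datasets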

What data are archived in the HFA?
The HFA stores files as they are provided, i.e., they can be raw observation files or pre-reduced files (flat-fielded and wavelength-calibrated). When raw observations are archived, their detailed description (keywords) makes it possible to calibrate them: there are indications of how to apply the flat-field correction, how to resample to wavelength, and so on.
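For example, a hedged sketch of how an archive client might read such keywords to decide which calibration steps apply; the keyword names FLATFILE and WAVCAL are assumptions for illustration, not documented HFA keywords:

    from astropy.io import fits

    def calibration_plan(path):
        # Inspect the primary header and collect the calibration steps
        # it describes. FLATFILE and WAVCAL are hypothetical keyword
        # names, not documented HFA keywords.
        hdr = fits.getheader(path)
        plan = []
        if "FLATFILE" in hdr:
            plan.append(("flatfield", hdr["FLATFILE"]))
        if "WAVCAL" in hdr:
            plan.append(("resample_to_wavelength", hdr["WAVCAL"]))
        return plan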

What is a HyperLeda pipe?
HYPERCAT is developing pipeline processing of catalogue or FITS data. The raw values are taken from storage, processed, and delivered. The processing is still very restricted, but we intend to include very sophisticated procedures, going up to kinematical analysis and new methods combining chemical and kinematical analyses. The development of these procedures will be the most prominent contribution of HyperLeda. The pipeline procedures are documented here.
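The idea can be summarized by the following minimal Python sketch (this is not PLEINPOT itself; the step names are illustrative only):

    def run_pipeline(raw_data, steps):
        # Apply each processing step in order to the raw values taken
        # from the storage, and return the result for delivery.
        data = raw_data
        for step in steps:
            data = step(data)
        return data

    # Hypothetical usage; the step names are illustrative only:
    # result = run_pipeline(frames, [flatfield, wavelength_calibrate,
    #                                kinematical_analysis])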

Where are HyperLeda data stored?
HyperLeda catalogues are contributions by members of the HyperLeda team; they are maintained individually in their home institutes. Automatic procedures assemble them over the network to produce daily refreshed versions of the database. The database is in turn mirrored to different places (Lyon and Napoli). The HFA itself is stored only on a central machine in Lyon and mirrored to the HyperLeda mirrors. There are currently 2 GB of data available; this will be extended to 5 GB by December, 20 GB next year, and possibly 100 GB at the end of 1999 if we get the financial support. There is probably no major problem with distributing the whole archive among different sites, but mirroring the whole archive may be a problem, given the size involved.

What do I have to do to include my dataset in the FITS archive?
We have adopted a file naming convention and a consistent set of keywords. You have to conform to this norm, which is not a big constraint, and then physically move the dataset to the HyperLeda system. The PLEINPOT software provides procedures to normalize and verify the headers; see the manual describing the archiving procedures.
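In the spirit of those procedures, here is a hedged Python sketch of a header check; the real tools are the PLEINPOT ones documented in the manual, and the required keyword list below is an assumption:

    from astropy.io import fits

    # Assumed list of required keywords, for illustration only; the
    # actual HyperLeda norm is defined in the archiving manual.
    REQUIRED = ["TELESCOP", "INSTRUME", "OBJECT", "DATE-OBS"]

    def verify_header(path):
        # Return the required keywords missing from the primary header;
        # an empty list means the file passes this check.
        hdr = fits.getheader(path)
        return [key for key in REQUIRED if key not in hdr]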

What is PLEINPOT, and how can I use it?
See the manual and the short introductory tutorial.


HyperLeda Questions: leda@univ-lyon1.fr