Frequently Asked Questions (about the FITS archive)
What data are archived in the HFA?
The HFA stores files as they are provided, i.e. they can be raw
observation files or pre-reduced files (flat-fielded and wavelength
calibrated). When raw observations are archived, their detailed
description (keywords) allows them to be calibrated: the header
indicates how to make the flat-field correction, how to resample in
wavelength, and so on.
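As a rough sketch of what this means in practice, the fragment below reads calibration keywords from a raw frame and applies a flat-field correction with astropy. The keyword FLATFILE is an invented placeholder for whatever keyword the HFA actually uses to point at the flat frame; CRVAL1 and CDELT1 are the standard WCS keywords for the wavelength sampling.

    # Minimal sketch, assuming a hypothetical FLATFILE keyword in the raw header.
    from astropy.io import fits
    import numpy as np

    def flatfield_from_header(raw_path):
        with fits.open(raw_path) as hdul:
            data = hdul[0].data.astype(float)
            header = hdul[0].header

        # The header is assumed to name the flat-field frame to use (hypothetical keyword).
        flat_path = header["FLATFILE"]
        with fits.open(flat_path) as flat_hdul:
            flat = flat_hdul[0].data.astype(float)

        # Divide by the flat, guarding against zero pixels.
        corrected = data / np.where(flat == 0, 1.0, flat)

        # Standard WCS keywords describe the wavelength sampling of axis 1.
        start = header.get("CRVAL1")   # wavelength of the first pixel
        step = header.get("CDELT1")    # wavelength increment per pixel
        return corrected, start, step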
What is a HyperLeda pipeline?
HYPERCAT is developing pipeline processing of catalogue or FITS data:
the raw values are taken from the storage, processed, and delivered.
The processing is still very limited, but we intend to include much
more sophisticated procedures, going up to kinematical analysis and
new methods combining chemical and kinematical analyses. The
development of these procedures will be the most prominent
contribution of HyperLeda.
The pipeline procedures are documented here.
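The idea of "fetch raw values, process, deliver" can be sketched as a simple chain of processing steps; the step names below are illustrative placeholders, not the documented HyperLeda procedures.

    # Minimal sketch of the pipeline idea: each step transforms a record in turn.
    from typing import Callable, Iterable

    Step = Callable[[dict], dict]

    def run_pipeline(raw_record: dict, steps: Iterable[Step]) -> dict:
        """Apply each processing step in order to the raw record."""
        result = raw_record
        for step in steps:
            result = step(result)
        return result

    # Hypothetical steps: calibration followed by a kinematical analysis.
    def calibrate(rec: dict) -> dict:
        rec["calibrated"] = True
        return rec

    def kinematical_analysis(rec: dict) -> dict:
        rec["velocity"] = 0.0   # placeholder value
        return rec

    processed = run_pipeline({"object": "NGC 1234"}, [calibrate, kinematical_analysis])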
Where are HyperLeda data stored?
HyperLeda catalogues are contributions by members of the HyperLeda
team; they are maintained individually in their home institutes.
Automatic procedures assemble them over the network to produce daily
refreshed versions of the database. The database is in turn mirrored
to different places (Lyon and Napoli). The HFA itself is stored on a
central machine in Lyon and mirrored to the HyperLeda mirrors. There
are currently 2 GB of data available; this will be extended to 5 GB by
December and 20 GB next year (possibly 100 GB at the end of 1999 if we
get the financial support). There is probably no major problem with
distributing the whole archive between different sites. Mirroring the
whole archive may be a problem, given the size involved.
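A daily refresh assembled over the network can be pictured as in the sketch below; the host names and paths are invented for illustration and are not the real HyperLeda locations.

    # Rough sketch of a daily refresh: pull each contributed catalogue with rsync.
    import subprocess

    SOURCES = [
        "lyon.example.org:/leda/catalogues/",     # hypothetical source
        "napoli.example.org:/leda/catalogues/",   # hypothetical source
    ]

    def refresh(destination="/leda/assembled/"):
        for source in SOURCES:
            # rsync only transfers files that changed since the last run.
            subprocess.run(["rsync", "-az", source, destination], check=True)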
What do I have to do to include my dataset in the FITS archive?
We have adopted a file naming convention and a consistent set of
keywords. You have to conform to this norm, which is not a big
constraint, and then physically move the dataset to the HyperLeda
system. The PLEINPOT software provides procedures to normalize and
verify the headers; see the manual describing the archiving
procedures.
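To give an idea of what such a verification does, the sketch below checks a primary header against a list of required keywords. The keyword list is an assumption for illustration only; the actual norm is defined by the PLEINPOT archiving procedures.

    # Minimal sketch of a header conformance check, with an assumed keyword list.
    from astropy.io import fits

    REQUIRED_KEYWORDS = ["OBJECT", "DATE-OBS", "TELESCOP", "INSTRUME", "EXPTIME"]

    def verify_header(path):
        """Return the required keywords missing from the primary header."""
        with fits.open(path) as hdul:
            header = hdul[0].header
        return [key for key in REQUIRED_KEYWORDS if key not in header]

    missing = verify_header("obs0001.fits")   # hypothetical file name
    if missing:
        print("Header does not conform; missing:", missing)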
HyperLeda | Questions: leda@univ-lyon1.fr