Automated Reduction of Filterbank Data

This is an extremely simple procedure that works the majority of the time. Its aim is to produce an archive, and it is basically a two-step process: getting the data off the exabyte tape and then processing it into archives.
  • The first step in reducing your data is to find some free disk space to which to offload your data. You will typically need more than 500 MB. Then set the environment variable tpool to point to the directory into which you intend to unload the data.
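    For example, assuming a csh-style shell and a purely hypothetical scratch area /data/scratch (substitute your own directory; under bash use export instead of setenv):

    # check that the candidate disk has enough free space
    df -k /data/scratch

    # point tpool at the directory the raw data will be unloaded into
    setenv tpool /data/scratch/fbdata
    mkdir -p $tpool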


  • As a safety precaution when you are getting data off tape, you should run disk_master, which in this case needs sc_td as a command-line argument. This program should be run from the directory into which the data is going, on the machine to which the exabyte drive you are using is connected. It controls the exabyte tape-reading program, stopping it when there is less than 100 MB of space left on the disk you are writing to and restarting it when there is more than 200 MB free.
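    A minimal sketch of this step, assuming the data is going into the directory that tpool points at and that you are logged in on the machine with the exabyte drive:

    # run from the directory the data is being written to
    cd $tpool
    # sc_td is required as the command-line argument in this case
    disk_master sc_td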


  • You are now ready to extract your data from the exabyte tape. This is done using sc_td. However, you must make sure that you are in the directory where you want the data to go before you run sc_td; if you have set the environment variable tpool as in the step above, this is most simply achieved by typing cd $tpool. Now type sc_td and you will be asked a number of questions. The tape number it requires is the nrst number (usually 0), and typically you will want to unload all systems with no clipping, no skipping, and to create a summary file. Files will then begin to be unloaded; they are usually quite big, so it may be a while before the first one is ready for you to proceed. While the data is being read off tape the file is called file.tmp, and when it is finished there will be two files, file.dat and file.hdr.
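    In outline, assuming tpool has been set as above (the prompts themselves are answered interactively):

    # make sure you are where the data should land
    cd $tpool
    # answer the prompts: the nrst tape number (usually 0), unload all systems,
    # no clipping, no skipping, create a summary file
    sc_td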


  • A csh script called tarchfb_driver, which requires no inputs, can be run from the directory containing the data while the data is still being extracted. This script automatically launches the processing jobs, which are discussed in the archive files section. It then moves the output to the directory pointed to by the environment variable $fbankpool and deletes the raw data. This program controls the processing, and many data files may be processed at once on different machines. However, before it can be run successfully you will need a file called double.psr in the directory where you are working; a copy of this file lives in /psr/soft/tas/fch3/ on Kepler.
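    Getting double.psr into place is just a copy, for example (run this on Kepler, or adjust the path if that area is mounted elsewhere on your machine):

    cd $tpool
    cp /psr/soft/tas/fch3/double.psr .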
  • In order to run tarchfb_driver you need to have set the environment variable tpool to point to the location of the data. You then need to create a file called tarch_slaves containing the names of the machines you are going to use to reduce your data. The standard format is shown below.

    Format of tarch_slaves


    atlas
    kepler
    venice

    Note that processing these data can be very CPU intensive, so choose carefully which machines to include in tarch_slaves.
    You are now ready to run tarchfb_driver. The completed archives will be placed in the directory $fptmpool and the raw data will be deleted. If the processing fails for any reason, the data is shifted to the failed directory along with a log message which will give some clue as to why the processing failed. For more information on why the processing may fail, see the errors section.
    Note that it is not necessary to create the failed directory for tarchfb_driver; if it does not exist, the script will create it itself.
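    Putting the pieces together, a typical session looks something like the sketch below (the exact layout of your data directory may differ):

    # work in the directory containing the raw data
    cd $tpool
    # double.psr and tarch_slaves should both be present here
    ls double.psr tarch_slaves
    # launch and control the processing jobs
    tarchfb_driver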
  • All archives should then be moved to the correct pulsar directory in /psr3/data/timing/ on Kepler. There are two ways to achieve this: either by moving them by hand or by using archsort. If you only have a couple of files then it is probably just as easy to move them by hand; however, when there are a large number of files to move, which is often the case in the $fptmpool directory, archsort is the better choice. Archsort has two modes. The first is designed to be run on Kepler; it retrieves archives from the $fptmpool directory on Pavo, sorts them, and moves them to the correct directory. This mode is enabled by simply typing,

    archsort

    on Kepler. The second mode is for when you have accumulated a number of archives in another directory, such as $fptmpool at Epping. To enable this simply go to the directory where the archives are and type,

    archsort .

    This sets the current directory to be the directory from which to move archives.
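    If you do move a handful of archives by hand, it is just an ordinary move into the pulsar's directory; a purely illustrative example, with a made-up archive name and pulsar directory:

    mv 0437-4715.ar /psr3/data/timing/0437-4715/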