The goal of the AIPS++ parallelization project is to provide an easy-to-use, high-performance processing system that allows astronomers to process large datasets; the specific goals of the project are described below.
An Algorithm-Applicator scheme has been chosen to deal with embarrassingly parallel (e.g., spectral line) problems. The Applicator is the controller: it sets up the problem and transports data to the Algorithm processes using the Message Passing Interface (MPI). MPI is a portable system that allows data and instructions to be sent to remote processors (either on the same machine or on different machines). Wes Young (see page 506) presented a demonstration of our first implementation of the Algorithm-Applicator class to carry out spectral line deconvolution.
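To make the division of labor concrete, the sketch below shows one way an Applicator (rank 0) might deal spectral channels to Algorithm processes over MPI and gather the results. This is a minimal illustration, not the actual AIPS++ classes; the cube dimensions, message tags, and deconvolveChannel() routine are hypothetical placeholders.

#include <mpi.h>
#include <vector>

enum { TAG_WORK = 1, TAG_RESULT = 2 };

// Placeholder for the per-channel work (e.g., a CLEAN deconvolution).
void deconvolveChannel(std::vector<float>& chan) {
    for (float& v : chan) v *= 0.5f;   // stand-in computation
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // assumes at least 2 ranks

    const int nChan = 128, nPix = 4096;     // assumed cube dimensions
    const int nWorkers = size - 1;          // rank 0 is the Applicator
    std::vector<float> buf(nPix);

    if (rank == 0) {
        // Applicator: deal every channel out round-robin, then
        // gather every processed channel back.
        for (int c = 0; c < nChan; ++c)
            MPI_Send(buf.data(), nPix, MPI_FLOAT, 1 + c % nWorkers,
                     TAG_WORK, MPI_COMM_WORLD);
        for (int c = 0; c < nChan; ++c)
            MPI_Recv(buf.data(), nPix, MPI_FLOAT, MPI_ANY_SOURCE,
                     TAG_RESULT, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        // Algorithm: receive the channels assigned to this rank,
        // process them, and return the results to the Applicator.
        std::vector<std::vector<float>> mine;
        for (int c = rank - 1; c < nChan; c += nWorkers) {
            mine.emplace_back(nPix);
            MPI_Recv(mine.back().data(), nPix, MPI_FLOAT, 0,
                     TAG_WORK, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        for (auto& chan : mine) {
            deconvolveChannel(chan);
            MPI_Send(chan.data(), nPix, MPI_FLOAT, 0,
                     TAG_RESULT, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}

The static round-robin distribution keeps the sketch deadlock-free (all work is sent before any results flow back); a production controller would more likely hand out channels dynamically as workers finish.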
A graphical interface to the parallel system and to batch processing has been implemented (see Fig. 1). The user only has to tell the application how many processors to use; submission to the batch queue requires only one additional step.
Figure 2 shows the speedup of a deconvolution as a function of the number of processors. Currently the parallel system shows good scaling up to 16 processors.
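For reference, the speedup plotted in such curves is conventionally the one-processor run time divided by the p-processor run time, with ideal scaling at S(p) = p:

\[
  S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}
\]

where E(p) is the parallel efficiency; good scaling up to 16 processors means E(p) stays near unity over that range.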
Power users can begin to use the parallel system on the NCSA computers after the first of the year. The parallel system will include parallelization of various spectral line processing steps: gridding, imaging, and deconvolution (various flavors of CLEAN and MEM). The parallel infrastructure should work on generic multiple-processor machines.
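As an illustration of the kind of per-channel work being farmed out, the following is a bare-bones Hogbom-style CLEAN loop. It is a hypothetical simplification, not the AIPS++ implementation, which offers several CLEAN variants as well as MEM.

#include <cmath>
#include <vector>

// One CLEAN cycle: find the peak residual, record a scaled component,
// and subtract a shifted, scaled copy of the point-spread function.
void hogbomClean(std::vector<float>& dirty,      // residual image (n x n)
                 const std::vector<float>& psf,  // PSF, peak at (n/2, n/2)
                 int n, float gain, float threshold, int maxIter,
                 std::vector<float>& model) {    // accumulated components
    for (int iter = 0; iter < maxIter; ++iter) {
        // locate the peak of the residual image
        int peak = 0;
        for (int i = 1; i < n * n; ++i)
            if (std::fabs(dirty[i]) > std::fabs(dirty[peak])) peak = i;
        if (std::fabs(dirty[peak]) < threshold) break;

        const float flux = gain * dirty[peak];
        model[peak] += flux;

        // subtract the PSF centred on the peak position
        const int px = peak % n, py = peak / n;
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x) {
                const int sx = x - px + n / 2, sy = y - py + n / 2;
                if (sx >= 0 && sx < n && sy >= 0 && sy < n)
                    dirty[y * n + x] -= flux * psf[sy * n + sx];
            }
    }
}

Because each spectral channel is deconvolved independently, a loop like this maps directly onto one Algorithm process per group of channels.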
Our future goals include parallelizing the AIPS++ IMAGER application to carry out imaging (using parallelized gridding and FFTs) and parallel deconvolution. We will also be parallelizing a wide-field imaging algorithm (for the case where the assumption that the sky can be represented on a single tangent plane breaks down). Finally, we will be increasing user support to help astronomers who need the large computational resources of NCSA to use the parallel AIPS++ system on the Origin2000. The first step of this increased user support has been to increase the network bandwidth from the VLA to NCSA by putting NCSA on the NRAO intranet.
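For context, convolutional gridding (the step IMAGER would distribute, for example one frequency channel per process) looks roughly like the sketch below; the kernel layout and calling conventions are illustrative assumptions, not the IMAGER interface.

#include <complex>
#include <vector>

// Add one visibility sample into the uv grid through a small
// convolution kernel; the gridded plane is later FFTed to form
// the dirty image.
void gridVisibility(std::vector<std::complex<float>>& grid, int n,
                    float u, float v,                 // uv coords, in cells
                    std::complex<float> vis, float weight,
                    const std::vector<float>& kernel, // (2*hw+1)^2 taps
                    int hw) {                         // kernel half-width
    const int iu = static_cast<int>(u) + n / 2;
    const int iv = static_cast<int>(v) + n / 2;
    for (int dv = -hw; dv <= hw; ++dv)
        for (int du = -hw; du <= hw; ++du) {
            const int gu = iu + du, gv = iv + dv;
            if (gu < 0 || gu >= n || gv < 0 || gv >= n) continue;
            const float w =
                weight * kernel[(dv + hw) * (2 * hw + 1) + (du + hw)];
            grid[gv * n + gu] += w * vis;
        }
}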
Future projects also include a port of AIPS++ to NT, in order to take advantage of the large NT cluster now available at NCSA and increasingly available to astronomy departments because of its modest price. We are also collaborating with the Pablo group at the University of Illinois to investigate the I/O patterns of AIPS++. The Pablo group has instrumented our code and is now working to identify I/O patterns and statistics. This will be important for showing where improved I/O could yield better performance. The next generation of the MPI standard (MPI-2) includes a standard for MPI I/O; that standard is finalized and implementations are now available. We intend to explore its use as a complement to the parallel processing development.
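As a small taste of what MPI I/O offers, the sketch below has each rank write its own slab of a data cube into one shared file with a single collective write; the file name and slab size are assumptions for illustration.

#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int slab = 1 << 20;                       // floats per rank (assumed)
    std::vector<float> data(slab, float(rank));

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "cube.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    // collective write: every rank deposits its slab at a
    // rank-dependent offset in one operation
    MPI_Offset offset = MPI_Offset(rank) * slab * sizeof(float);
    MPI_File_write_at_all(fh, offset, data.data(), slab, MPI_FLOAT,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}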
We are also looking into integrating batch communication into the Glish IPC mechanism, and we will continue to use the parallel Algorithm-Applicator framework to implement more algorithms in parallel.
In addition to post-observation processing, our project may include parallel on-line processing. The JCMT has decided in principle to use AIPS++ to process the output of its new 1 GHz correlator backend. The correlator will produce data at a rate of about 10 MB/s (up to about 3.8 GB could be collected in a 6 minute observation), and a network of about 8 to 16 workstations will carry out the processing.