PC GAMESS/Firefly DOCUMENTATION - running in parallel using LAM/MPI binaries/Linux

How to run the PC GAMESS/Firefly on Linux in parallel - LAM/MPI case:


To run the Linux/LAM/MPI version of the PC GAMESS/Firefly in parallel, you will need:



How to run the PC GAMESS/Firefly in parallel?


The simplest command line for a parallel PC GAMESS/Firefly run is as follows:

      pcgamess DIR0 DIR1 DIR2 ... DIRN

Here, DIR0, DIR1, DIR2, etc. are the working directories of the master PC GAMESS/Firefly process (i.e., of MPI RANK=0), of the second instance of the PC GAMESS/Firefly (MPI RANK=1), of the third instance, and so on. Only absolute paths are allowed.

For example, you can use something like the following:

      pcgamess /home/me/mydir/wrk0 /home/me/mydir/wrk1 "/home/me/my dir/wrk2"

Depending on the cluster topology used, the three directories above must exist prior to PC GAMESS/Firefly execution either on a single computer, on two different computers, or on three different computers. The input file must be in the master working directory (i.e., in /home/me/mydir/wrk0 for the example above).

You have to use either the mpiexec or the mpirun command to launch the PC GAMESS/Firefly in parallel. In the latter case, you must first manually load the LAM/MPI runtime environment using the proper lamboot command.
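
For example, with mpirun the whole sequence might look like the following (a sketch only, assuming a LAM host file named lamhosts in the current directory, three working directories like those in the example above, and the pcgamess binary available in the PATH on every node):

      lamboot -v lamhosts
      mpirun -np 3 pcgamess /home/me/mydir/wrk0 /home/me/mydir/wrk1 /home/me/mydir/wrk2
      lamhalt

Here lamboot starts the LAM/MPI runtime on the nodes listed in lamhosts, mpirun launches three PC GAMESS/Firefly processes, and lamhalt shuts the runtime down afterwards; with mpiexec the manual lamboot step is not needed (see above).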

Before launching the PC GAMESS/Firefly in parallel, put the fastdiag.ex, pcgp2p.ex, and p4stuff.ex (if any) runtime extension files into all the temporary working directories to be used by the parallel PC GAMESS/Firefly job.
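
A minimal sketch of this preparation step, assuming all three extension files sit in the current directory and the working directories are those from the example above (on a single computer or a shared filesystem):

      for d in /home/me/mydir/wrk0 /home/me/mydir/wrk1 "/home/me/my dir/wrk2" ; do
          mkdir -p "$d"
          cp fastdiag.ex pcgp2p.ex p4stuff.ex "$d"
      done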

There are two different ways to start the PC GAMESS/Firefly in parallel with mpirun.


Known issues and problems


  1. While running the PC GAMESS/Firefly in parallel on a standalone SMP system, performance degradation is possible because of simultaneous I/O operations. In this case, the use of a high-quality RAID or separate physical disks can help. If the problem persists, for SMP/multicore systems with two or more CPUs/cores (for example, 4 or 8), a better solution is probably to switch to direct computation methods, which require much less disk I/O (see the example input fragment after this list).

  2. The default value of AOINTS is DUP. It is probably optimal for low-speed networks (10 and 100 Mbps Ethernet). On the other hand, for faster networks and for SMP systems the optimal value could be AOINTS=DIST. You can change the default by using the AOINTS keyword in the $SYSTEM group (see the example input fragment after this list), so you can check which way is faster for your systems.

  3. There are four keywords in the $SYSTEM group which can help in the case of MPI-related problems. Do not modify the default values unless you are absolutely sure that you need to do this. They are as follows:

            MXBCST (integer) - the maximum size (in DP words) of the message
                               used in a broadcast operation. Default is 32768.
                               You can change it to see whether this helps.
    
            MPISNC (logical) - activates a strategy in which broadcast
                               operations periodically synchronize all MPI
                               processes, thus freeing the wp4 global memory
                               pool.
                               Default is false. Setting it to true should
                               resolve most buffer-overflow problems at the
                               cost of somewhat reduced performance.
    
            MXBNUM (integer) - the maximum number of broadcast operations
                               which can be performed before the global
                               synchronization call is done.
                               Relevant if MPISNC=.true. Default is 100.
    
            LENSNC (integer) - the maximum total length (in DP words) of all
                               messages which can be broadcast before the
                               global synchronization call is done.
                               Relevant if MPISNC=.true. Default is dependent
                               on the number of processes used (meaningful values
                               vary from 20000 to, say, 262144 or even more).
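
For illustration only, the fragment below shows how the settings mentioned in items 1-3 might appear in an input file. The values are examples to be tuned for your own hardware and network, and DIRSCF (the usual direct-SCF switch in the $SCF group) is shown just as one way to reduce disk I/O:

      ! illustrative values only -- include just the keywords you want to change
       $SCF    DIRSCF=.TRUE. $END
       $SYSTEM AOINTS=DIST MPISNC=.TRUE. MXBNUM=100 LENSNC=65536 $END

Keywords that are not listed keep their default values.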
    



See also:



Last updated: March 18, 2009