Haystack correlation and postcorrelation
Roger Cappallo · Haystack DiFX Meeting · 2011.12.5


Software Correlation

· Gradual transfer over from Mk4 hardware correlator
  · A little more gradual than I'd like!
· Use difx2mark4 file conversion software to interface DiFX with the HOPS/Mk4 backend suite (fourfit, aedit, etc.)
· Continuing revision of code as various experiments and problems are encountered, principally from correlations at MPIfR and VLBA
· DiFX plans:
  · Integrate Mk6
  · Modify to support ALMA phasing
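The DiFX-to-HOPS hand-off described above is a command-line pipeline; a rough sketch follows (the experiment name, process count, and scan/root-file paths are hypothetical; the tool names are those on the slide):

```shell
# Hypothetical experiment "exp001"; all paths are illustrative only.
mpirun -np 12 -machinefile machines mpifxcorr exp001.input  # DiFX correlation
difx2mark4 exp001              # convert DiFX output into a Mk4-format fileset
fourfit 001-1230/3C279.abcdef  # fringe-fit one scan's root file with HOPS
aedit                          # inspect/edit the resulting fringe data
```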

Recently acquired and integrated COTS hardware for a 72+ node DiFX-based software correlator


· Six Supermicro 3U server chassis, each having:
  · X8DAH+-F motherboard
  · 2 hexacore Intel Xeon X5650 CPUs
  · 24 GB RAM
  · 1 TB hard disk
  · 40 Gb/s InfiniBand HCA
· QLogic InfiniBand switch
  · 36 ports
  · 40 Gb/s
· Eight 10 Gb/s InfiniBand adapter cards for the Mk5's
· Total cost ~$34K
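A quick arithmetic check on the "72+ node" sizing (assuming, as DiFX usage suggests, that "nodes" here counts cores, one DiFX process per core):

```shell
# 6 chassis x 2 CPUs x 6 cores each; hyper-threading doubles the thread count.
chassis=6
cpus_per_chassis=2
cores_per_cpu=6
physical_cores=$(( chassis * cpus_per_chassis * cores_per_cpu ))
ht_threads=$(( physical_cores * 2 ))
echo "physical cores: ${physical_cores}"   # 72
echo "hyper-threaded:  ${ht_threads}"      # 144
```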



Haystack Server Cluster

computer  chips  cores/chip  h-t cores  Xeon model  clock (GHz)  cache (MB)
sc01      2      4           8          E5405       2.00         6 (L2)
sc02      2      4           8          E5405       2.00         6 (L2)
sc08      1      4           4          E5462       2.80         12 (L3)
sc09      1      4           8          E5630       2.53         12 (L3)
corr01    2      6           24         X5650       2.67         12 (L3)
corr02    2      6           24         X5650       2.67         12 (L3)
corr03    2      6           24         X5650       2.67         12 (L3)
corr04    2      6           24         X5650       2.67         12 (L3)
corr05    2      6           24         X5650       2.67         12 (L3)
corr06    2      6           24         X5650       2.67         12 (L3)


Mk5's on Haystack Correlators

#    model  OS      version   IPP  SDK
00   5A     Debian  4.1.1-21  v7   8.2
01   5A     Debian  4.1.1-21  v5   8.3
02   5A     Debian  4.1.1-21  v7   8.3
03   5A     Debian  4.1.1-21  v5   8.2
04   5A     Debian  4.1.1-21  v5   8.2
05   5A     Debian  4.1.1-21  v5   8.2
06   5A     Debian  4.1.1-21  v5   8.2
07   5B     RedHat  3.2.2-5   v5   7.x
08   5B     RedHat  3.2.3-47  v5   7.x
09   5B     Debian  5.0.8     v7   9.0
10   5B     Debian  4.0       v7   8.3
11   5B     Debian  4.0       v7   8.3
12   5B     Debian  5.0.8     v7   9.0
13   5B     Debian  5.0.5     v7   9.0

(openmpi 1.4.3 is installed on ten of the fourteen units; the per-unit gigE / 10gE / IB / difx capability marks are not recoverable from this copy.)


InfiniBand Status

· QLogic switch requires management (opensm)
· Using the pml cm (which seems to require identical hardware)
  · mpispeed achieves up to 23 Gb/s
  · With pml ob1 the speed drops to 13 Gb/s over the same path
· Haven't yet been able to transfer data with mpispeed from Mk5's to servers
  · 10 Gb/s Mellanox HCA × 40 Gb/s QLogic switch
  · Need to rebuild openMPI to get missing BTL components
· Hope to get cluster ready for operations this week
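For context on the 23 Gb/s mpispeed figure (this encoding detail is general InfiniBand background, not from the slides): QDR InfiniBand signals at 40 Gb/s but uses 8b/10b encoding, so the usable data rate tops out at 32 Gb/s.

```shell
# QDR IB: 40 Gb/s signalling rate, 8b/10b encoding => 8/10 of that is usable.
signal_gbps=40
data_gbps=$(( signal_gbps * 8 / 10 ))       # usable data rate: 32 Gb/s
measured_gbps=23                            # mpispeed result from the slide
pct=$(( measured_gbps * 100 / data_gbps ))  # integer percentage
echo "usable: ${data_gbps} Gb/s, efficiency: ${pct}%"   # 32 Gb/s, 71%
```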





· pml – point-to-point management layer
· cm – ("Connor MacLeod") connection manager; uses exposed matching fabrics
· ob1 – ("Obi-Wan") high-performance pml implementation in openmpi; uses RDMA if possible
· btl – byte transfer layer
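These layers are selected at run time through Open MPI's MCA parameters; a sketch of the two configurations compared on the status slide (host file, process count, and binary name are illustrative):

```shell
# Force the cm pml (needs a matching-capable fabric end-to-end):
mpirun --mca pml cm -np 12 -hostfile ib_hosts ./mpispeed

# Use ob1 instead, layered on BTL components (openib = InfiniBand):
mpirun --mca pml ob1 --mca btl openib,self -np 12 -hostfile ib_hosts ./mpispeed
```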