MPI: Using Platform MPI
Hosts File
When running on nodes (not through LSF), the hosts file needs to be in the format:
host1:12
host2:12

Note: the plain nodename format (without :cores) worked for up to 10 cores, but failed with ssh errors at 11/12 cores.
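With that file saved as ./hosts, a launch across both nodes might look like the following sketch (the rank count and the binary name ./a.out are placeholders, not from this page):

mpirun -np 24 -hostfile ./hosts ./a.out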
Debugging
- Include the -prot flag to see which communication protocol/interface is used between hosts
- Include the -d flag to turn debugging on
mpirun -d -prot -np 24 -hostfile ./hosts ./binary

Run on particular subnet (eth1)
This is best added to the modulefile:
setenv MPIRUN_OPTIONS "-prot subnet 10.41.32.0"
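If you use environment modules, a minimal modulefile fragment might look like this sketch (the install path /opt/platform_mpi and the surrounding lines are assumptions, not taken from this page):

#%Module1.0
## Hypothetical Platform MPI modulefile fragment; /opt/platform_mpi is an assumed install path
set          mpiroot /opt/platform_mpi
setenv       MPI_ROOT $mpiroot
prepend-path PATH     $mpiroot/bin
# Report the protocol in use and keep traffic on the eth1 subnet
setenv       MPIRUN_OPTIONS "-prot subnet 10.41.32.0"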
Bind to CPU (affinity)

- Map the first rank on each node to core 0 (ppn=1):
-cpu_bind=v,MAP_CPU:0

- Map 2 MPI processes to each physical CPU (on 6-core Intel nodes):
-cpu_bind=v,MAP_CPU:0,6
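A full launch line combining a hosts file and CPU binding might look like this sketch (the rank count, hosts file contents, and binary name are placeholders; here assuming two ranks per node pinned to cores 0 and 6):

mpirun -np 4 -hostfile ./hosts -cpu_bind=v,MAP_CPU:0,6 ./a.out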
Force to run on Eth (When IB Present)

mpirun -TCP -np 2 -hostlist "node1 node2" a.out

Set Compiler for MPI
GNU
setenv MPI_CC /usr/bin/gcc
setenv MPI_CXX /usr/bin/g++
setenv MPI_F77 /usr/bin/g77
setenv MPI_F90 /usr/bin/gfortran
setenv CC /usr/bin/gcc
setenv FC /usr/bin/gfortran

Intel
setenv MPI_CXX icc
setenv MPI_CC icc
setenv MPI_F77 ifort
setenv MPI_F90 ifort
setenv CC icc
setenv FC ifort
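Once these are set, the Platform MPI compiler wrappers should pick up the chosen compilers; a minimal sketch, assuming the wrappers are on your PATH and hello.c / hello.f90 are your own source files:

mpicc  -o hello   hello.c      # C, built with MPI_CC (gcc or icc as set above)
mpif90 -o hello_f hello.f90    # Fortran 90, built with MPI_F90 (gfortran or ifort as set above)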
Run without LSF

export MPI_USELSF=0
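Putting it together, a non-LSF run from a shell might look like this sketch (rank count, hosts file, and binary name are placeholders):

export MPI_USELSF=0
mpirun -prot -np 24 -hostfile ./hosts ./a.out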