MPI: Using Intel MPI

Quick notes: to be updated.

* Ensure ib0 is set up and working, i.e. it has an IP address and you can ping other IB hosts (see the check below)
* Run using the <tt>-env I_MPI_DEVICE rdma</tt> option so the job uses the InfiniBand fabric
* Set up /etc/security/limits.conf with unlimited memlock (see below)
* TODO: add the mpdboot command for IB
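
A quick way to check the first point from any node (the peer hostname compute-0-1 below is only an example):
<syntaxhighlight>
# confirm ib0 is up and has an IPoIB address
ip addr show ib0

# confirm another IB host is reachable over IPoIB
ping -c 3 compute-0-1
</syntaxhighlight>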

== Run IMB over IB ==
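The mpirun command below takes its hosts from ./hostsfile. A minimal sketch of that file, assuming two nodes with one rank each (compute-0-1 is a placeholder; compute-0-0 is the node used in the run below):
<syntaxhighlight>
compute-0-0
compute-0-1
</syntaxhighlight>
With that in place, run the PingPong benchmark: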
<syntaxhighlight>
[root@compute-0-0 src]# /share/apps/intel/impi/4.1.0.024/intel64/bin/mpirun -np 2 -machinefile ./hostsfile -env I_MPI_DEVICE rdma ./IMB-MPI1.mpi.intel pingpong
benchmarks to run pingpong
#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V3.2.3, MPI-1 part
#---------------------------------------------------
# Date : Fri Nov 30 13:38:29 2012
# Machine : x86_64
# System : Linux
# Release : 2.6.32-220.13.1.el6.x86_64
# Version : #1 SMP Tue Apr 17 23:56:34 BST 2012
# MPI Version : 2.2
# MPI Thread Environment:
# New default behavior from Version 3.2 on:
# the number of iterations per message size is cut down
# dynamically when a certain run time (per message size sample)
# is expected to be exceeded. Time limit is defined by variable
# "SECS_PER_SAMPLE" (=> IMB_settings.h)
# or through the flag => -time
# Calling sequence was:
# ./IMB-MPI1.mpi.intel pingpong
# Minimum message length in bytes: 0
# Maximum message length in bytes: 4194304
#
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#
# List of Benchmarks to run:
# PingPong
#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
#bytes #repetitions t[usec] Mbytes/sec
0 1000 1.78 0.00
1 1000 1.83 0.52
2 1000 1.84 1.04
4 1000 1.83 2.09
8 1000 1.82 4.20
16 1000 1.82 8.38
32 1000 1.89 16.15
64 1000 1.93 31.66
128 1000 2.33 52.36
256 1000 3.37 72.50
512 1000 3.97 122.94
1024 1000 5.64 173.02
2048 1000 6.59 296.58
4096 1000 7.55 517.62
8192 1000 8.99 868.54
16384 1000 12.14 1286.70
32768 1000 13.50 2314.90
65536 640 21.91 2853.07
131072 320 38.66 3232.98
262144 160 72.10 3467.26
524288 80 142.64 3505.40
1048576 40 276.61 3615.16
2097152 20 544.45 3673.41
4194304 10 1082.21 3696.15
</syntaxhighlight>
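
To confirm the benchmark really went over the RDMA device rather than falling back to TCP, it can help to rerun with Intel MPI's debug output enabled; a sketch (the exact startup messages vary between Intel MPI versions):
<syntaxhighlight>
# I_MPI_DEBUG=2 makes Intel MPI print which device was selected at startup
mpirun -np 2 -machinefile ./hostsfile -env I_MPI_DEVICE rdma -env I_MPI_DEBUG 2 ./IMB-MPI1.mpi.intel pingpong
</syntaxhighlight>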

== /etc/security/limits.conf ==
Note: if you have your mpds started before updating this file, you will need to restart them (<tt>mpdallexit</tt>).

The Intel MPI Library does not propagate shell limits across the job. Whatever limits are valid at the time of the MPD ring startup on a particular node will be used by all instances of the Intel MPI Library jobs afterwards.
<syntaxhighlight>
# ensure the following 2 lines are present
* soft memlock unlimited
* hard memlock unlimited
</syntaxhighlight>
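
Since the limits are only picked up when the MPD ring starts, it is worth confirming the new memlock limit is actually in effect on every node before restarting the ring. A minimal check, assuming passwordless ssh and the same ./hosts file used with mpdboot below:
<syntaxhighlight>
# print the max locked-memory limit on each node; every line should report "unlimited"
for h in $(cat ./hosts); do ssh "$h" 'printf "%s: " "$(hostname)"; ulimit -l'; done
</syntaxhighlight>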

== Using mpdboot ==
Start up the mpd process on all cluster nodes:
<syntaxhighlight>
# create a hosts file (one node name per line), then start the ring on 16 nodes via ssh
mpdboot -n 16 -f ./hosts -r ssh -1
mpdboot -n 16 -f ./hosts
</syntaxhighlight>

Verify it is working:
<syntaxhighlight>
[mpiuser@compute000 intel64]$ mpdtrace
compute000
compute004
compute011
..
</syntaxhighlight>
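
Once mpdtrace shows the ring, jobs can be launched through it. A sketch of rerunning the PingPong test over IB this way (it assumes IMB-MPI1.mpi.intel is in the current directory, as in the mpirun example above):
<syntaxhighlight>
# launch through the running mpd ring, again forcing the RDMA device
mpiexec -n 2 -env I_MPI_DEVICE rdma ./IMB-MPI1.mpi.intel pingpong
</syntaxhighlight>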