MPI: Using OpenMPI

Ignore the 'No IB network available' message and run over TCP on eth0

mpirun --mca btl ^udapl,openib --mca btl_tcp_if_include eth0
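To check which BTL components your Open MPI build actually provides (for instance, whether openib or udapl is present at all), ompi_info can list them:

ompi_info | grep btl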

Run 3 copies of program1 using the openib, tcp and self BTLs for MPI message transport, with TCP restricted to the eth0 interface

mpirun -np 3 -mca btl openib,tcp,self -mca btl_tcp_if_include eth0 ./program1

Run with some debugging info turned on

mpirun --mca btl_base_verbose 30 -np 2 -host NodeA,NodeB a.out

Run with CPU Binding enabled

mpirun --mca mpi_paffinity_alone 1
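A fuller sketch, assuming an older (1.x) Open MPI where mpi_paffinity_alone still exists and a hypothetical ./my_app binary; -report-bindings prints where each rank ends up:

mpirun -np 4 --mca mpi_paffinity_alone 1 -report-bindings ./my_app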

MIMD: e.g. launching a job at Liverpool Uni. on x8 Intel nodes (96 cores) with binding to core ON, and x8 AMD nodes (128 cores)

mpirun -H aa,bb,cc,dd -np 96 -report-bindings -bycore -bind-to-core ./xhpl : -H ee,ff,gg,ii -np 128 ./xhpl
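On newer Open MPI releases (1.8 and later) -bycore and -bind-to-core are deprecated in favour of --map-by and --bind-to; a sketch of the Intel-node half of the job in that syntax:

mpirun -H aa,bb,cc,dd -np 96 -report-bindings --map-by core --bind-to core ./xhpl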

Clean up any stale processes and files left over from Open MPI jobs on the nodes listed in the hostfile nodes_files

mpirun --pernode --hostfile nodes_files orte-clean

Redirecting standard IO

mpirun -H aa,bb,cc -np 2 my_app < my_input > my_output
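Note that by default mpirun forwards stdin only to rank 0. To capture each rank's output in a separate file instead of interleaving it, Open MPI's --output-filename option can be used (the exact file-naming scheme varies between versions; my_output here is just an illustrative prefix):

mpirun -H aa,bb,cc -np 2 --output-filename my_output my_app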

Passing a host list from the CLI

mpirun -np 3 --host a,b,c hostname
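The same thing can be done with a hostfile; a minimal sketch, assuming a file called myhosts with illustrative per-node slot counts:

# myhosts:
a slots=4
b slots=4
c slots=4

mpirun -np 12 --hostfile myhosts hostname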

A strange one: a cluster had both NetEffect 10GbE and QDR InfiniBand; to ignore the 10GbE and run on IB:

mpirun --mca btl ^nes0 --mca btl_openib_if_include mlx4_0  -np 2 ./IMB-MPI1 pingpong

Intel cluster with FDR and 10GbE on the same Mellanox PCI card (use mlx4_0:1)

mpirun -np 32 --mca btl_openib_if_include mlx4_0:1 -machinefile ./machines-32 ./imb-ompi -npmin 32 allgather

OR using multiple cards/ports in round-robin (RR) fashion

mpirun -mca btl_openib_if_include "mlx4_0:1,mlx4_1:1"
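To see which HCA device and port names exist on a node before picking an if_include value (this assumes the libibverbs utilities are installed), ibv_devinfo lists them; the grep just trims the output:

ibv_devinfo | grep -E 'hca_id|port:'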

Running on Chelsio cards on a system that also has FDR/QDR InfiniBand

mpirun -np 2 -machinefile ./machines-2c --mca mpi_paffinity_alone 1 --mca btl_openib_if_include cxgb4_0 ./imb-ompi