MPI: Using OpenMPI
Ignore the 'No IB network available' message and run on eth0:
mpirun --mca btl ^udapl,openib --mca btl_tcp_if_include eth0
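
These MCA settings do not have to be typed every time; Open MPI also reads per-user defaults from $HOME/.openmpi/mca-params.conf, so a sketch of the equivalent file entries (same parameters as above) would be:
# $HOME/.openmpi/mca-params.conf: one "name = value" per line
btl = ^udapl,openib
btl_tcp_if_include = eth0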

Run 3 copies of program1 using the openib, tcp and self BTLs for the transport of MPI messages, with TCP using only the eth0 interface to communicate:
mpirun -np 3 -mca btl openib,tcp,self -mca btl_tcp_if_include eth0 ./program1

Run with some debugging info turned on:
mpirun --mca btl_base_verbose 30 -np 2 -host NodeA,NodeB a.out
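
To see which BTLs and BTL parameters your particular build actually supports (handy before reaching for any of the --mca options on this page), ompi_info can list them:
ompi_info | grep btl          # which BTL components are compiled in
ompi_info --param btl all     # BTL parameter list (append --level 9 on newer releases to see everything)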

Run with CPU binding enabled:
mpirun --mca mpi_paffinity_alone 1
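
Note that mpi_paffinity_alone is the old 1.x-era way of asking for binding; newer Open MPI releases express the same request with --bind-to, roughly:
mpirun --bind-to core --report-bindings -np 4 ./program1    # --report-bindings prints where each rank landed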

MIMD: e.g. launching a job across Liverpool Uni. nodes, 8 Intel nodes (96 cores) with binding to core ON plus 8 AMD nodes (128 cores):
mpirun -H aa,bb,cc,dd -np 96 -report-bindings -bycore -bind-to-core ./xhpl : -H ee,ff,gg,ii -np 128 ./xhpl
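
The same MIMD launch can also be kept in an application context file and started with --app (my_appfile is a made-up name here); each line of the file carries the mpirun options for one executable, so a sketch of what my_appfile could contain:
# my_appfile: one application context per line, comments allowed
-H aa,bb,cc,dd -np 96 -report-bindings -bycore -bind-to-core ./xhpl
-H ee,ff,gg,ii -np 128 ./xhpl
and the whole job is then launched with:
mpirun --app my_appfile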

Clean up any stale processes and files left over from Open MPI jobs on the nodes in the hostfile nodes_files:
mpirun --pernode --hostfile nodes_files orte-clean

Redirecting standard I/O:
mpirun -H aa,bb,cc -np 2 my_app < my_input > my_output
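
If each rank should get its own output file rather than sharing one redirect, mpirun also has --output-filename (the exact per-rank file naming it produces varies between Open MPI versions); a sketch reusing the hosts above:
mpirun -H aa,bb,cc -np 2 --output-filename my_output my_app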

Passing a hosts list from the CLI:
mpirun -np 3 --host a,b,c hostname
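
The hosts can equally be kept in a hostfile (my_hosts is a made-up name; slots= says how many processes each node should take), e.g. a my_hosts containing:
a slots=4
b slots=4
c slots=4
used as:
mpirun -np 8 --hostfile my_hosts hostname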

A strange one, but we had a cluster with both NetEffect 10GbE and QDR IB; to ignore the 10GbE and run on the IB:
mpirun --mca btl ^nes0 --mca btl_openib_if_include mlx4_0 -np 2 ./IMB-MPI1 pingpong
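
The names that btl_openib_if_include expects (mlx4_0, nes0, cxgb4_0 and so on) are the node's RDMA device names; assuming the usual verbs utilities are installed, they can be checked with:
ibv_devices    # short list of RDMA device names
ibstat         # per-device and per-port details (state, rate, link layer)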

Intel cluster with FDR and 10GbE on the same PCI card (Mellanox); use mlx4_0:1 to pick out the right port:
mpirun -np 32 --mca btl_openib_if_include mlx4_0:1 -machinefile ./machines-32 ./imb-ompi -npmin 32 allgather

Running on Chelsio cards when FDR/QDR cards are also present in the same system:
mpirun -np 2 -machinefile ./machines-2c --mca mpi_paffinity_alone 1 --mca btl_openib_if_include cxgb4_0 ./imb-ompi
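
To double-check that a run like this really went over the Chelsio device (cxgb4_0) rather than another card, the btl_base_verbose setting from earlier can be bolted onto the same command line, e.g.:
mpirun -np 2 -machinefile ./machines-2c --mca btl_openib_if_include cxgb4_0 --mca btl_base_verbose 30 ./imb-ompi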