Controlling MPI Process Placement

The mpirun command controls how the ranks of the processes are allocated to the nodes of the cluster. By default, mpirun uses group round-robin assignment, placing consecutive MPI processes on all processor cores of one node before moving on to the next node. This placement algorithm may not be the best choice for your application, particularly on clusters with symmetric multiprocessor (SMP) nodes.

Suppose that the target geometry is <#ranks> = 4 and <#nodes> = 2, with adjacent pairs of ranks assigned to each node (for example, for two-way SMP nodes). To see the cluster nodes, enter the following command:

cat ~/mpd.hosts

The results should look as follows:

clusternode1
clusternode2

To distribute the four processes of the application equally across the two-way SMP nodes, enter the following command:

mpirun -perhost 2 -n 4 ./myprog.exe

The output for the myprog.exe executable file may look as follows:

Hello world: rank 0 of 4 running on clusternode1
Hello world: rank 1 of 4 running on clusternode1
Hello world: rank 2 of 4 running on clusternode2
Hello world: rank 3 of 4 running on clusternode2
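Output of this form could be produced by a minimal MPI program along the following lines. This is an illustrative sketch, not the actual source of myprog.exe; the file name myprog.c and the exact message format are assumptions.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(name, &len);     /* host name of the node running this rank */

    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Compile it with the Intel MPI compiler wrapper, for example: mpiicc myprog.c -o myprog.exe.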

Alternatively, you can explicitly set the number of processes to run on each host by using argument sets. This is useful, for example, when your application follows the master-worker model. The following command distributes the four processes equally between clusternode1 and clusternode2:

mpirun -n 2 -host clusternode1 ./myprog.exe : -n 2 -host clusternode2 ./myprog.exe
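Argument sets can also express uneven distributions and different executables per host. For example, a master-worker launch might place one master rank on clusternode1 and three worker ranks on clusternode2; the program names master.exe and worker.exe below are illustrative assumptions:

```shell
# One master process on clusternode1, three worker processes on clusternode2
mpirun -n 1 -host clusternode1 ./master.exe : -n 3 -host clusternode2 ./worker.exe
```

All processes launched by the argument sets still belong to a single MPI_COMM_WORLD, with ranks numbered in the order the sets are listed.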

See Also

For more details, see the Local Options topic of the Intel® MPI Library Reference Manual for Linux* OS.

For more information about controlling MPI process placement, see Controlling Process Placement with the Intel® MPI Library online.