
Greetings, Master

The ``Hello World'' program from section 8.2.1 ran in parallel, but participating processes did not exchange any messages, so the parallelism was trivial.

In this section we're going to have a look at our first non-trivial parallel program.

Here it is:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define TRUE 1
#define FALSE 0
#define MASTER_RANK 0

int main(int argc, char *argv[])
{
   int count, pool_size, my_rank, my_name_length, i_am_the_master = FALSE;
   char my_name[BUFSIZ], master_name[BUFSIZ], send_buffer[BUFSIZ],
        recv_buffer[BUFSIZ];
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &pool_size);
   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
   MPI_Get_processor_name(my_name, &my_name_length);

   if (my_rank == MASTER_RANK) {
      i_am_the_master = TRUE;
      strcpy (master_name, my_name);
   }

   MPI_Bcast(master_name, BUFSIZ, MPI_CHAR, MASTER_RANK, MPI_COMM_WORLD);

   sprintf(send_buffer, "hello %s, greetings from %s, rank = %d",
           master_name, my_name, my_rank);
   MPI_Send (send_buffer, strlen(send_buffer) + 1, MPI_CHAR,
             MASTER_RANK, 0, MPI_COMM_WORLD);

   if (i_am_the_master) {
      for (count = 1; count <= pool_size; count++) {
         MPI_Recv (recv_buffer, BUFSIZ, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
                   MPI_COMM_WORLD, &status);
         printf ("%s\n", recv_buffer);
      }
   }

   MPI_Finalize();
   return 0;
}

And here is the synopsis of the program:

Each process finds out about the size of the process pool, its own rank within the pool, and the name of the processor it runs on.
The process of rank 0 becomes the master process.
The master process broadcasts the name of the processor it runs on to the other processes.
Each process, including the master process, constructs a greeting message and sends it to the master process; the master process sends the message to itself.
The master process collects the messages and displays them on standard output.
This is the way to organise I/O if only certain processes can write to the screen or to files.

Let us compile and run this program on our SP:

gustav@sp19:../MPI 20:51:34 !511 $ mpcc -o hello hello.c
gustav@sp19:../MPI 20:52:01 !512 $ cat hello.ll
# @ job_type = parallel
# @ environment = COPY_ALL; MP_EUILIB=ip; MP_INFOLEVEL=3
# @ requirements = (Adapter == "hps_ip") && (Machine != "sp20") \
                   && (Machine != "sp18")
# @ min_processors = 4
# @ max_processors = 8
# @ class = test
# @ notification = never
# @ executable = /usr/bin/poe
# @ arguments = hello
# @ output = hello.out
# @ error = hello.err
# @ queue
gustav@sp19:../MPI 20:52:06 !513 $ llsubmit hello.ll
submit: The job "sp19.106" has been submitted.
gustav@sp19:../MPI 20:52:11 !514 $ cat hello.out
hello, greetings from, rank = 0
hello, greetings from, rank = 1
hello, greetings from, rank = 3
hello, greetings from, rank = 2
hello, greetings from, rank = 4
hello, greetings from, rank = 5
gustav@sp19:../MPI 20:52:45 !515 $

Now let us explain in more detail what happens here.

When you look at an MPI program and try to trace its logic, think of yourself as one of the processors.

And so, you begin execution and the first statement that you encounter is

   MPI_Init(&argc, &argv);
What this statement tells you is that you are not alone. There are others like you, and all of you comprise a pool of MPI processes. How many processes are there in that pool altogether? To find out, you issue the command
   MPI_Comm_size(MPI_COMM_WORLD, &pool_size);
which, translated into English, means:
How many processes are there in the default communicator, MPI_COMM_WORLD, which is guaranteed to encompass all processes in the pool? Please put the answer in the variable pool_size.
When this function returns you know how many colleagues you have. But the next pressing question is: how can you distinguish yourself from the others? Are you all alike? Are you all indistinguishable?

When processes are born, each is born with a different number, much as each human is born with different DNA and different fingerprints. That number is called the rank number, and if you are an MPI process you can find out what your rank number is by calling the function:

   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
The English translation of this call is:
What is my rank number in the default communicator MPI_COMM_WORLD? Please put the answer in the variable my_rank.

A process such as yourself can belong to many communicators. You always belong to MPI_COMM_WORLD, but within the world you can have many sub-worlds; let's call them states. If you have multiple citizenships, you will also have multiple tax numbers, or multiple social security numbers, that distinguish you from other citizens of those states. By the same token, a process that belongs to many communicators may have a different rank number in each of them, so when you ask about your rank number you must specify a communicator too.

OK, by now you know how many other processes there are in the pool, and what your rank number within that pool is. You can also find the name of the processor that you yourself run on, and this is done in a way that you have already seen in section 8.2.1. You call the function:

   MPI_Get_processor_name(my_name, &my_name_length);
which, translated into English, means:
What is the name of the processor that I run on? Please put the name in the variable my_name and put the length of that name in my_name_length.

So far every process in the pool would have performed exactly the same operations. There has been no communication between you guys yet. But now you all check if your rank number is the same as a predefined MASTER_RANK number. Who defines what the MASTER_RANK number is? In this case it is the programmer, the God of MPI processes. But on some systems all processes may go through additional environmental enquiries and check for the existence of a host process or processes which can do I/O, and so on, and then jointly decide on which is going to be the MASTER.

Well, here the MASTER has been anointed by God.

Only one process will discover that he or she is the anointed one. That one process will place TRUE in the i_am_the_master variable; for all other processes that variable will remain FALSE. This one process will laboriously copy its name into the variable master_name. For all other processes that string remains unset until the broadcast fills it in.

But all other processes will know that they are not the master, and they will know who the master is, because by now they all know that their rank is not MASTER_RANK.

At this stage all processes that are not the master subject themselves to receiving a broadcast from the master. All processes, including yourself (regardless of whether you are the master or not), perform this operation at the same time, and all of them end up with the same message in the variable master_name. This message is the name of the processor the master process runs on. The name has been copied from the variable master_name of the master process and written into the variables called master_name that belong to the other processes. The MPI machinery will have done all that.

This operation is accomplished by calling:

   MPI_Bcast(master_name, BUFSIZ, MPI_CHAR, MASTER_RANK, MPI_COMM_WORLD);
In plain English the meaning of this call is as follows:
Copy BUFSIZ data items of type MPI_CHAR from a buffer called master_name that is managed by the process whose rank is MASTER_RANK within the MPI_COMM_WORLD communicator, to which I must belong too, into my own buffer also called master_name.

At this stage, whether you are a slave process or a master process, you are very knowledgeable about your MPI_COMM_WORLD universe. And, if you are a slave process, you are prudent enough to prepare and send a congratulatory message to the master process. And so first you write the message into your send_buffer:

   sprintf(send_buffer, "hello %s, greetings from %s, rank = %d",
           master_name, my_name, my_rank);
And observe that you write this message even if you are the master. Well, there is nothing wrong with congratulating yourself. Some people do it all the time.

Having prepared the message you send it to the master process, and if you are the master process you send it to yourself, which is fine too. Some people seldom receive messages from anyone else.

Here is how you will have accomplished this task:

   MPI_Send (send_buffer, strlen(send_buffer) + 1, MPI_CHAR,
             MASTER_RANK, 0, MPI_COMM_WORLD);
In plain English the meaning of this operation is as follows:
Send strlen(send_buffer) + 1 data items (don't forget about the terminating null character, for which the function strlen does not account) of type MPI_CHAR, which have been deposited in send_buffer, to the process whose rank is MASTER_RANK. Attach a tag 0 to that message (to distinguish it from other messages that the master process may receive from elsewhere, perhaps). The ranking and communication refer to the MPI_COMM_WORLD communicator.

If you are a slave process then this is about all that you are supposed to do in this program, so now you can relax and spin, or go home.

But if you are a master process you have to collect all those messages that have been sent to you and print them on standard output in the receive order.

How many messages are you going to receive, master? There will be pool_size messages sent to you from all processes including yourself. So you can just as well enter a for loop and receive all those pool_size messages, knowing, when you count the last one, that your job is done too.

To receive a message you do as follows:

   MPI_Recv (recv_buffer, BUFSIZ, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
which in plain English means:
Let me receive up to BUFSIZ data items of type MPI_CHAR into my array recv_buffer, from any source (MPI_ANY_SOURCE) and with any tag (MPI_ANY_TAG), within MPI_COMM_WORLD. The status of the received message should be written into the structure status.

It is possible to find out a lot about a message before you receive it. You can find out how long it is, where it comes from, what type the data items inside the message are, and so on. But in this case the master process doesn't bother. The logic of the program is simple enough. God, i.e., the programmer, told the master process to receive pool_size messages, so receive them it shall. And it shall print them on standard output as it receives them.

Once this point in the program is reached, all processes hit MPI_Finalize, which is the end of the world for them.

And the beginning of the debugging process for the Programmer.

Zdzislaw Meglicki