Next: Shared Memory Paradigm Up: Parallel Programming Previous: Data Parallel Paradigm

Message Passing Paradigm

The message passing parallel programming paradigm is very flexible, universal, can be highly efficient, and is absolutely ghastly to use. But since the first three outweigh the last, message passing wins and is currently used by most parallel production codes.

As was the case with data parallel programming, message passing is a paradigm and can be implemented on systems with various architectures. You can use message passing on clusters, on SMPs, even on single-CPU machines, and on fancy supercomputers like the Cray T3E and the Cray X1.

In the message passing paradigm your computer program can be logically split into as many different processes as you need. You can even have more processes than you have CPUs, though usually you try to match the two numbers. The processes can all run quite different codes, and they can run on CPUs that are geographically distant. For example, you could have one process running in Bloomington and another one running in Gary.
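The process decomposition idea can be sketched outside of MPI with Python's standard multiprocessing module. This is only an illustration of the paradigm, not MPI itself, and the worker functions below are invented for the example: each process runs entirely different code, and we can start more processes than we have CPUs.

```python
# Illustration of process decomposition using Python's standard
# multiprocessing module (standing in for MPI processes): each
# process runs different code, and the process count need not
# match the CPU count.
import multiprocessing as mp

def integrate(n):
    # one process might do numerical work ...
    return sum(i * i for i in range(n))

def report(label):
    # ... while another one formats results or does IO
    return f"process {label} finished"

if __name__ == "__main__":
    # 4 worker processes, regardless of how many CPUs we have
    with mp.Pool(processes=4) as pool:
        a = pool.apply_async(integrate, (1000,))
        b = pool.apply_async(report, ("worker-1",))
        print(a.get())   # 332833500
        print(b.get())   # process worker-1 finished
```

In MPI proper the decomposition looks different (all processes start from the same executable and branch on their rank), but the underlying idea of independent, cooperating processes is the same.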

The processes exchange data with one another by sending messages. A process can send a message in such a way that it doesn't care whether anybody receives it: messages that have been sent do not need to be received. Usually they are, though, and there are special function calls for receiving messages too.
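The send/receive pattern can be sketched with Python's standard multiprocessing queues, again as a stand-in for MPI (whose corresponding calls are MPI_Send and MPI_Recv); the message layout and function names here are invented for the illustration.

```python
# Sketch of message passing between two processes, using Python's
# standard multiprocessing module in place of MPI_Send / MPI_Recv.
import multiprocessing as mp

def sender(q):
    # The sender posts the message and returns immediately; it does
    # not know or care whether anyone ever receives it.
    q.put({"tag": 7, "data": [1.0, 2.0, 3.0]})

def receiver(q, out):
    # The receiver blocks until a message arrives, then works on it.
    msg = q.get()
    out.put(sum(msg["data"]))

if __name__ == "__main__":
    q, out = mp.Queue(), mp.Queue()
    p1 = mp.Process(target=sender, args=(q,))
    p2 = mp.Process(target=receiver, args=(q, out))
    p1.start(); p2.start()
    print(out.get())     # 6.0
    p1.join(); p2.join()
```

Note the asymmetry: the send completes regardless of what the rest of the program does, while the receive blocks until a message shows up, which is exactly what makes unmatched sends possible and unmatched receives a common source of hangs.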

The most commonly used library for message passing is called the Message Passing Interface (MPI). Two popular freeware implementations of it exist: one developed by Argonne National Laboratory, called MPICH, and another one developed by the Ohio Supercomputer Center, but currently maintained by the LAM organization and the LAM team at … Indiana University.

MPI is huge: it contains hundreds of functions for doing various things. We are going to study and use some of them in this course, with special emphasis on functions for parallel IO.

Because MPI is so flexible and universal, data parallel languages for clusters are often implemented on top of MPI, i.e., the compiler automatically converts a data parallel code into an MPI code. This is how the Portland Group HPF compiler works, and so does the IBM HPF compiler for the SP.

Zdzislaw Meglicki