next up previous index
Next: Writing on MPI Files Up: MPI and MPI-IO Previous: Handling MPI Errors


MPI-IO was developed in 1994 at IBM's Watson Research Center in order to provide parallel I/O support for MPI. NASA adopted MPI-IO for its own research computing projects in 1996, and in the same year the MPI Forum decided to incorporate MPI-IO into MPI-2. And so, when MPI-2 was published in 1997, MPI-IO was already part of it.

The reason NASA and the MPI Forum embraced MPI-IO so quickly was that it fits MPI very naturally. All MPI-IO function calls are reminiscent of MPI calls and very much in the spirit of MPI too: writing MPI files is similar to sending MPI messages, and reading MPI files is similar to receiving them. Furthermore, MPI-IO fully embraces the versatility and flexibility of MPI data types - and then takes this concept one step further by defining the so-called MPI file views.

Sending and receiving messages can be blocking or non-blocking. In this introductory course we have not worked with non-blocking sends and receives, because they are harder to use correctly. But if you want to optimize your parallel program and expect a communication bottleneck, non-blocking communication may help. The general idea is that you can start a message send and then immediately return to computation while the message is transmitted in the background. Similarly, you can keep computing while receiving a message in the background. This works best on systems that have processors dedicated to I/O, which do not in general participate in computations. The IBM BlueGene/L is an example of such a machine.

MPI-IO also lets you write and read files in the normal, i.e., blocking, mode as well as in the non-blocking mode - asynchronously - so that you can carry out computations while the file is being read or written in the background.

MPI-IO supports the concept of collective operations too. Processes can access MPI files each on its own, or all together at the same time. The latter allows read and write optimizations that can be implemented at various levels.

MPI-IO semantics are so convenient that people use MPI-IO even to write ordinary sequential files associated with individual processes.

In this section we are going to explore MPI-IO, beginning with simple parallel writes and reads and gradually moving on to more complex features.

Zdzislaw Meglicki