The History of MPI

Various message passing environments were developed in the early 1980s. Some were written for special-purpose machines, such as Caltech's N-cube (our Geoffrey Fox was amongst the authors of that software); others were developed for networks of UNIX workstations, e.g., Argonne's P4 library, and PICL and PVM from Oak Ridge. The Ohio Supercomputer Center developed a message passing library called LAM. There was a package developed specially for quantum chemistry, called TCGMSG, and there were commercially available libraries, like Express, which derived from the N-cube system.

By early 1992 it was clear that the authors of these numerous libraries were duplicating each other's efforts and constantly re-inventing the wheel. In late 1992 a meeting was called during the Supercomputing '92 conference, and the attendees agreed to develop, and then implement, a common standard for message passing that would incorporate all the interesting ideas developed so far and build on them. This is how MPI, the Message Passing Interface, was born.

Some message passing libraries did not fit the somewhat stiff model proposed by the participants of the conference. One such library was ISIS, developed by Cornell University's Ken Birman. Its authors decided to go it alone in a quite different direction: fault-tolerant distributed computing systems. This helped them make heaps of money (ISIS was deployed at major stock exchange operations around the world), and the technology eventually ended up at the door of Microsoft, which incorporated it into its clustering product called Wolfpack.

ISIS was based on the insanely great idea of virtual synchrony, but this idea wouldn't play well in the context of scientific computing, where synchronizing processes, even if only virtually, would carry a heavy performance price. MPI, by contrast, was biased unashamedly towards supercomputing, and the issues of fault tolerance and synchronization were not considered critical.

There were many industrial participants in the MPI club that helped finance the endeavour, amongst them Convex, Cray, IBM, Intel, Meiko, nCUBE, NEC and Thinking Machines. Some of these vendors bit the dust, but many continued to prosper and benefited from the development of MPI. IBM especially is in the latter group: the IBM SP is an MPI machine. Today virtually all PC and UNIX workstation clusters are MPI systems too, and MPI programs run on the Earth Simulator, the Cray X1, and large SMPs.

The first MPI standard, called MPI-1, was completed in May 1994. The second MPI standard, MPI-2, was completed in 1998. There was so much enthusiasm back in 1994 that the first MPI-1 implementations were released only about a year later, the most popular ones being Argonne's MPICH (based on the P4 package and Chameleon, hence the "CH" suffix) and the Ohio Supercomputer Center's LAM. There were still many supercomputer vendors around back then too, and they released their own implementations of MPI.

But by 1998, by the time the MPI-2 standard was formalized, much of this enthusiasm had evaporated, and the first implementation of this more advanced standard had to wait until November 2002. So MPI-2 is still very new. Yet it is of special interest to us, because it is only in MPI-2 that parallel IO operations were defined. These operations were invented and implemented originally in a package called MPI-IO, which was developed for NASA prior to MPI-2 standardization. The MPI designers liked it so much that they incorporated all of it into the new MPI-2 standard.

There is, to my knowledge, only one freeware implementation of MPI-IO floating about, and it is called ROMIO. It was developed by the same people who developed the original MPICH, together with their younger colleagues. ROMIO can be combined with MPI-1 as an external library, or incorporated directly into an MPI-2 implementation. It works much the same in both cases.

Zdzislaw Meglicki