The other MPI type constructor we have already seen used is `MPI_Type_vector`. We used this constructor in the program that exchanged columns of a matrix between processes using `MPI_Sendrecv`. In C columns are not laid out contiguously (rows are), so we had to call `MPI_Type_vector` to tell the program how to pick data from the matrix so that the whole column would be transferred in one go.

In Fortran we would have to do this for matrix rows, because in Fortran columns are laid out contiguously and rows are not.

The synopsis of this function in C is:

```c
MPI_Type_vector(int count, int blocklength, int stride,
                MPI_Datatype oldtype, MPI_Datatype *newtype)
```

and in Fortran:

```fortran
mpi_type_vector(count, blocklength, stride, oldtype, newtype, ierror)
integer count, blocklength, stride, oldtype, newtype, ierror
```

The function picks up `count` blocks of data of type `oldtype`. Each block is `blocklength` data items long. The separation between the beginning of one block and the beginning of the next one is `stride`. The newly constructed data type is associated with the memory location pointed to by `newtype`, which has been structured to store all information about this datatype. Within each block data items of type `oldtype` are laid out contiguously.
For example, if the old type is:

```
oldtype = {(double, 0), (char, 8)}
```

then the call

```c
MPI_Type_vector(2, 3, 4, named_double, &six_named_doubles)
```

will create a new MPI data type, which is going to have the following map:

```
newtype = {(double, 0),  (char, 8),
           (double, 16), (char, 24),
           (double, 32), (char, 40),
           (double, 64), (char, 72),
           (double, 80), (char, 88),
           (double, 96), (char, 104)}
```

In plain language: we are taking two blocks of data. Each block comprises 3 structures of type `named_double` (the map of which is `{(double, 0), (char, 8)}`) concatenated contiguously. The stride is set to 4, meaning that each block begins 4 items of type `named_double` after the beginning of the previous one. Since the extent of `named_double` is 16 bytes, the stride amounts to 4 × 16 = 64 bytes, which is why the second block starts at displacement 64.
Function `MPI_Type_contiguous` can be thought of as a special case of `MPI_Type_vector`:

```
MPI_Type_contiguous(count, oldtype, &newtype) = MPI_Type_vector(count, 1, 1, oldtype, &newtype)
```

This means that if you ever have to write your own MPI, you can begin by defining `MPI_Type_vector` and then write `MPI_Type_contiguous` as a simple wrapper around the former. But then it may also be the case that you can capitalize on some hardware features and write a faster implementation of `MPI_Type_contiguous` directly.
In `MPI_Type_vector` the stride is defined in terms of the extent of the basic data type used in the operation. There is a special variant of this function that lets you define the stride simply in bytes, if you know what they are. This function is called `MPI_Type_hvector` and its synopsis in C is:

```c
MPI_Type_hvector(int count, int blocklength, MPI_Aint stride,
                 MPI_Datatype oldtype, MPI_Datatype *newtype)
```

The Fortran synopsis of this function is:

```fortran
mpi_type_hvector(count, blocklength, stride, oldtype, newtype, ierror)
integer count, blocklength, stride, oldtype, newtype, ierror
```

In terms of `MPI_Type_hvector` the previous example:

```c
MPI_Type_vector(2, 3, 4, named_double, &six_named_doubles)
```

would be written as follows:

```c
MPI_Type_hvector(2, 3, 64, named_double, &six_named_doubles)
```

The stride of 4 extents of `named_double` (16 bytes each) becomes an explicit stride of 64 bytes.