MPI_ALLGATHER, MPI_Allgather

Purpose

Gathers individual messages from each task in comm and distributes the
resulting message to each task.

C synopsis

#include <mpi.h>
int MPI_Allgather(void* sendbuf, int sendcount, MPI_Datatype sendtype,
                  void* recvbuf, int recvcount, MPI_Datatype recvtype,
                  MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Allgather(const void* sendbuf, int sendcount,
                          const MPI::Datatype& sendtype, void* recvbuf,
                          int recvcount, const MPI::Datatype& recvtype) const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_ALLGATHER(CHOICE SENDBUF, INTEGER SENDCOUNT, INTEGER SENDTYPE,
              CHOICE RECVBUF, INTEGER RECVCOUNT, INTEGER RECVTYPE,
              INTEGER COMM, INTEGER IERROR)

Parameters

sendbuf
   is the starting address of the send buffer (choice) (IN)

sendcount
   is the number of elements in the send buffer (integer) (IN)

sendtype
   is the datatype of the send buffer elements (handle) (IN)

recvbuf
   is the address of the receive buffer (choice) (OUT)

recvcount
   is the number of elements received from any task (integer) (IN)

recvtype
   is the datatype of the receive buffer elements (handle) (IN)

comm
   is the communicator (handle) (IN)

IERROR
   is the FORTRAN return code. It is always the last argument.

Description

MPI_ALLGATHER is similar to MPI_GATHER, except that all tasks receive the
result instead of just the root. The block of data sent from task j is
received by every task and placed in the jth block of the buffer recvbuf.
The type signature associated with sendcount, sendtype at a task must be
equal to the type signature associated with recvcount, recvtype at any
other task.

The "in place" option for intracommunicators is specified by passing the
value MPI_IN_PLACE to sendbuf at all tasks. The sendcount and sendtype
arguments are ignored. The input data of each task is assumed to be in the
area where that task would receive its own contribution to the receive
buffer. Specifically, the outcome of a call to MPI_ALLGATHER in the
"in place" case is as if all tasks executed n calls to:

   MPI_GATHER(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf, recvcount,
              recvtype, root, comm)

for root = 0, ..., n - 1.

If comm is an intercommunicator, each task in group A contributes a data
item. These items are concatenated and the result is stored at each task
in group B. Conversely, the concatenation of the contributions of the
tasks in group B is stored at each task in group A. The send buffer
arguments in group A must be consistent with the receive buffer arguments
in group B, and vice versa. MPI_IN_PLACE is not supported for
intercommunicators.

When you use this subroutine in a threads application, make sure all
collective operations on a particular communicator occur in the same order
at each task. See IBM Parallel Environment for AIX: MPI Programming Guide
for more information on programming with MPI in a threads environment.

Errors

Fatal errors:

   Invalid communicator
   Invalid count(s): count < 0
   Invalid datatype(s)
   Type not committed
   Unequal message lengths
   Invalid use of MPI_IN_PLACE
   MPI not initialized
   MPI already finalized

Develop mode error if:

   Inconsistent message length

Related information

   MPE_IALLGATHER
   MPI_GATHER
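Sample program

The following complete C program is a short usage sketch, not part of the
formal reference above. It assumes only the standard MPI library declared
in mpi.h. Each task contributes its own rank as a single MPI_INT; after
the call, every task holds the ranks of all tasks in recvbuf, with the
value from task j in recvbuf[j].

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, ntasks, i;
    int sendval;
    int *recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* One element per task; note that recvcount is the per-task
       count (1 here), not the total number of elements received. */
    sendval = rank;
    recvbuf = (int *)malloc(ntasks * sizeof(int));

    MPI_Allgather(&sendval, 1, MPI_INT,
                  recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    /* Every task now holds identical contents in recvbuf. */
    if (rank == 0) {
        for (i = 0; i < ntasks; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
    }

    free(recvbuf);
    MPI_Finalize();
    return 0;
}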
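A sketch of the "in place" variant described above, under the same
assumptions: each task writes its contribution directly into its own block
of recvbuf and passes MPI_IN_PLACE as sendbuf. Because the sendcount and
sendtype arguments are then ignored, 0 and MPI_DATATYPE_NULL are used here
purely as placeholders.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, ntasks, i;
    int *recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Place this task's input where it would receive its own
       contribution: block `rank` of the receive buffer. */
    recvbuf = (int *)malloc(ntasks * sizeof(int));
    recvbuf[rank] = 100 + rank;

    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    /* recvbuf[j] now holds 100 + j on every task. */
    if (rank == 0) {
        for (i = 0; i < ntasks; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
    }

    free(recvbuf);
    MPI_Finalize();
    return 0;
}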