MPI_GATHER, MPI_Gather

Purpose

Collects individual messages from each task in comm at the root task.

C synopsis

#include <mpi.h>
int MPI_Gather(void* sendbuf, int sendcount, MPI_Datatype sendtype,
               void* recvbuf, int recvcount, MPI_Datatype recvtype,
               int root, MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Gather(const void* sendbuf, int sendcount,
                       const MPI::Datatype& sendtype, void* recvbuf,
                       int recvcount, const MPI::Datatype& recvtype,
                       int root) const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_GATHER(CHOICE SENDBUF, INTEGER SENDCOUNT, INTEGER SENDTYPE,
           CHOICE RECVBUF, INTEGER RECVCOUNT, INTEGER RECVTYPE,
           INTEGER ROOT, INTEGER COMM, INTEGER IERROR)

Description

This subroutine collects individual messages from each task in comm at the root task and stores them in rank order. The type signature of sendcount, sendtype on task i must be equal to the type signature of recvcount, recvtype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.

The following is information regarding MPI_GATHER arguments and tasks:

* On the task root, all arguments to the function are significant.
* On other tasks, only the arguments sendbuf, sendcount, sendtype, root, and comm are significant.
* The argument root must be the same on all tasks.

Note that the argument recvcount at the root indicates the number of items it receives from each task. It is not the total number of items received. A call where the specification of counts and types causes any location on the root to be written more than once is erroneous.

The "in place" option for intracommunicators is specified by passing MPI_IN_PLACE as the value of sendbuf at the root. In such a case, sendcount and sendtype are ignored, and the contribution of the root to the gathered vector is assumed to be already in the correct place in the receive buffer.

If comm is an intercommunicator, the call involves all tasks in the intercommunicator, but with one group (group A) defining the root task. All tasks in the other group (group B) pass the same value in root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other tasks in group A pass the value MPI_PROC_NULL in root. Data is gathered from all tasks in group B to the root. The send buffer arguments of the tasks in group B must be consistent with the receive buffer argument of the root. MPI_IN_PLACE is not supported for intercommunicators.

When you use this subroutine in a threads application, make sure all collective operations on a particular communicator occur in the same order at each task. See IBM Parallel Environment for AIX: MPI Programming Guide for more information on programming with MPI in a threads environment.

Parameters

sendbuf
  is the starting address of the send buffer (choice) (IN)
sendcount
  is the number of elements in the send buffer (integer) (IN)
sendtype
  is the datatype of the send buffer elements (handle) (IN)
recvbuf
  is the address of the receive buffer (choice, significant only at root) (OUT)
recvcount
  is the number of elements for any single receive (integer, significant only at root) (IN)
recvtype
  is the datatype of the receive buffer elements (handle, significant only at root) (IN)
root
  is the rank of the receiving task (integer) (IN)
comm
  is the communicator (handle) (IN)
IERROR
  is the FORTRAN return code. It is always the last argument.
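The following is a minimal C sketch of the calling sequence described above. The communicator, counts, datatypes, and root choice (MPI_COMM_WORLD, one MPI_INT per task, root 0) are illustrative assumptions, not requirements of MPI_GATHER.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, ntasks, i;
    int sendval;
    int *recvbuf = NULL;
    int root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Each task contributes one integer: its own rank. */
    sendval = rank;

    /* recvcount (here 1) is the number of items received from EACH task,
       so the root needs room for ntasks * recvcount elements. The receive
       arguments are significant only at the root. */
    if (rank == root)
        recvbuf = (int *)malloc(ntasks * sizeof(int));

    MPI_Gather(&sendval, 1, MPI_INT,
               recvbuf, 1, MPI_INT,
               root, MPI_COMM_WORLD);

    /* The root now holds the contributions in rank order. */
    if (rank == root) {
        for (i = 0; i < ntasks; i++)
            printf("gathered[%d] = %d\n", i, recvbuf[i]);
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}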
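For the "in place" option on an intracommunicator, only the root passes MPI_IN_PLACE as sendbuf; its sendcount and sendtype are then ignored. The fragment below, which reuses the variable names of the sketch above (rank, root, sendval, recvbuf), is an illustrative variation rather than a prescribed form.

/* The root's own contribution must already occupy its slot in recvbuf,
   that is, elements root*recvcount through root*recvcount + recvcount - 1. */
if (rank == root) {
    recvbuf[root] = sendval;                /* contribution already in place */
    MPI_Gather(MPI_IN_PLACE, 1, MPI_INT,    /* sendcount/sendtype ignored at root */
               recvbuf, 1, MPI_INT,
               root, MPI_COMM_WORLD);
} else {
    MPI_Gather(&sendval, 1, MPI_INT,
               NULL, 0, MPI_INT,            /* recv arguments not significant here */
               root, MPI_COMM_WORLD);
}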
Notes

In the 64-bit library, this function uses a shared memory optimization among the tasks on a node. This optimization is discussed in the chapter "Using shared memory" of IBM Parallel Environment for AIX: MPI Programming Guide, and is enabled by default. This optimization is not available to 32-bit programs.

Errors

Fatal errors:

Invalid communicator
Invalid count(s)
  count < 0
Invalid datatype(s)
Type not committed
Invalid root
  For an intracommunicator: root < 0 or root >= groupsize
  For an intercommunicator: root < 0 and is neither MPI_ROOT nor MPI_PROC_NULL, or root >= groupsize of the remote group
Unequal message lengths
Invalid use of MPI_IN_PLACE
MPI not initialized
MPI already finalized

Develop mode error if:

Inconsistent root
Inconsistent message length

Related information

MPE_IGATHER
MPI_ALLGATHER
MPI_GATHERV
MPI_SCATTER