MPI_GATHERV, MPI_Gatherv

Purpose

Collects individual messages from each task in comm at the root task. Messages can have different sizes and displacements.

C synopsis

#include <mpi.h>
int MPI_Gatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype,
                void* recvbuf, int *recvcounts, int *displs,
                MPI_Datatype recvtype, int root, MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Gatherv(const void* sendbuf, int sendcount,
                        const MPI::Datatype& sendtype, void* recvbuf,
                        const int recvcounts[], const int displs[],
                        const MPI::Datatype& recvtype, int root) const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_GATHERV(CHOICE SENDBUF, INTEGER SENDCOUNT, INTEGER SENDTYPE,
            CHOICE RECVBUF, INTEGER RECVCOUNTS(*), INTEGER DISPLS(*),
            INTEGER RECVTYPE, INTEGER ROOT, INTEGER COMM, INTEGER IERROR)

Description

This subroutine collects individual messages from each task in comm at the root task and stores them in rank order. Because recvcounts is an array, messages can have varying sizes, and displs gives you flexibility in where the data is placed on the root.

The type signature of sendcount, sendtype on task i must be equal to the type signature of recvcounts[i], recvtype at the root. This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.

Note the following about MPI_GATHERV arguments and tasks:
* On the root task, all arguments to the function are significant.
* On other tasks, only the arguments sendbuf, sendcount, sendtype, root, and comm are significant.
* The argument root must have the same value on all tasks.

A call where the specification of sizes, types, and displacements causes any location on the root to be written more than once is erroneous.
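The following sketch (not part of the original reference page) illustrates the pairwise send/receive matching described above: each task i contributes i+1 integers, and the root sizes recvcounts and displs to match. Compile with an MPI compiler wrapper such as mpicc and launch with, for example, mpirun -np 4.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendcount = rank + 1;               /* task i sends i+1 elements */
    int *sendbuf = malloc(sendcount * sizeof(int));
    for (i = 0; i < sendcount; i++)
        sendbuf[i] = rank;

    int *recvcounts = NULL, *displs = NULL, *recvbuf = NULL;
    if (rank == 0) {                        /* significant only at root */
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        int total = 0;
        for (i = 0; i < size; i++) {
            recvcounts[i] = i + 1;          /* must match task i's sendcount */
            displs[i] = total;              /* offsets in elements, not bytes */
            total += recvcounts[i];
        }
        recvbuf = malloc(total * sizeof(int));
    }

    MPI_Gatherv(sendbuf, sendcount, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT,
                0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* recvbuf now holds all contributions in rank order:
           0 1 1 2 2 2 ... */
        int total = size * (size + 1) / 2;
        for (i = 0; i < total; i++)
            printf("%d ", recvbuf[i]);
        printf("\n");
        free(recvcounts); free(displs); free(recvbuf);
    }
    free(sendbuf);
    MPI_Finalize();
    return 0;
}
```

Note that a plain MPI_GATHER could not express this exchange, because each task sends a different number of elements.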
Parameters

sendbuf
    is the starting address of the send buffer (choice) (IN)

sendcount
    is the number of elements in the send buffer (integer) (IN)

sendtype
    is the datatype of the send buffer elements (handle) (IN)

recvbuf
    is the address of the receive buffer (choice, significant only at root) (OUT)

recvcounts
    is an integer array (of length groupsize) that contains the number of elements received from each task (significant only at root) (IN)

displs
    is an integer array (of length groupsize). Entry i specifies the displacement relative to recvbuf at which to place the incoming data from task i (significant only at root) (IN)

recvtype
    is the datatype of the receive buffer elements (handle, significant only at root) (IN)

root
    is the rank of the receiving task (integer) (IN)

comm
    is the communicator (handle) (IN)

IERROR
    is the FORTRAN return code. It is always the last argument.

Notes

Displacements are expressed as elements of type recvtype, not as bytes.

The "in place" option for intracommunicators is specified by passing MPI_IN_PLACE as the value of sendbuf at the root. In such a case, sendcount and sendtype are ignored, and the contribution of the root to the gathered vector is assumed to be already in the correct place in the receive buffer.

If comm is an intercommunicator, the call involves all tasks in the intercommunicator, but with one group (group A) defining the root task. All tasks in the other group (group B) pass the same value in root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other tasks in group A pass the value MPI_PROC_NULL in root. Data is gathered from all tasks in group B to the root. The send buffer arguments of the tasks in group B must be consistent with the receive buffer argument of the root. MPI_IN_PLACE is not supported for intercommunicators.

When you use this subroutine in a threads application, make sure all collective operations on a particular communicator occur in the same order at each task.
See IBM Parallel Environment for AIX: MPI Programming Guide for more information on programming with MPI in a threads environment.

In the 64-bit library, this function uses a shared memory optimization among the tasks on a node. This optimization is discussed in the chapter "Using shared memory" of IBM Parallel Environment for AIX: MPI Programming Guide, and is enabled by default. This optimization is not available to 32-bit programs.

Errors

Fatal errors:

Invalid communicator

Invalid count(s)
    count < 0

Invalid datatype(s)

Type not committed

Invalid root
    For an intracommunicator: root < 0 or root >= groupsize
    For an intercommunicator: root < 0 and is neither MPI_ROOT nor MPI_PROC_NULL, or root >= groupsize of the remote group

Unequal message lengths

Invalid use of MPI_IN_PLACE

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Related information

MPE_IGATHER
MPI_GATHER