MPI_ALLGATHERV, MPI_Allgatherv

Purpose

Collects individual messages from each task in comm and distributes the
resulting message to all tasks. Messages can have different sizes and
displacements.

C synopsis

#include <mpi.h>
int MPI_Allgatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype,
                   void* recvbuf, int *recvcounts, int *displs,
                   MPI_Datatype recvtype, MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Allgatherv(const void* sendbuf, int sendcount,
                           const MPI::Datatype& sendtype, void* recvbuf,
                           const int recvcounts[], const int displs[],
                           const MPI::Datatype& recvtype) const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_ALLGATHERV(CHOICE SENDBUF, INTEGER SENDCOUNT, INTEGER SENDTYPE,
               CHOICE RECVBUF, INTEGER RECVCOUNTS(*), INTEGER DISPLS(*),
               INTEGER RECVTYPE, INTEGER COMM, INTEGER IERROR)

Parameters

sendbuf
    is the starting address of the send buffer (choice) (IN)
sendcount
    is the number of elements in the send buffer (integer) (IN)
sendtype
    is the datatype of the send buffer elements (handle) (IN)
recvbuf
    is the address of the receive buffer (choice) (OUT)
recvcounts
    is an integer array (of length group size) that contains the number of
    elements received from each task (IN)
displs
    is an integer array (of length group size). Entry i specifies the
    displacement (relative to recvbuf) at which to place the incoming data
    from task i (IN)
recvtype
    is the datatype of the receive buffer elements (handle) (IN)
comm
    is the communicator (handle) (IN)
IERROR
    is the FORTRAN return code. It is always the last argument.

Description

This subroutine collects individual messages from each task in comm and
distributes the resulting message to all tasks. Messages can have different
sizes and displacements.

The block of data sent from task j is recvcounts[j] elements long, and is
received by every task and placed in recvbuf at offset displs[j].

The type signature associated with sendcount, sendtype at task j must be
equal to the type signature of recvcounts[j], recvtype at any other task.

The "in place" option for intracommunicators is specified by passing the
value MPI_IN_PLACE to sendbuf at all tasks. The sendcount and sendtype
arguments are ignored. The input data of each task is assumed to be in the
area where that task would receive its own contribution to the receive
buffer. Specifically, the outcome of a call to MPI_ALLGATHERV in the
"in place" case is as if all tasks executed n calls to:

    MPI_GATHERV(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf, recvcounts,
                displs, recvtype, root, comm)

for root = 0, ..., n - 1.

If comm is an intercommunicator, each task in group A contributes a data
item. These items are concatenated and the result is stored at each task in
group B. Conversely, the concatenation of the contributions of the tasks in
group B is stored at each task in group A. The send buffer arguments in
group A must be consistent with the receive buffer arguments in group B,
and vice versa. MPI_IN_PLACE is not supported for intercommunicators.

When you use this subroutine in a threads application, make sure all
collective operations on a particular communicator occur in the same order
at each task. See IBM Parallel Environment for AIX: MPI Programming Guide
for more information on programming with MPI in a threads environment.

Errors

Fatal errors:
    Invalid communicator
    Invalid count(s): count < 0
    Invalid datatype(s)
    Type not committed
    Unequal message lengths
    Invalid use of MPI_IN_PLACE
    MPI not initialized
    MPI already finalized

Develop mode error if:
    None

Related information

    MPE_IALLGATHERV
    MPI_ALLGATHER
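
Example

The following sketch is not part of the reference material above; it is a
minimal illustration of the variable-size gather described in the
Description section, assuming an MPI_COMM_WORLD run where task i
contributes i + 1 integers. The recvcounts and displs arrays describe the
size and placement of each task's block in recvbuf.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, ntasks;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

        /* Task i sends i + 1 integers, each equal to its rank. */
        int sendcount = rank + 1;
        int *sendbuf = malloc(sendcount * sizeof(int));
        for (int i = 0; i < sendcount; i++)
            sendbuf[i] = rank;

        /* recvcounts[i] is the block length for task i; displs[i] is the
           offset (in elements) of that block within recvbuf. */
        int *recvcounts = malloc(ntasks * sizeof(int));
        int *displs     = malloc(ntasks * sizeof(int));
        int total = 0;
        for (int i = 0; i < ntasks; i++) {
            recvcounts[i] = i + 1;
            displs[i] = total;
            total += recvcounts[i];
        }
        int *recvbuf = malloc(total * sizeof(int));

        MPI_Allgatherv(sendbuf, sendcount, MPI_INT,
                       recvbuf, recvcounts, displs, MPI_INT,
                       MPI_COMM_WORLD);

        /* Every task now holds the same concatenation: 0 1 1 2 2 2 ... */
        if (rank == 0) {
            for (int i = 0; i < total; i++)
                printf("%d ", recvbuf[i]);
            printf("\n");
        }

        free(sendbuf); free(recvbuf); free(recvcounts); free(displs);
        MPI_Finalize();
        return 0;
    }

After the call, every task holds the identical concatenation of all
contributions, which is the defining difference from MPI_GATHERV, where
only the root receives the result.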