MPI_ALLTOALLV, MPI_Alltoallv

Purpose

Sends a distinct message from each task to every task. Messages can have
different sizes and displacements.

C synopsis

#include <mpi.h>
int MPI_Alltoallv(void* sendbuf, int *sendcounts, int *sdispls,
                  MPI_Datatype sendtype, void* recvbuf, int *recvcounts,
                  int *rdispls, MPI_Datatype recvtype, MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Alltoallv(const void* sendbuf, const int sendcounts[],
                          const int sdispls[], const MPI::Datatype& sendtype,
                          void* recvbuf, const int recvcounts[],
                          const int rdispls[], const MPI::Datatype& recvtype)
                          const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_ALLTOALLV(CHOICE SENDBUF, INTEGER SENDCOUNTS(*),
              INTEGER SDISPLS(*), INTEGER SENDTYPE, CHOICE RECVBUF,
              INTEGER RECVCOUNTS(*), INTEGER RDISPLS(*), INTEGER RECVTYPE,
              INTEGER COMM, INTEGER IERROR)

Parameters

sendbuf
    is the starting address of the send buffer (choice) (IN)
sendcounts
    integer array (of length groupsize) specifying the number of elements
    to send to each task (IN)
sdispls
    integer array (of length groupsize). Entry j specifies the displacement
    relative to sendbuf from which to take the outgoing data destined for
    task j. (IN)
sendtype
    is the datatype of the send buffer elements (handle) (IN)
recvbuf
    is the address of the receive buffer (choice) (OUT)
recvcounts
    integer array (of length groupsize) specifying the number of elements
    to be received from each task (IN)
rdispls
    integer array (of length groupsize). Entry i specifies the displacement
    relative to recvbuf at which to place the incoming data from task i. (IN)
recvtype
    is the datatype of the receive buffer elements (handle) (IN)
comm
    is the communicator (handle) (IN)
IERROR
    is the FORTRAN return code. It is always the last argument.

Description

MPI_ALLTOALLV sends a distinct message from each task to every task.
Messages can have different sizes and displacements.

This subroutine is similar to MPI_ALLTOALL, with the following difference:
MPI_ALLTOALLV gives you the flexibility to specify the location of the
outgoing data with sdispls and the location at which the incoming data is
placed with rdispls.

The block of data sent from task i to task j is sendcounts[j] elements
long, and is received by task j and placed in recvbuf at offset
rdispls[i]. These blocks do not all have to be the same size.

The type signature associated with sendcounts[j], sendtype at task i must
be equal to the type signature associated with recvcounts[i], recvtype at
task j. This means the amount of data sent must be equal to the amount of
data received, pairwise between every pair of tasks. Distinct type maps
between sender and receiver are allowed. All arguments on all tasks are
significant.

MPI_ALLTOALLV does not support MPI_IN_PLACE on either type of
communicator.

If comm is an intercommunicator, the outcome is as if each task in group A
sends a message to each task in group B, and vice versa. The jth send
buffer of task i in group A should be consistent with the ith receive
buffer of task j in group B, and vice versa.

When you use this subroutine in a threaded application, make sure all
collective operations on a particular communicator occur in the same order
at each task. See IBM Parallel Environment for AIX: MPI Programming Guide
for more information on programming with MPI in a threads environment.
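To illustrate how the count and displacement arrays fit together, here is a
minimal sketch (not part of the original manual page) in which task i sends
i+1 integers to every task, so the receive-side counts vary by sender. It
assumes a working MPI installation and a compiler wrapper such as mpicc.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    int *sendcounts = malloc(ntasks * sizeof(int));
    int *recvcounts = malloc(ntasks * sizeof(int));
    int *sdispls    = malloc(ntasks * sizeof(int));
    int *rdispls    = malloc(ntasks * sizeof(int));

    /* Task `rank` sends rank+1 elements to each task j, so it must
     * expect j+1 elements from each task j.  Blocks are packed back
     * to back, so each displacement is the running total so far. */
    int stotal = 0, rtotal = 0;
    for (int j = 0; j < ntasks; j++) {
        sendcounts[j] = rank + 1;
        recvcounts[j] = j + 1;
        sdispls[j] = stotal;
        rdispls[j] = rtotal;
        stotal += sendcounts[j];
        rtotal += recvcounts[j];
    }

    int *sendbuf = malloc(stotal * sizeof(int));
    int *recvbuf = malloc(rtotal * sizeof(int));
    for (int k = 0; k < stotal; k++)
        sendbuf[k] = rank;   /* tag outgoing data with the sender's rank */

    MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                  recvbuf, recvcounts, rdispls, MPI_INT,
                  MPI_COMM_WORLD);

    /* recvbuf now holds j+1 copies of the value j starting at offset
     * rdispls[j], for every task j in the communicator. */

    free(sendcounts); free(recvcounts); free(sdispls); free(rdispls);
    free(sendbuf);    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Note that sendcounts on task i and recvcounts on task j agree pairwise
(both are i+1 for the i-to-j message), satisfying the type-signature
requirement described above.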
Errors

Fatal errors:
    Invalid count(s): count < 0
    Invalid datatype(s)
    Type not committed
    Invalid communicator
    A send and receive have unequal message lengths
    Invalid use of MPI_IN_PLACE
    MPI not initialized
    MPI already finalized

Related information

MPE_IALLTOALLV
MPI_ALLTOALL