MPI_ALLTOALL, MPI_Alltoall

Purpose

Sends a distinct message from each task to every task.

C synopsis

#include <mpi.h>
int MPI_Alltoall(void* sendbuf,int sendcount,MPI_Datatype sendtype,
                 void* recvbuf,int recvcount,MPI_Datatype recvtype,
                 MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Alltoall(const void* sendbuf, int sendcount,
                         const MPI::Datatype& sendtype, void* recvbuf,
                         int recvcount, const MPI::Datatype& recvtype) const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_ALLTOALL(CHOICE SENDBUF,INTEGER SENDCOUNT,INTEGER SENDTYPE,
             CHOICE RECVBUF,INTEGER RECVCOUNT,INTEGER RECVTYPE,
             INTEGER COMM,INTEGER IERROR)

Parameters

sendbuf
    is the starting address of the send buffer (choice) (IN)

sendcount
    is the number of elements sent to each task (integer) (IN)

sendtype
    is the datatype of the send buffer elements (handle) (IN)

recvbuf
    is the address of the receive buffer (choice) (OUT)

recvcount
    is the number of elements received from any task (integer) (IN)

recvtype
    is the datatype of the receive buffer elements (handle) (IN)

comm
    is the communicator (handle) (IN)

IERROR
    is the FORTRAN return code. It is always the last argument.

Description

MPI_ALLTOALL sends a distinct message from each task to every task. The jth
block of data sent from task i is received by task j and placed in the ith
block of the buffer recvbuf.

The type signature associated with sendcount, sendtype at a task must be
equal to the type signature associated with recvcount, recvtype at any other
task. This means the amount of data sent must be equal to the amount of data
received, pairwise, between every pair of tasks. The type maps can be
different.

All arguments on all tasks are significant. MPI_ALLTOALL does not support
MPI_IN_PLACE on either type of communicator.

If comm is an intercommunicator, the outcome is as if each task in group A
sends a message to each task in group B, and vice versa. The jth send buffer
of task i in group A should be consistent with the ith receive buffer of
task j in group B, and vice versa.

When MPI_ALLTOALL is executed on an intercommunicator, the number of data
items sent from tasks in group A to tasks in group B does not need to be
equal to the number of items sent in the reverse direction. In particular,
you can have unidirectional communication by specifying sendcount = 0 in the
reverse direction.

When you use this subroutine in a threads application, make sure all
collective operations on a particular communicator occur in the same order
at each task. See IBM Parallel Environment for AIX: MPI Programming Guide
for more information on programming with MPI in a threads environment.

Errors

Fatal errors:

Unequal lengths
Invalid count(s)
    count < 0
Invalid datatype(s)
    Type not committed
Invalid communicator
Unequal message lengths
Invalid use of MPI_IN_PLACE
MPI not initialized
MPI already finalized

Develop mode error if:

Inconsistent message lengths

Related information

MPE_IALLTOALL
MPI_ALLTOALLV
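
Sample program

The following sketch (not part of the formal binding above) shows a typical
MPI_ALLTOALL exchange, assuming an intracommunicator (MPI_COMM_WORLD) and one
MPI_INT sent to each destination task; the buffer sizes and data values are
illustrative only. Because sendcount and recvcount are both 1, each buffer
must hold at least one element per task in the communicator.

/* Minimal sketch: every task sends one distinct integer to every task.
   Error handling is omitted for brevity. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, ntasks, i;
    int *sendbuf, *recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* One element per destination task, so each buffer holds
       ntasks elements. */
    sendbuf = (int *)malloc(ntasks * sizeof(int));
    recvbuf = (int *)malloc(ntasks * sizeof(int));

    /* Block j of sendbuf is delivered to task j; after the call,
       block i of recvbuf holds the data that task i sent here. */
    for (i = 0; i < ntasks; i++)
        sendbuf[i] = rank * 100 + i;

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT,
                 MPI_COMM_WORLD);

    for (i = 0; i < ntasks; i++)
        printf("task %d received %d from task %d\n", rank, recvbuf[i], i);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

With 2 tasks, task 0 sends {0, 1} and receives {0, 100}, while task 1 sends
{100, 101} and receives {1, 101}: block j of each send buffer arrives as
block i of the receiving task's recvbuf, as described above.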