MPI_SCATTER, MPI_Scatter

Purpose

Distributes individual messages from root to each task in comm.

C synopsis

#include <mpi.h>
int MPI_Scatter(void* sendbuf, int sendcount, MPI_Datatype sendtype,
                void* recvbuf, int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Scatter(const void* sendbuf, int sendcount,
                        const MPI::Datatype& sendtype, void* recvbuf,
                        int recvcount, const MPI::Datatype& recvtype,
                        int root) const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_SCATTER(CHOICE SENDBUF, INTEGER SENDCOUNT, INTEGER SENDTYPE,
            CHOICE RECVBUF, INTEGER RECVCOUNT, INTEGER RECVTYPE,
            INTEGER ROOT, INTEGER COMM, INTEGER IERROR)

Description

MPI_SCATTER distributes individual messages from root to each task in comm. This subroutine is the inverse operation of MPI_GATHER.

The type signature associated with sendcount and sendtype at the root must be equal to the type signature associated with recvcount and recvtype at all tasks. (Type maps can be different.) This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.

The following is information regarding MPI_SCATTER arguments and tasks:
* On the task root, all arguments to the function are significant.
* On other tasks, only the arguments recvbuf, recvcount, recvtype, root, and comm are significant.
* The argument root must be the same on all tasks.

A call where the specification of counts and types causes any location on the root to be read more than once is erroneous.

The "in place" option for intracommunicators is specified by passing MPI_IN_PLACE as the value of recvbuf at the root. In this case, recvcount and recvtype are ignored, and root "sends" no data to itself. The scattered vector is still assumed to contain n segments, where n is the group size. The rootth segment, which root would otherwise "send to itself," is not moved.

If comm is an intercommunicator, the call involves all tasks in the intercommunicator, but with one group (group A) defining the root task. All tasks in the other group (group B) pass the same value in root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other tasks in group A pass the value MPI_PROC_NULL in root. Data is scattered from the root to all tasks in group B. The receive buffer arguments of the tasks in group B must be consistent with the send buffer argument of the root. MPI_IN_PLACE is not supported for intercommunicators.

When you use this subroutine in a threads application, make sure all collective operations on a particular communicator occur in the same order at each task. See IBM Parallel Environment for AIX: MPI Programming Guide for more information on programming with MPI in a threads environment.

Parameters

sendbuf is the address of the send buffer (choice, significant only at root) (IN)
sendcount is the number of elements to be sent to each task (integer, significant only at root) (IN)
sendtype is the datatype of the send buffer elements (handle, significant only at root) (IN)
recvbuf is the address of the receive buffer (choice) (OUT)
recvcount is the number of elements in the receive buffer (integer) (IN)
recvtype is the datatype of the receive buffer elements (handle) (IN)
root is the rank of the sending task (integer) (IN)
comm is the communicator (handle) (IN)
IERROR is the FORTRAN return code. It is always the last argument.
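Example

The following sketch (C) illustrates a typical MPI_SCATTER call on an intracommunicator: the root builds a send buffer containing one segment per task, and every task, including the root, receives its own segment. The segment size NELEMS, the use of rank 0 as root, and the MPI_COMM_WORLD communicator are assumptions made only for this example; they are not requirements of the subroutine.

/* Sketch: scatter NELEMS ints from the root to each task. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NELEMS 4   /* elements sent to each task (assumed for this example) */

int main(int argc, char *argv[])
{
    int rank, ntasks;
    int root = 0;
    int *sendbuf = NULL;
    int recvbuf[NELEMS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    if (rank == root) {
        /* Only the root needs a send buffer: ntasks segments of NELEMS ints. */
        sendbuf = malloc(ntasks * NELEMS * sizeof(int));
        for (int i = 0; i < ntasks * NELEMS; i++)
            sendbuf[i] = i;
    }

    /* Segment i of sendbuf is delivered to the task with rank i;
       sendbuf, sendcount, and sendtype are significant only at the root. */
    MPI_Scatter(sendbuf, NELEMS, MPI_INT,
                recvbuf, NELEMS, MPI_INT,
                root, MPI_COMM_WORLD);

    printf("task %d received first element %d\n", rank, recvbuf[0]);

    if (rank == root)
        free(sendbuf);
    MPI_Finalize();
    return 0;
}

As described above, the root could instead pass MPI_IN_PLACE as recvbuf (recvcount and recvtype are then ignored), in which case its own segment of sendbuf is left in place rather than copied.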
Notes

In the 64-bit library, this function uses a shared memory optimization among the tasks on a node. This optimization is discussed in the chapter "Using shared memory" of IBM Parallel Environment for AIX: MPI Programming Guide, and is enabled by default. This optimization is not available to 32-bit programs.

Errors

Fatal errors:

Invalid communicator
Invalid count(s): count < 0
Invalid datatype(s)
Type not committed
Invalid root:
  For an intracommunicator: root < 0 or root >= groupsize
  For an intercommunicator: root < 0 and is neither MPI_ROOT nor MPI_PROC_NULL, or root >= groupsize of the remote group
Unequal message lengths
Invalid use of MPI_IN_PLACE
MPI not initialized
MPI already finalized

Develop mode error if:

Inconsistent root
Inconsistent message length

Related information

MPE_ISCATTER
MPI_GATHER
MPI_SCATTER