MPI_SCATTERV, MPI_Scatterv

Purpose

Distributes individual messages from root to each task in comm. Messages can have different sizes and displacements.

C synopsis

#include <mpi.h>
int MPI_Scatterv(void* sendbuf, int *sendcounts, int *displs,
                 MPI_Datatype sendtype, void* recvbuf, int recvcount,
                 MPI_Datatype recvtype, int root, MPI_Comm comm);

C++ synopsis

#include <mpi.h>
void MPI::Comm::Scatterv(const void* sendbuf, const int sendcounts[],
                         const int displs[], const MPI::Datatype& sendtype,
                         void* recvbuf, int recvcount,
                         const MPI::Datatype& recvtype, int root) const;

FORTRAN synopsis

include 'mpif.h' or use mpi
MPI_SCATTERV(CHOICE SENDBUF, INTEGER SENDCOUNTS(*), INTEGER DISPLS(*),
             INTEGER SENDTYPE, CHOICE RECVBUF, INTEGER RECVCOUNT,
             INTEGER RECVTYPE, INTEGER ROOT, INTEGER COMM, INTEGER IERROR)

Description

This subroutine distributes individual messages from root to each task in comm. Messages can have different sizes and displacements. Because sendcounts is an array, a different amount of data can be sent to each task. The displs array gives you the flexibility to take the data destined for each task from arbitrary positions in the send buffer on the root.

The type signature of sendcounts[i], sendtype at the root must be equal to the type signature of recvcount, recvtype at task i. (The type maps can be different.) This means the amount of data sent must be equal to the amount of data received, pairwise between each task and the root. Distinct type maps between sender and receiver are allowed.
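For illustration, the following is a minimal sketch, not part of the original reference entry, in which the root sends i + 1 integers to task i. The segment layout and buffer contents are assumptions chosen for the example.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, ntasks, root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* Task i receives i + 1 integers, so recvcount here must match
       sendcounts[i] at the root (pairwise equal type signatures). */
    int recvcount = rank + 1;
    int *recvbuf = malloc(recvcount * sizeof(int));

    int *sendbuf = NULL, *sendcounts = NULL, *displs = NULL;
    if (rank == root) {
        sendcounts = malloc(ntasks * sizeof(int));
        displs = malloc(ntasks * sizeof(int));
        /* Lay the segments out back to back so that no location in
           sendbuf is read more than once. */
        int total = 0;
        for (int i = 0; i < ntasks; i++) {
            sendcounts[i] = i + 1;
            displs[i] = total;
            total += sendcounts[i];
        }
        sendbuf = malloc(total * sizeof(int));
        for (int i = 0; i < total; i++)
            sendbuf[i] = i;
    }

    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_INT,
                 recvbuf, recvcount, MPI_INT, root, MPI_COMM_WORLD);

    printf("Task %d received %d element(s), first = %d\n",
           rank, recvcount, recvbuf[0]);

    free(recvbuf);
    if (rank == root) {
        free(sendbuf);
        free(sendcounts);
        free(displs);
    }
    MPI_Finalize();
    return 0;
}

Note that sendbuf, sendcounts, and displs may be left as NULL on non-root tasks, because those arguments are significant only at the root.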
The following is information regarding MPI_SCATTERV arguments and tasks:

* On the task root, all arguments to the function are significant.
* On other tasks, only the arguments recvbuf, recvcount, recvtype, root, and comm are significant.
* The argument root must be the same on all tasks.

A call where the specification of sizes, types, and displacements causes any location on the root to be read more than once is erroneous.

The "in place" option for intracommunicators is specified by passing MPI_IN_PLACE as the value of recvbuf at the root. In such a case, recvcount and recvtype are ignored, and root "sends" no data to itself. The scattered vector is still assumed to contain n segments, where n is the group size. The rootth segment, which root would otherwise send to itself, is not moved. (A sketch of this option appears at the end of this entry.)

If comm is an intercommunicator, the call involves all tasks in the intercommunicator, but with one group (group A) defining the root task. All tasks in the other group (group B) pass the same value in root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other tasks in group A pass the value MPI_PROC_NULL in root. Data is scattered from the root to all tasks in group B. The receive buffer arguments of the tasks in group B must be consistent with the send buffer argument of the root. MPI_IN_PLACE is not supported for intercommunicators.

When you use this subroutine in a threaded application, make sure all collective operations on a particular communicator occur in the same order at each task. See IBM Parallel Environment for AIX: MPI Programming Guide for more information on programming with MPI in a threads environment.

Parameters

sendbuf
is the address of the send buffer (choice, significant only at root) (IN)

sendcounts
is an integer array (of length groupsize) that contains the number of elements to send to each task (significant only at root) (IN)

displs
is an integer array (of length groupsize). Entry i specifies the displacement relative to sendbuf from which to send the outgoing data to task i (significant only at root) (IN)

sendtype
is the datatype of the send buffer elements (handle, significant only at root) (IN)

recvbuf
is the address of the receive buffer (choice) (OUT)

recvcount
is the number of elements in the receive buffer (integer) (IN)

recvtype
is the datatype of the receive buffer elements (handle) (IN)

root
is the rank of the sending task (integer) (IN)

comm
is the communicator (handle) (IN)

IERROR
is the FORTRAN return code. It is always the last argument.

Notes

In the 64-bit library, this function uses a shared memory optimization among the tasks on a node. This optimization is discussed in the chapter "Using shared memory" of IBM Parallel Environment for AIX: MPI Programming Guide, and is enabled by default. This optimization is not available to 32-bit programs.

Errors

Fatal errors:

Invalid communicator

Invalid count(s)
count < 0

Invalid datatype(s)

Type not committed

Invalid root
For an intracommunicator: root < 0 or root >= groupsize
For an intercommunicator: root < 0 and is neither MPI_ROOT nor MPI_PROC_NULL, or root >= groupsize of the remote group

Unequal message lengths

Invalid use of MPI_IN_PLACE

MPI not initialized

MPI already finalized

Develop mode error if:

Inconsistent root

Related information

MPI_GATHER
MPI_SCATTER
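The "in place" option described above can be sketched as follows. This is an illustrative example, not part of the original reference entry; the segment size and buffer contents are assumptions. The root passes MPI_IN_PLACE as recvbuf, its recvcount and recvtype are ignored, and its own segment stays where it already resides in sendbuf.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SEG 2  /* elements per segment, chosen for the example */

int main(int argc, char *argv[])
{
    int rank, ntasks, root = 0;
    int recvbuf[SEG];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    if (rank == root) {
        /* The scattered vector still contains ntasks segments; the
           rootth segment is simply not moved. */
        int *sendbuf = malloc(ntasks * SEG * sizeof(int));
        int *sendcounts = malloc(ntasks * sizeof(int));
        int *displs = malloc(ntasks * sizeof(int));
        for (int i = 0; i < ntasks; i++) {
            sendcounts[i] = SEG;
            displs[i] = i * SEG;
            for (int j = 0; j < SEG; j++)
                sendbuf[i * SEG + j] = 100 * i + j;
        }
        /* recvcount and recvtype are ignored at the root when
           MPI_IN_PLACE is passed as recvbuf. */
        MPI_Scatterv(sendbuf, sendcounts, displs, MPI_INT,
                     MPI_IN_PLACE, 0, MPI_INT, root, MPI_COMM_WORLD);
        printf("Task %d (root) kept sendbuf[%d] = %d\n",
               rank, root * SEG, sendbuf[root * SEG]);
        free(sendbuf);
        free(sendcounts);
        free(displs);
    } else {
        /* Send-side arguments are not significant on non-root tasks. */
        MPI_Scatterv(NULL, NULL, NULL, MPI_INT,
                     recvbuf, SEG, MPI_INT, root, MPI_COMM_WORLD);
        printf("Task %d received %d and %d\n",
               rank, recvbuf[0], recvbuf[1]);
    }

    MPI_Finalize();
    return 0;
}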