What is the best practice for protecting a shared communicator in an MPI_THREAD_MULTIPLE context?
I am writing some code on top of an existing library that uses MPI_THREAD_SERIALIZED internally. However, I need to use MPI_THREAD_MULTIPLE. For ease of presentation, let's say each process needs to compute with two threads. Each thread has its own duplicate of MPI_COMM_WORLD so that they can use some non-shared objects concurrently. This works totally fine.
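For reference, a minimal sketch of the setup I described, assuming MPI_THREAD_MULTIPLE is granted and the two communicators are duplicated on the main thread before the worker threads start (all names are illustrative):

```cpp
#include <mpi.h>
#include <cstdio>
#include <thread>

int main(int argc, char** argv) {
    // Request full multithreaded support and check what was actually provided.
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        std::fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    // Duplicate the communicator once per thread *before* spawning the threads,
    // so each thread has its own private communication context.
    MPI_Comm comms[2];
    MPI_Comm_dup(MPI_COMM_WORLD, &comms[0]);
    MPI_Comm_dup(MPI_COMM_WORLD, &comms[1]);

    auto worker = [&](int tid) {
        int rank = 0;
        MPI_Comm_rank(comms[tid], &rank);
        // ... thread-local MPI traffic on comms[tid] goes here ...
        MPI_Barrier(comms[tid]);
    };

    std::thread t0(worker, 0), t1(worker, 1);
    t0.join();
    t1.join();

    MPI_Comm_free(&comms[0]);
    MPI_Comm_free(&comms[1]);
    MPI_Finalize();
    return 0;
}
```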
Problem sending vectors of different sizes in MPI C++: each receiving vector ends up with size 0
The task sounds simple: "In each process (except 0) you have a set of 1 to 5 numbers. Send each set to the main process with MPI_Bsend, and print those sets in the main process in ascending order of the ranks of the sending processes. To get the size of the sent sets, use MPI_Get_count."
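For context, this is a minimal sketch of the pattern I understand the task to require: MPI_Bsend with an explicitly attached buffer on the worker ranks, and MPI_Probe followed by MPI_Get_count on rank 0 to size each receive. The data values and per-rank counts are purely illustrative.

```cpp
#include <mpi.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMm_WORLD, &size);

    if (rank != 0) {
        // Each non-root rank owns between 1 and 5 integers (illustrative data).
        int count = 1 + rank % 5;
        std::vector<int> data(count, rank);

        // MPI_Bsend requires a user-attached buffer large enough for the message.
        int pack_size = 0;
        MPI_Pack_size(count, MPI_INT, MPI_COMM_WORLD, &pack_size);
        int buf_size = pack_size + MPI_BSEND_OVERHEAD;
        char* buf = static_cast<char*>(std::malloc(buf_size));
        MPI_Buffer_attach(buf, buf_size);

        MPI_Bsend(data.data(), count, MPI_INT, 0, 0, MPI_COMM_WORLD);

        // Detach blocks until the buffered message has been handed off.
        void* detached = nullptr;
        int detached_size = 0;
        MPI_Buffer_detach(&detached, &detached_size);
        std::free(buf);
    } else {
        // Receive in ascending rank order; probe first to learn the size.
        for (int src = 1; src < size; ++src) {
            MPI_Status status;
            MPI_Probe(src, 0, MPI_COMM_WORLD, &status);

            int count = 0;
            MPI_Get_count(&status, MPI_INT, &count);

            std::vector<int> recv(count);
            MPI_Recv(recv.data(), count, MPI_INT, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            std::printf("rank %d sent %d ints:", src, count);
            for (int v : recv) std::printf(" %d", v);
            std::printf("\n");
        }
    }

    MPI_Finalize();
    return 0;
}
```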
However, I’m doing something wrong. The program shows an error like this for each process from 1 to 7: