MPI: calculate tensor * matrix with the tensor defined globally using the Global Arrays library
I’m trying to use the Global Arrays library with MPI in C++, because it allows large variables to be defined only once, globally, and then accessed by all MPI processes. So I created a little program which does the following math
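In case a concrete pattern helps, below is a minimal, untested sketch of the "define once, read from every rank" idea, assuming the classic Global Arrays C API (GA_Initialize, MA_init, NGA_Create, NGA_Put/NGA_Get, GA_Sync); the 4x4x4 dimensions, the fill values, and the per-rank slice are placeholders, and the actual tensor-times-matrix product is left out.

    #include <mpi.h>
    #include <stdio.h>
    #include "ga.h"
    #include "macdecls.h"

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        GA_Initialize();                      /* GA runs on top of MPI */
        MA_init(C_DBL, 1000000, 1000000);     /* memory allocator GA uses internally */

        int dims[3] = {4, 4, 4};              /* placeholder 4x4x4 tensor */
        int g_t = NGA_Create(C_DBL, 3, dims, "tensor", NULL);
        GA_Zero(g_t);

        if (GA_Nodeid() == 0) {               /* fill the tensor exactly once */
            double buf[64];
            for (int i = 0; i < 64; ++i) buf[i] = (double)i;
            int lo[3] = {0, 0, 0}, hi[3] = {3, 3, 3}, ld[2] = {4, 4};
            NGA_Put(g_t, lo, hi, buf, ld);
        }
        GA_Sync();                            /* make the data visible to every rank */

        /* each rank can now read whatever patch it needs for its part of the product */
        int me = GA_Nodeid();
        double slice[16];
        int lo[3] = {me % 4, 0, 0}, hi[3] = {me % 4, 3, 3}, ld[2] = {4, 4};
        NGA_Get(g_t, lo, hi, slice, ld);
        printf("rank %d read a slice starting with %g\n", me, slice[0]);

        GA_Destroy(g_t);
        GA_Terminate();
        MPI_Finalize();
        return 0;
    }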
Usage of MPI_Errhandler_free()
Section 9.3.5 of the MPI Standard documentation states:
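For concreteness, a typical lifecycle of a user error handler looks roughly like the hedged sketch below: the handle returned by MPI_Comm_create_errhandler can be freed right after it is attached, because MPI_Errhandler_free only marks it for deallocation (and sets the variable to MPI_ERRHANDLER_NULL) while the communicator keeps using it.

    #include <mpi.h>
    #include <stdio.h>

    /* user-defined handler: report the error class and return */
    static void report_error(MPI_Comm *comm, int *errcode, ...) {
        int errclass;
        (void)comm;
        MPI_Error_class(*errcode, &errclass);
        fprintf(stderr, "MPI error of class %d\n", errclass);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        MPI_Errhandler errh;
        MPI_Comm_create_errhandler(report_error, &errh);
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, errh);

        /* freeing here does not disable the handler on MPI_COMM_WORLD;
           it only marks the handle for deallocation */
        MPI_Errhandler_free(&errh);

        /* ... communication that may raise errors ... */

        MPI_Finalize();
        return 0;
    }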
Why would my value be wrong at the 4th decimal place when I use MPI to message-pass it between 2 nodes?
I am using MPI to message-pass data values, some of them with many decimal places, and I need them all to be accurate to around 6-8 decimal places. I have several implementations of my program (Python, C++ with OpenMP, CUDA) that I have run on several different systems and architectures, so I already know what the data values should be and that it is possible to achieve these exact values with different libraries on different architectures.
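One common cause of a mismatch around the 4th decimal place is single precision creeping in somewhere (a float carries only about 7 significant decimal digits) or an MPI datatype that does not match the C type. A minimal double-precision round trip between two ranks, with a made-up value, for comparison:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double value = 0.0;
        if (rank == 0) {
            value = 3.14159265358979;                 /* placeholder value */
            /* the datatype must match the C type: MPI_DOUBLE for a double */
            MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("received %.15g\n", value);        /* print full precision */
        }

        MPI_Finalize();
        return 0;
    }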
MPI_Spawn with every process freezing
I’m practicing with MPI exercises. In this one I must separate the static processes into two groups, odd and even. Then each group should create 2 processes. The thing is that if I only do that with one group (odd parents or even parents) it works, but not with both of them. This is the fragment that is giving me trouble:
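Without the full fragment this is only a guess, but a common cause of this kind of hang is that MPI_Comm_spawn is collective over the communicator passed to it, so each group has to spawn over its own split communicator rather than MPI_COMM_WORLD. A hedged sketch, with the child executable name "child" as a placeholder:

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* split the static processes into an odd group and an even group */
        MPI_Comm group;
        MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &group);

        /* the spawn is collective over 'group': every member must call it,
           and only the arguments on the group's root (rank 0 of 'group') matter */
        MPI_Comm children;
        MPI_Comm_spawn("child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, group, &children, MPI_ERRCODES_IGNORE);

        /* ... talk to the children over the 'children' intercommunicator ... */

        MPI_Comm_free(&group);
        MPI_Finalize();
        return 0;
    }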
C code for sorting 4-byte integers in a file using MPI gives an assertion-failed error
Below, I have a C code that is supposed to take in a file with various 4-byte integers and sort them in ascending order using MPI functions:
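The assertion failure usually traces back to mismatched counts or undersized buffers in the collectives, so a stripped-down version of the pattern may be useful for comparison: rank 0 reads the integers, equal chunks are scattered, each rank sorts locally, and the chunks are gathered back (the final merge is omitted). The file name and the assumption that the count divides evenly among the ranks are placeholders.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int n = 0;
        int *all = NULL;
        if (rank == 0) {
            FILE *f = fopen("integers.bin", "rb");   /* hypothetical input file */
            fseek(f, 0, SEEK_END);
            n = (int)(ftell(f) / sizeof(int));       /* number of 4-byte integers */
            fseek(f, 0, SEEK_SET);
            all = malloc(n * sizeof(int));
            fread(all, sizeof(int), n, f);
            fclose(f);
        }
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* assume n is divisible by size for the sake of the sketch */
        int chunk = n / size;
        int *local = malloc(chunk * sizeof(int));
        MPI_Scatter(all, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);

        qsort(local, chunk, sizeof(int), cmp_int);   /* each rank sorts its chunk */

        MPI_Gather(local, chunk, MPI_INT, all, chunk, MPI_INT, 0, MPI_COMM_WORLD);
        /* rank 0 would still need a k-way merge of the sorted chunks here */

        free(local);
        if (rank == 0) free(all);
        MPI_Finalize();
        return 0;
    }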
MPI – One MPI_Send to multiple listeners
The thing is that I’m writing a program in C using MPI. The objective is to have N processes in the program; once started, one process sends a token to a random different process, and this is repeated 10 times. The first time, the rank 0 process is the one that has to do it. This is an exercise, and after spending several hours searching and trying different approaches, I’m asking here. I’ve tested non-blocking calls and MPI_Probe … but the main question is why the program freezes if I try this with more than one processor (part of the code is attached), while with two it runs ok.
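For comparison, here is a hedged sketch of one way to arrange the random token pass so it cannot deadlock: every rank that is not holding the token blocks in MPI_Recv with MPI_ANY_SOURCE, and after the last hop the holder sends an explicit stop message to everyone, so no rank is left waiting for a send that never comes. The tags and the hop count are placeholders.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define TAG_TOKEN 1
    #define TAG_STOP  2
    #define MAX_HOPS  10

    /* pick a random destination different from 'me' (assumes at least 2 ranks) */
    static int random_dest(int me, int size) {
        int d;
        do { d = rand() % size; } while (d == me);
        return d;
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        srand((unsigned)time(NULL) + rank);          /* different seed per rank */

        if (rank == 0) {                             /* rank 0 makes the first hop */
            int hop = 1;
            MPI_Send(&hop, 1, MPI_INT, random_dest(rank, size), TAG_TOKEN, MPI_COMM_WORLD);
        }

        for (;;) {
            int msg;
            MPI_Status st;
            /* everyone waits for either the token or the stop signal */
            MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;

            if (msg < MAX_HOPS) {                    /* forward the token one more hop */
                int hop = msg + 1;
                MPI_Send(&hop, 1, MPI_INT, random_dest(rank, size), TAG_TOKEN, MPI_COMM_WORLD);
            } else {                                 /* last hop: release everyone */
                for (int r = 0; r < size; ++r)
                    if (r != rank) MPI_Send(&msg, 1, MPI_INT, r, TAG_STOP, MPI_COMM_WORLD);
                break;
            }
        }

        MPI_Finalize();
        return 0;
    }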
MPI prime number function
I have some doubts about the following function, which I wrote to compute all the prime numbers in a given interval. The resulting vector is not ordered, but this is not a problem. I have some doubts about the consistency of the result between different ranks, in particular about the casting part between ranks.
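In case a reference layout helps with the part that moves results between ranks, this is a rough sketch of the usual pattern: split the interval across ranks, collect the per-rank counts with MPI_Gather, and concatenate the variable-length vectors with MPI_Gatherv (the result is likewise unordered). The interval bounds and the trial-division test are placeholders for whatever the original function does.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int is_prime(int n) {
        if (n < 2) return 0;
        for (int d = 2; (long)d * d <= n; ++d)
            if (n % d == 0) return 0;
        return 1;
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int lo = 2, hi = 1000;                      /* placeholder interval */
        int total = hi - lo + 1;

        /* each rank takes one contiguous slice of the interval */
        int per = (total + size - 1) / size;
        int my_lo = lo + rank * per;
        int my_hi = my_lo + per - 1;
        if (my_hi > hi) my_hi = hi;

        int *mine = malloc(per * sizeof(int));
        int count = 0;
        for (int n = my_lo; n <= my_hi; ++n)
            if (is_prime(n)) mine[count++] = n;

        /* gather the per-rank counts first, then the values themselves */
        int *counts = NULL, *displs = NULL, *primes = NULL;
        if (rank == 0) counts = malloc(size * sizeof(int));
        MPI_Gather(&count, 1, MPI_INT, counts, 1, MPI_INT, 0, MPI_COMM_WORLD);

        int ntotal = 0;
        if (rank == 0) {
            displs = malloc(size * sizeof(int));
            for (int r = 0; r < size; ++r) { displs[r] = ntotal; ntotal += counts[r]; }
            primes = malloc(ntotal * sizeof(int));
        }
        MPI_Gatherv(mine, count, MPI_INT, primes, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("found %d primes in [%d, %d]\n", ntotal, lo, hi);
        MPI_Finalize();
        return 0;
    }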
Split MPI AlltoAllV into smaller segments
Suppose every rank’s send buffer is made up of smaller, not necessarily full, subarrays of equal type.
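One way to read "smaller segments" is to issue several MPI_Alltoallv calls, one per subarray, advancing the displacements between calls. A hedged sketch of that loop, assuming each rank sends nseg equal segments of seglen elements to every destination (both names are made up here):

    #include <mpi.h>
    #include <stdlib.h>

    /* Split one big all-to-all exchange into 'nseg' smaller MPI_Alltoallv calls.
       Each rank sends nseg * seglen ints to every other rank; each call moves
       only one segment of seglen ints per destination. */
    static void alltoallv_in_segments(const int *sendbuf, int *recvbuf,
                                      int seglen, int nseg, MPI_Comm comm) {
        int size;
        MPI_Comm_size(comm, &size);

        int *scounts = malloc(size * sizeof(int));
        int *sdispls = malloc(size * sizeof(int));
        int *rcounts = malloc(size * sizeof(int));
        int *rdispls = malloc(size * sizeof(int));

        for (int s = 0; s < nseg; ++s) {
            for (int r = 0; r < size; ++r) {
                scounts[r] = seglen;
                rcounts[r] = seglen;
                /* the block for rank r starts at r * nseg * seglen;
                   segment s sits s * seglen elements further in */
                sdispls[r] = (r * nseg + s) * seglen;
                rdispls[r] = (r * nseg + s) * seglen;
            }
            MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_INT,
                          recvbuf, rcounts, rdispls, MPI_INT, comm);
        }

        free(scounts); free(sdispls); free(rcounts); free(rdispls);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int size;
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int seglen = 4, nseg = 3;              /* made-up segment shape */
        int n = size * nseg * seglen;
        int *sendbuf = malloc(n * sizeof(int));
        int *recvbuf = malloc(n * sizeof(int));
        for (int i = 0; i < n; ++i) sendbuf[i] = i;

        alltoallv_in_segments(sendbuf, recvbuf, seglen, nseg, MPI_COMM_WORLD);

        free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }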
When I use MPI in my section of the code, the array always has garbage values.
I have a problem with using scatter in MPI.
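Hard to say more without the code, but the two usual scatter pitfalls, which also explain arrays full of garbage values, are that only the root allocates and fills the send buffer while every rank must still provide a valid receive buffer, and that the send count is the count per rank, not the total. A minimal sketch for comparison, with the problem size as a placeholder and assuming it divides evenly among the ranks:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int total = 8;                         /* placeholder problem size */
        int chunk = total / size;                    /* assumes total % size == 0 */

        int *data = NULL;
        if (rank == 0) {                             /* only root owns the full array */
            data = malloc(total * sizeof(int));
            for (int i = 0; i < total; ++i) data[i] = i * i;
        }

        /* every rank, including root, needs its own receive buffer;
           otherwise the local array is left full of garbage values */
        int *part = malloc(chunk * sizeof(int));
        MPI_Scatter(data, chunk, MPI_INT,            /* count is per rank, not total */
                    part, chunk, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d got first element %d\n", rank, part[0]);

        free(part);
        if (rank == 0) free(data);
        MPI_Finalize();
        return 0;
    }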