Blocking P2P comms

Blocking point-to-point communication in OpenFOAM: The safest option for code correctness, but it might not be that efficient!


Module transcript

Continuing from the last module, we left off with two questions:

  • First, should there be a check for correct placement of the tyre? The answer is absolutely yes: in all cases, at some point before attempting to move the vehicle, someone or something needs to check that the tyre was correctly mounted.

  • The next question was whether we’re allowed to carry out some other operations on the car before that check.


And here is where it gets interesting, because the answer actually depends on the specific next operation. Obviously, we shouldn’t start moving the car before checking the tyres, but, for example, we can start mounting other tyres before checking the tyre we already mounted.


I’m aware this over-simplifies the concepts of blocking and non-blocking comms, but the message to take away is that checking for received data is a must; there are benefits, however, in delaying that check when we can afford to.


So, in OpenFOAM, to engage in blocking MPI comms, you construct your parallel streams with Pstream::blocking, which decides which MPI call is executed when you send and receive messages.
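
As a minimal sketch, assuming a recent OpenFOAM version (older releases spell the comms type Pstream::blocking rather than Pstream::commsTypes::blocking, and the neighbour logic below is purely illustrative), this is roughly what that looks like:

```cpp
// Exchange a list between two processors using blocking parallel streams
#include "Pstream.H"
#include "OPstream.H"
#include "IPstream.H"
#include "scalarList.H"

using namespace Foam;

void exchangeWithNeighbour(const label neighbProcNo)
{
    scalarList myData(100, 1.0);

    if (Pstream::myProcNo() < neighbProcNo)
    {
        // Output stream constructed with the blocking comms type;
        // the buffered send is dispatched when toNbr goes out of scope
        OPstream toNbr(Pstream::commsTypes::blocking, neighbProcNo);
        toNbr << myData;
    }
    else
    {
        // The matching input stream blocks until the message has arrived
        scalarList nbrData;
        IPstream fromNbr(Pstream::commsTypes::blocking, neighbProcNo);
        fromNbr >> nbrData;
    }
}
```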


In particular, using the blocking type maps to an asynchronous buffered blocking send. This simply means that the program blocks only until it finishes copying the passed buffer; when exactly the copied content will actually be sent usually cannot be easily deduced, but we know that the buffer can safely be reused after the send call returns.


As for the receiving process, it blocks until the received message is copied into the passed-in buffer, which means the sent data will be available on the receiving process as soon as the blocking receive returns.
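
Under the hood, this behaviour corresponds roughly to MPI’s buffered send paired with a blocking receive. Here is a raw-MPI sketch of both sides, assuming exactly two ranks (the buffer sizing follows the usual MPI_Bsend recipe):

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<double> data(100, 1.0);

    // Attach a user-provided buffer for MPI_Bsend to copy into
    int packSize;
    MPI_Pack_size(100, MPI_DOUBLE, MPI_COMM_WORLD, &packSize);
    int bufSize = packSize + MPI_BSEND_OVERHEAD;
    std::vector<char> sendBuffer(bufSize);
    MPI_Buffer_attach(sendBuffer.data(), bufSize);

    if (rank == 0)
    {
        // Blocks only until 'data' is copied into the attached buffer;
        // 'data' may be reused immediately, but when the message is
        // actually transmitted is up to the implementation
        MPI_Bsend(data.data(), 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    }
    else if (rank == 1)
    {
        // Blocks until the message has been copied into 'data', so the
        // data is usable as soon as this call returns
        MPI_Recv(data.data(), 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    // Detaching blocks until any buffered messages have been transmitted
    void* detachedBuf;
    MPI_Buffer_detach(&detachedBuf, &bufSize);

    MPI_Finalize();
    return 0;
}
```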


OpenFOAM also provides another way to wrap blocking comms through Pstream::scheduled, which calls the standard MPI send function.

This leaves MPI with the task of choosing which exact blocking call to perform. The issue with this is that the choice is implemented differently by different MPI implementations.

With OpenMPI, which I think is the most popular implementation of the MPI standard, it’ll either do an async-buffered send, just like Pstream::blocking, or fall back to a fully blocking synchronous operation.

The decision to go down either path depends on whether there is enough space in the send buffer to accommodate the sent data and the associated MPI metadata.
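
As a rough illustration in raw MPI (the message size here is only indicative; the actual eager limit depends on the implementation and the transport):

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Likely above any typical eager limit (often a few KB)
    std::vector<double> data(1 << 20, 1.0);

    if (rank == 0)
    {
        // Small messages: copied to an internal buffer and sent eagerly,
        // so MPI_Send returns before a matching receive is posted.
        // Large messages like this one: MPI_Send blocks until rank 1
        // posts the matching receive (rendezvous protocol).
        MPI_Send(data.data(), static_cast<int>(data.size()), MPI_DOUBLE,
                 1, 0, MPI_COMM_WORLD);
    }
    else if (rank == 1)
    {
        MPI_Recv(data.data(), static_cast<int>(data.size()), MPI_DOUBLE,
                 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```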


It’s worth noting that both blocking and scheduled comms have a high chance of suffering from deadlocks, which happen when processes wait for messages that never reach them. As a result, the program stalls indefinitely, much like an infinite loop.


To be precise, there are two situations where you might experience a deadlock during P2P blocking comms:

  • First, if a matching send or receive is missing from your program, which always results in a deadlock for the process that ran the rogue MPI operation (see the first sketch after this list).

  • The other case, which is a bit more involved, is when there is a send-receive cycle where the order of operations is incorrect.
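
To make the first case concrete, here is a hypothetical two-rank program where a blocking receive has no matching send, so rank 1 hangs forever:

```cpp
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double value = 0.0;

    if (rank == 1)
    {
        // Rogue operation: no rank ever sends to us, so this blocking
        // receive never returns and rank 1 deadlocks
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    // Rank 0 reaches MPI_Finalize alone; the program never completes
    MPI_Finalize();
    return 0;
}
```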


To understand this, consider two processes that communicate with each other at the same time. If both processes start a blocking send and block, waiting for a matching receive to show up, then both will stagnate, simply because both are busy sending and neither process ever starts a receive.


A simple solution is to start with a receive on one of the processes; this way we can avoid the deadlock situation altogether.
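
A sketch of both versions of this exchange in raw MPI (the routine and its buffers are hypothetical; a synchronous send is used so the deadlock in the naive version is deterministic):

```cpp
#include <mpi.h>

// Exchange n doubles between ranks 0 and 1
void exchange(int rank, double* sendBuf, double* recvBuf, int n)
{
    const int other = 1 - rank;
    const int tag = 0;

    // Deadlock-prone version: both ranks send first, so with a
    // synchronous send neither ever reaches its receive:
    //
    //   MPI_Ssend(sendBuf, n, MPI_DOUBLE, other, tag, MPI_COMM_WORLD);
    //   MPI_Recv(recvBuf, n, MPI_DOUBLE, other, tag, MPI_COMM_WORLD,
    //            MPI_STATUS_IGNORE);

    // Safe ordering: rank 0 sends first, while rank 1 receives first
    if (rank == 0)
    {
        MPI_Ssend(sendBuf, n, MPI_DOUBLE, other, tag, MPI_COMM_WORLD);
        MPI_Recv(recvBuf, n, MPI_DOUBLE, other, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    else
    {
        MPI_Recv(recvBuf, n, MPI_DOUBLE, other, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Ssend(sendBuf, n, MPI_DOUBLE, other, tag, MPI_COMM_WORLD);
    }
}
```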

Note that this actually depends on the MPI implementation, so not all implementations will suffer from a deadlock in this simple case.


For example, almost all implementations will carry out the communication just fine if both processes start a buffered send first, at least for small messages. Also, you can see how this can become much harder to keep track of when you have hundreds of processes doing their sends and receives in different orders.


In terms of statistics, blocking and scheduled communications seem to be used to a similar extent across OpenFOAM forks, except in foam-extend libraries, where blocking comms are used extensively; the reason being that they are safer for program correctness.

Also, considering that the foam-extend code is slightly older than the other forks, we can see that there has been a shift towards non-blocking comms over time, probably because they are a bit more performant.


Note that these numbers are not meant to compare the forks in terms of parallel performance, simply because these forks implement completely different functionality, where relying on a specific type of communication is imposed by the nature of the features they implement.

In the next module, we move on to non-blocking communications to see why they are preferred to the blocking ones.
