MPI notes

mpirun will accept any value of -np: if you ask for, say, 24 processes but only have 8 cores, it will still create 24 processes and time-share the work across the cores.

Blocking send and receive:
MPI_Send and MPI_Recv block; a receive can only complete once the matching send has finished.
For example, take the open chain 0 -> 1 -> 2, where 0 receives from MPI_PROC_NULL and 2 sends to MPI_PROC_NULL. The way it works is:
0 sends to 1, 1 sends to 2, 2 sends to null.
Since 2's send to null completes immediately, 2 is "complete"; 2 can now receive the message from 1, then 1 can receive from 0, and 0 can receive from null.

IF WE HAVE A RING TOPOLOGY 0 -> 1 -> 2 -> 0, every process blocks in its send, the send cycle never ends, and the program hangs. This is called DEADLOCKING.
A nice analogy is voice mail: delivering messages through a buffer helps, but the buffer can fill up.
DEADLOCKING occurs when we have closed loops of blocking communication.

Non-blocking communication:
MPI_Isend(..., request, ...) starts a send and immediately returns so the process can begin work again.
MPI_Irecv(..., request, ...) starts a receive and immediately returns.
MPI_Wait(request) tells the processor to wait (blocking) until the operation attached to request is complete, i.e.

    MPI_Isend(..., &request, ...);
    /* do some work */
    MPI_Wait(&request, &status);  /* now wait */

Cartesian topologies:
Suppose we wanted a Cartesian topology of processors, for example

    6 7 8
    3 4 5
    0 1 2

where each number represents a processor and each processor talks to its neighbours, i.e. processor 0 talks to 1 and 3; processor 4 talks to 1, 3, 5, and 7. This can be achieved by creating a new communicator. Let us now walk through the few basic commands.

MPI_Dims_create tells us the best way to break up the processors over a grid. For example, if we had 12 processors, we might divide them up into a 3x4 rectangle:

    8 9 A B
    4 5 6 7
    0 1 2 3

MPI_Cart_create allows for the creation of Cartesian topologies in arbitrary dimensions.

Derived datatypes:
MPI_Type_vector and MPI_Type_commit are useful for creating our own data types, specifically if we want to send subarrays within an array.