This document discusses collective communications in MPI (Message Passing Interface), including barriers, broadcast, gather/scatter, and reduction operations. It shows how to use MPI_Barrier to synchronize processes, MPI_Bcast for one-to-all communication from a root process, and MPI_Scatter and MPI_Gather for distributing data across processes and collecting it back, and it walks through a scatter example program. Collective operations must be called by every process in the communicator; the classic collectives are blocking, and non-blocking variants (e.g., MPI_Ibcast) were only introduced later, in MPI-3.
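The following is a minimal sketch of the scatter/gather pattern described above, not the document's own example program: rank 0 (assumed as root) scatters one integer to each process, every rank doubles its piece, and MPI_Gather collects the results back at the root. The buffer contents and the doubling step are illustrative assumptions.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = NULL;
    if (rank == 0) {
        /* Only the root needs the full send buffer. */
        sendbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            sendbuf[i] = i * 10;   /* arbitrary illustrative data */
    }

    /* Collective call: every rank, including the root, must call it.
     * Each rank receives one int from the root's buffer. */
    int myval;
    MPI_Scatter(sendbuf, 1, MPI_INT, &myval, 1, MPI_INT,
                0, MPI_COMM_WORLD);

    myval *= 2;  /* each rank works on its own piece */

    int *recvbuf = NULL;
    if (rank == 0)
        recvbuf = malloc(size * sizeof(int));

    /* Collect one int from every rank back at the root. */
    MPI_Gather(&myval, 1, MPI_INT, recvbuf, 1, MPI_INT,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("rank %d returned %d\n", i, recvbuf[i]);
        free(sendbuf);
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and run under mpirun, each rank receives exactly one element because the send count (1) is per-receiver, a common point of confusion with MPI_Scatter; no explicit MPI_Barrier is needed here, since the gather itself orders the root's output after every rank has contributed.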