MPI General CC
OpenMPI, Demo Version, Collective Communication
Students: Phung Dinh Vu, Vu Minh Thanh
Lecturer: Nguyen Huu Dung
Class: Information Technology - CTEST
Environment
MPICH/MPICH2 (MPI implementation)
Free software; supports Unix, Linux, Mac OS, and Windows. Not popular, not much documentation...
One of the predecessors of the Open MPI project. LAM is an MPI programming environment and development system.
Cluster friendly, grid capable.
Uses TCP/IP, shared memory, Myrinet (GM), or InfiniBand (mVAPI).
Supported languages: C, C++, and Fortran.
LAM-MPI Installation
Install gfortran: sudo apt-get install gfortran
Download the LAM-MPI source & extract it using tar
Set env: CC=cc ; CXX=CC ; FC=gfortran
./configure --prefix=/usr/local/lammpi --without-fc
..... many output here .....
sudo make
..... many output here .....
sudo make install
..... many output here .....
Add to $PATH env: PATH=$PATH:/usr/local/lammpi
The best way is to use VirtualBox.
The cluster network has 2 nodes (can be more).
Use a bridged connection between the 2 machines.
SSH (Secure Shell) runs as a daemon.
Public key (RSA) authentication is used.
sudo apt-get install bridge-utils
sudo vim /etc/network/interfaces
sudo vim /etc/hosts
192.168.1.108 machine001
192.168.1.108 machine002
Enter passphrase
On machine001:
On machine002:
machine001 machine002
LAM-MPI Hello_World
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                   /* starts MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* get current process id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* get number of processes */
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
LAM-MPI Hello_World
Collective Communication
Communication involves a group or groups of processes. There is a process called root, which manages the other processes.
Demo using: MPI_Bcast, MPI_Scatter, MPI_Reduce, MPI_Gather.
MPI_Bcast
MPI_Bcast(void *buf, int count, MPI_Datatype dtype, int root, MPI_Comm comm);
*buf holds the data of the root process. After the broadcast operation, every process gets the same value in *buf as the root process.
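A minimal sketch of MPI_Bcast in use, assuming rank 0 is the root; the variable name value and the constant 42 are only for illustration:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;                      /* only the root fills the buffer */

    /* every process calls MPI_Bcast; afterwards all have value == 42 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}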
MPI_Scatter
MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm);
Divides *sendbuf into N equal partitions (N = number of processes) and sends one partition to each process, in rank order.
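A minimal sketch, assuming rank 0 holds the full array and each process receives 2 elements; the array contents and the chunk size of 2 are invented for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    int recvbuf[2];                      /* each process receives 2 elements */
    int *sendbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                     /* only the root needs the full array */
        sendbuf = malloc(2 * size * sizeof(int));
        for (int i = 0; i < 2 * size; i++)
            sendbuf[i] = i;
    }

    /* partition sendbuf: rank r gets elements 2*r and 2*r+1 */
    MPI_Scatter(sendbuf, 2, MPI_INT, recvbuf, 2, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d got %d and %d\n", rank, recvbuf[0], recvbuf[1]);

    if (rank == 0) free(sendbuf);
    MPI_Finalize();
    return 0;
}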
MPI_Reduce
MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype dtype, MPI_Op op, int root, MPI_Comm comm);
Each process sends count elements from its send buffer; they are combined element-wise with the operation op, and the result is delivered to the root process.
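A minimal sketch of a reduction to rank 0; the choice of MPI_SUM and the per-process value rank + 1 are only for illustration:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank + 1;                    /* each process contributes one value */

    /* root (rank 0) receives the element-wise sum of all local values */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of contributions = %d\n", total);

    MPI_Finalize();
    return 0;
}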
MPI_Reduce
MPI_MAX / MPI_MIN
MPI_SUM
MPI_PROD
MPI_LAND / MPI_BAND
MPI_LOR / MPI_BOR
MPI_LXOR / MPI_BXOR
MPI_Gather
MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm);
MPI_Gather is the exact inverse of MPI_Scatter(). After gathering, the root process holds the data from every process in the group.
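A minimal sketch, assuming each process contributes a single int and rank 0 gathers them; the value rank * 10 and the MAX_PROCS bound are assumptions for illustration:

#include <stdio.h>
#include <mpi.h>

#define MAX_PROCS 64                     /* assumed upper bound for the demo */

int main(int argc, char *argv[]) {
    int rank, size;
    int sendval;
    int recvbuf[MAX_PROCS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendval = rank * 10;                 /* each process sends one value */

    /* root collects one element from every process, ordered by rank */
    MPI_Gather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);

    MPI_Finalize();
    return 0;
}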
Demo - Problem
Problem:
Input: an array of n elements.
Output: the Min & Max element of that array.
Environment: 2 machines (RAM: 512 MB, HDD: 8 GB), bridge connection.
Demo - Solution
(Diagram: local MIN/MAX per process, combined at the root.)
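The slides do not include the demo source, so the following is only a hedged sketch of the scatter / local scan / reduce structure it describes; the chunk size PER_PROC, the use of rand() to fill the input, and the assumption that the array divides evenly among the processes are all illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define PER_PROC 1000                    /* assumed chunk size per process */

int main(int argc, char *argv[]) {
    int rank, size;
    int *data = NULL;
    int chunk[PER_PROC];
    int local_min, local_max, global_min, global_max;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                     /* root creates the full input array */
        data = malloc(PER_PROC * size * sizeof(int));
        for (int i = 0; i < PER_PROC * size; i++)
            data[i] = rand();
    }

    /* distribute equal chunks to every process */
    MPI_Scatter(data, PER_PROC, MPI_INT, chunk, PER_PROC, MPI_INT,
                0, MPI_COMM_WORLD);

    /* each process scans only its own chunk */
    local_min = local_max = chunk[0];
    for (int i = 1; i < PER_PROC; i++) {
        if (chunk[i] < local_min) local_min = chunk[i];
        if (chunk[i] > local_max) local_max = chunk[i];
    }

    /* combine the local results at the root */
    MPI_Reduce(&local_min, &global_min, 1, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&local_max, &global_max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Min = %d, Max = %d\n", global_min, global_max);
        free(data);
    }

    MPI_Finalize();
    return 0;
}

Two MPI_Reduce calls are used so that MPI_MIN and MPI_MAX can each be applied; only the root prints the final result.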
Demo - Result
N \ n   100     1000    10000   20000   50000
1       10      12      680     2720    17320
2       13      15      190     710     4380
3       30      31      110     340     1950
4       35      38      80      210     1130
6       37      40      68      120     540
8       50      51      60      90      340

n = number of elements in the input array
N = number of threads used
All times in Time Units (TU): 1 TU = clock ticks / 1000
Demo - Execution
Conclusion
LAM-MPI (now Open MPI) is a good environment for the Message Passing Interface.
Configuring LAM-MPI is not easy.
Collective communication lets the processes in a group communicate with each other and exchange messages under the root process's control.
Parallel programming pays off when the input size is large.