
Parallel Computing

OpenMPI, Demo Version: Collective Communication
Students: Phung Dinh Vu, Vu Minh Thanh
Lecturer: Nguyen Huu Dung
Class: Information Technology - CTEST

Environment

MPICH/MPICH2

Free software; supports Unix, Linux, Mac OS, and Windows. Not very popular, with little documentation.

Using LAM-MPI (Local Area Multicomputer - Message Passing Interface)

One of the predecessors of the Open MPI project. LAM is an MPI programming environment and development system. Cluster friendly and grid capable. Uses TCP/IP, shared memory, Myrinet (GM), or InfiniBand (mVAPI). Supported languages: C, C++, and Fortran.


Installing LAM-MPI

Install gfortran: sudo apt-get install gfortran
Download the LAM-MPI source & extract it using tar
Set env: CC=cc ; CXX=CC ; FC=gfortran

./configure --prefix=/usr/local/lammpi --without-fc
..... many output here .....
sudo make
..... many output here .....
sudo make install
..... many output here .....
Add to $PATH env: PATH=$PATH:/usr/local/lammpi

Create Cluster Network

The best way is to use VirtualBox. The cluster network has 2 nodes (more are possible).

Machine001: phungdinhvu@192.168.1.108
Machine002: phungdinhvu@192.168.1.118
RAM: 512 MB + HDD: 8 GB

Use a bridged connection between the 2 machines. SSH (Secure Shell) runs as a daemon. Public key (RSA) authentication is used.

How to Create a Bridge Connection

sudo apt-get install bridge-utils
sudo vim /etc/network/interfaces

auto eth0
iface eth0 inet manual
auto br0
iface br0 inet static
    address 192.168.1.108
    network 192.168.1.0
    gateway 192.168.1.1
    bridge_ports eth0

sudo vim /etc/hosts

192.168.1.108 machine001
192.168.1.118 machine002

How to Set Up SSH Authentication

sudo apt-get install ssh
cd ~/.ssh
ssh-keygen -t rsa

Enter passphrase

Do the same for both machines

Machine001: 192.168.1.108 Machine002: 192.168.1.118

On machine001:

cat id_rsa.pub | ssh phungdinhvu@machine002 'cat >> ~/.ssh/authorized_keys'

On machine002:

cat id_rsa.pub | ssh phungdinhvu@machine001 'cat >> ~/.ssh/authorized_keys'


How to start/stop with LAM-MPI


touch ~/lamhosts
vim ~/lamhosts


machine001
machine002

To start: lamboot -v ~/lamhosts
To stop: lamhalt


LAM-MPI Hello_World

#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* starts MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* get current process id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* get number of processes */
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
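To compile and run on the booted LAM nodes, use LAM-MPI's mpicc and mpirun (the file name hello.c and the process count are assumptions):

mpicc hello.c -o hello
mpirun -np 4 ./hello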

Collective Communication

Communication involves a group or groups of processes. One process, called the root, coordinates the other processes. Demo using:

MPI_Bcast
MPI_Scatter
MPI_Reduce
MPI_Gather



MPI_Bcast

MPI_Bcast(void *buf, int count, MPI_Datatype dtype, int root, MPI_Comm comm);

*buf is the data of the root process. After the broadcast operation, every process's buffer holds the same value as the root's.
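A minimal sketch (not the demo's collective_comm.c; the value 42 is illustrative): root 0 broadcasts one integer to every process.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        value = 42;            /* only root has the value before the call */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d now has value %d\n", rank, value);  /* every rank prints 42 */
    MPI_Finalize();
    return 0;
}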


MPI_Scatter

MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm);

Divides *sendbuf into N equal partitions (N = number of processes) and sends one partition to each process in rank order. Note that sendcount is the number of elements sent to each process, not the total.
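A minimal sketch (the array contents and the 4-process assumption are illustrative, not from the demo): root 0 scatters an 8-element array, 2 elements per process.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    int sendbuf[8] = {3, 1, 4, 1, 5, 9, 2, 6};  /* meaningful on root only */
    int recvbuf[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* sendcount = 2: each process receives 2 elements; run with mpirun -np 4 */
    MPI_Scatter(sendbuf, 2, MPI_INT, recvbuf, 2, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d got %d and %d\n", rank, recvbuf[0], recvbuf[1]);
    MPI_Finalize();
    return 0;
}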


MPI_Reduce

MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype dtype, MPI_Op op, int root, MPI_Comm comm);

Each process contributes count elements from its send buffer; the root combines them element-wise with the operation op and receives the result in *recvbuf.
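A minimal sketch (the local values are illustrative): each process contributes one integer and root 0 receives the maximum, using the pre-defined MPI_MAX operation listed on the next slide.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, local, global_max;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    local = (rank * 7) % 5;    /* each process's local value */
    MPI_Reduce(&local, &global_max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)             /* only root holds the reduced result */
        printf("Max over all processes: %d\n", global_max);
    MPI_Finalize();
    return 0;
}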


MPI_Reduce

Some pre-defined MPI operations are:

MPI_MAX / MPI_MIN (maximum / minimum)
MPI_SUM (sum)
MPI_PROD (product)
MPI_LAND / MPI_BAND (logical / bitwise AND)
MPI_LOR / MPI_BOR (logical / bitwise OR)
MPI_LXOR / MPI_BXOR (logical / bitwise XOR)

MPI_Gather

MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm);

MPI_Gather is the exact reverse of MPI_Scatter(): after gathering, the root process holds the full data of every process in the group, ordered by rank.
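A minimal sketch (the per-process value rank*rank is illustrative): each process sends one integer and root 0 collects them in rank order.

#include <stdio.h>
#include <mpi.h>

#define MAX_PROCS 64

int main(int argc, char *argv[]) {
    int rank, size, i;
    int recvbuf[MAX_PROCS];    /* meaningful on root only */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int local = rank * rank;   /* each process's contribution */
    MPI_Gather(&local, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0)
        for (i = 0; i < size; i++)
            printf("From process %d: %d\n", i, recvbuf[i]);
    MPI_Finalize();
    return 0;
}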


Demo - Problem

Problem:

Input:

An integer array a[1..n]
N: number of threads

Output: the min & max elements of that array

Cluster Network info:

2 machines, RAM: 512 MB, HDD: 8 GB
Bridged connection

Demo - Solution

[Diagram: finding the MAX of the array in parallel]
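A minimal sketch of one possible solution (this is an assumption; the demo's actual implementation is in collective_comm.c): root 0 scatters the array, each process computes the min/max of its chunk, and two MPI_Reduce calls with MPI_MIN and MPI_MAX combine the local results at root.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, i, n = 16;            /* n assumed divisible by size */
    int *a = NULL, *chunk;
    int local_min, local_max, min, max;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int m = n / size;                     /* elements per process */
    chunk = malloc(m * sizeof(int));
    if (rank == 0) {                      /* root fills the input array */
        a = malloc(n * sizeof(int));
        for (i = 0; i < n; i++) a[i] = rand() % 1000;
    }
    MPI_Scatter(a, m, MPI_INT, chunk, m, MPI_INT, 0, MPI_COMM_WORLD);
    local_min = local_max = chunk[0];     /* min/max of this process's chunk */
    for (i = 1; i < m; i++) {
        if (chunk[i] < local_min) local_min = chunk[i];
        if (chunk[i] > local_max) local_max = chunk[i];
    }
    MPI_Reduce(&local_min, &min, 1, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&local_max, &max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("min = %d, max = %d\n", min, max);
    free(chunk);
    if (rank == 0) free(a);
    MPI_Finalize();
    return 0;
}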


Demo - Result

N \ n    100    1000   10000   20000   50000
1        10     12     680     2720    17320
2        13     15     190     710     4380
3        30     31     110     340     1950
4        35     38     80      210     1130
6        37     40     68      120     540
8        50     51     60      90      340

n = number of elements in the input array
N = number of threads used
All times are in Time Units (TU): clock ticks / 1000

Demo - Execution

Live demo. See the source code in the collective_comm.c file.


Conclusion

LAM-MPI (now Open MPI) is a good environment for the Message Passing Interface. Configuring LAM-MPI is not easy. Collective communication lets processes in a group communicate with each other and exchange messages under the root process's control. Parallel programming pays off when the input size is large.