This document discusses numerical integration using the trapezoidal rule in MPI. It explains how to implement the trapezoidal rule in C with MPI to calculate the integral of a function in parallel. It covers enhancements such as handling input/output and calculating a global sum, and it introduces collective communication functions like MPI_Reduce, which compute a global sum more efficiently than point-to-point communication when many processes are involved. Collective communication requires all processes to call the same function with compatible arguments, whereas point-to-point communication is matched using communicators and tags.


MPI: Numerical Integration, P2P and Collective Communication

Kameswararao Anupindi

Department of Mechanical Engineering


Indian Institute of Technology Madras (IITM)

March, 2024

1 / 32
A few potential pitfalls of MPI_Send/MPI_Recv

[Diagram: Process A calls MPI_Send to transmit x; Process B calls MPI_Recv to receive it into its own x. The arguments of the two calls must match for the transfer to succeed.]

▶ Non-matching tags
▶ The rank of the destination process is the same as that of the source (a process sending to itself with a blocking MPI_Send can hang).

2 / 32
The Trapezoidal Rule approximation

With n subintervals of width h = (b − a)/n, the composite trapezoidal rule is

∫_a^b f(x) dx ≈ (h/2) [f(x0) + f(xn) + 2 (f(x1) + f(x2) + ... + f(x(n−1)))]   (1)

3 / 32
The Trapezoidal Rule using MPI in C

4 / 32
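The C code on this slide is not reproduced in the text, so the following is a sketch of the standard parallel trapezoidal-rule program in the spirit of the lecture: each process integrates its own subinterval, and process 0 collects the partial results with point-to-point receives. The integrand f(x) = x², the hard-coded a, b, n, and the assumption that nprocs evenly divides n are all simplifications for illustration.

```c
#include <stdio.h>
#include <mpi.h>

double f(double x) { return x * x; }   /* example integrand (an assumption) */

/* Serial trapezoidal rule on [a, b] with n trapezoids of width h. */
double trapezoid(double a, double b, int n, double h) {
    double sum = (f(a) + f(b)) / 2.0;
    for (int i = 1; i < n; i++)
        sum += f(a + i * h);
    return h * sum;
}

int main(void) {
    int rank, nprocs;
    double a = 0.0, b = 1.0;   /* interval, hard-coded here for simplicity */
    int n = 1024;              /* total number of trapezoids               */

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double h = (b - a) / n;          /* h is the same on every process     */
    int local_n = n / nprocs;        /* assumes nprocs evenly divides n    */
    double local_a = a + rank * local_n * h;
    double local_b = local_a + local_n * h;
    double local_int = trapezoid(local_a, local_b, local_n, h);

    if (rank != 0) {
        /* Every non-root process sends its partial integral to process 0. */
        MPI_Send(&local_int, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else {
        /* Process 0 receives and accumulates the partial integrals. */
        double total = local_int, piece;
        for (int src = 1; src < nprocs; src++) {
            MPI_Recv(&piece, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += piece;
        }
        printf("With n = %d trapezoids, integral from %f to %f = %.15f\n",
               n, a, b, total);
    }

    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc` and run with `mpirun -n <nprocs>`; the receive loop on process 0 is exactly the "original sum" that the later slides improve upon.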
The Trapezoidal Rule using MPI in C contd...

5 / 32
The Trapezoidal Rule using MPI in C contd...

6 / 32
The Trapezoidal Rule - Enhancements - Dealing with input and output

7 / 32
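The input/output enhancement on this slide can be sketched as follows: only process 0 performs the `scanf`, then forwards a, b, and n to every other process with point-to-point sends. The function name `Get_input` is a hypothetical label for this sketch.

```c
#include <stdio.h>
#include <mpi.h>

/* Rank 0 reads a, b, n from stdin and forwards them with MPI_Send;
   every other rank receives its copy. A single collective broadcast
   would be the more efficient alternative. */
void Get_input(int rank, int nprocs, double *a, double *b, int *n) {
    if (rank == 0) {
        printf("Enter a, b, and n\n");
        scanf("%lf %lf %d", a, b, n);
        for (int dest = 1; dest < nprocs; dest++) {
            MPI_Send(a, 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
            MPI_Send(b, 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
            MPI_Send(n, 1, MPI_INT,    dest, 0, MPI_COMM_WORLD);
        }
    } else {
        MPI_Recv(a, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(b, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(n, 1, MPI_INT,    0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
}
```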
The Trapezoidal Rule - Enhancements - Calculating global sum

▶ Original sum (with 8 processes): process 0 performs 7 receives and adds
▶ Tree sum: at most 3 receives and adds on any process
▶ If nprocs = 1024, the tree sum would do only 10 receives and adds
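The tree-structured sum above can be sketched with point-to-point calls. In round k, each process whose rank has bit k set sends its partial sum to the partner 2^k ranks below it and drops out; the rest receive and add. The function name `Tree_sum` is an assumption for this sketch, and the log2(nprocs)-round count holds when nprocs is a power of two.

```c
#include <mpi.h>

/* Tree-structured global sum of one double per process.
   The final result is valid only on rank 0. */
double Tree_sum(double local_val, int rank, int nprocs) {
    double sum = local_val, recvd;
    for (int step = 1; step < nprocs; step *= 2) {
        if (rank % (2 * step) != 0) {
            /* This process's turn to send its partial sum, then stop. */
            MPI_Send(&sum, 1, MPI_DOUBLE, rank - step, 0, MPI_COMM_WORLD);
            break;
        } else if (rank + step < nprocs) {
            /* Receive a partner's partial sum and fold it in. */
            MPI_Recv(&recvd, 1, MPI_DOUBLE, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            sum += recvd;
        }
    }
    return sum;
}
```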

8 / 32
The Trapezoidal Rule - Calculating global sum - another way

▶ Several tree structures are possible.


▶ One structure may work best for a small number of processes, another for a large number.
▶ One may work best on system A, another on system B.
▶ Rather than making the programmer choose, MPI provides an optimized global sum in the form of Collective Communication.

9 / 32
Collective Communication - MPI_Reduce

10 / 32
Collective Communication - MPI_Reduce

Predefined reduction operators:

MPI_MAX  (maximum)       MPI_LOR    (logical OR)
MPI_MIN  (minimum)       MPI_BAND   (bitwise AND)
MPI_SUM  (sum)           MPI_BOR    (bitwise OR)
MPI_PROD (product)       MPI_MAXLOC (maximum and its location)
MPI_LAND (logical AND)   MPI_MINLOC (minimum and its location)

11 / 32
Collective communication: Reduce

MPI_Reduce(
    void         *send_buffer,    /* data contributed by each process    */
    void         *receive_buffer, /* result; significant only on root    */
    int          count,           /* number of elements in send_buffer   */
    MPI_Datatype datatype,        /* e.g. MPI_DOUBLE                     */
    MPI_Op       operator,        /* e.g. MPI_SUM                        */
    int          root,            /* rank that receives the result       */
    MPI_Comm     communicator)
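With MPI_Reduce, the entire receive loop of the trapezoidal program collapses to a single call made by every process. In the minimal sketch below, the rank-valued `local_int` is a stand-in for each process's partial integral.

```c
#include <stdio.h>
#include <mpi.h>

int main(void) {
    int rank;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_int = (double)rank;   /* stand-in for a partial integral */
    double total_int = 0.0;

    /* Every process calls MPI_Reduce with compatible arguments;
       only the root (rank 0) receives the global sum. */
    MPI_Reduce(&local_int, &total_int, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("Global sum = %f\n", total_int);

    MPI_Finalize();
    return 0;
}
```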

12 / 32
Difference between Collective and P2P communications

▶ All the processes in the communicator must call the same MPI Collective Communication (CC) function.
▶ The arguments passed by each process to an MPI CC call must be compatible (e.g. every process must pass the same root).
▶ Every process must supply an output_data_p (receive buffer) argument, even though its contents are significant only on the root.
▶ While P2P communications are matched using communicators and tags, MPI CC calls carry no tags and are matched solely on the communicator and the order in which they are called.

13 / 32
