Experiments
Aim:- To demonstrate parallel vector addition and dot product using Python multiprocessing.
Algorithm:
1. Input: Two vectors A and B of the same length.
2. Output: The element-wise sum (A + B) and the dot product of A and B.
3. Steps:
○ Divide the work of adding the corresponding elements of the two vectors among multiple processes.
○ Divide the work of computing the dot product among multiple processes.
○ Collect the partial results from the worker processes and display the sum vector and the dot product.
Code:-
from multiprocessing import Pool

def add_elements(pair):
    return pair[0] + pair[1]

if __name__ == "__main__":
    A = [1, 2, 3, 4, 5]
    B = [10, 20, 30, 40, 50]
    # Share the element-wise additions among a pool of worker processes
    with Pool() as pool:
        result = pool.map(add_elements, list(zip(A, B)))
    print("Vector A:", A)
    print("Vector B:", B)
    print("Vector Addition (A + B):", result)
Code:-
from multiprocessing import Pool

def multiply_elements(pair):
    return pair[0] * pair[1]

if __name__ == "__main__":
    A = [1, 2, 3, 4, 5]
    B = [10, 20, 30, 40, 50]
    # Compute the element-wise products in parallel, then sum the partial results
    with Pool() as pool:
        dot_product = sum(pool.map(multiply_elements, list(zip(A, B))))
    print("Vector A:", A)
    print("Vector B:", B)
    print("Dot Product:", dot_product)
Aim:- To demonstrate OpenMP-style work-sharing (parallel loop and parallel sections) using Python multiprocessing.
Algorithm
1. Loop Work-Sharing Algorithm
1. Define a computational function (e.g., square of a number).
2. Create a list of inputs.
3. Use a process pool to map the function over the inputs in parallel.
4. Collect and display the results.
Code:-
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    # The pool shares the loop iterations among its worker processes
    with Pool() as pool:
        results = pool.map(square, data)
    print("Squares:", results)
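A note on the work-sharing itself: the number of worker processes plays a role similar to OMP_NUM_THREADS and can be fixed explicitly with Pool(processes=4), while pool.map(square, data, chunksize=3) hands each worker three iterations at a time, roughly analogous to choosing a chunk size for an OpenMP work-shared loop.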
2. Sections Work-Sharing Algorithm
1. Define the independent tasks (e.g., preprocessing, training, evaluation).
2. Create one process per task, each acting as a "section".
3. Start all processes so that the sections run concurrently.
4. Wait for all processes to finish (join).
Code:-
from multiprocessing import Process
import time

def task1():
    time.sleep(1)
    print("Task 1: Data preprocessing done.")

def task2():
    time.sleep(2)
    print("Task 2: Model training done.")

def task3():
    time.sleep(1.5)
    print("Task 3: Evaluation done.")

if __name__ == "__main__":
    # Creating processes (sections)
    p1 = Process(target=task1)
    p2 = Process(target=task2)
    p3 = Process(target=task3)
    # Start all sections concurrently
    p1.start()
    p2.start()
    p3.start()
    # Wait for all sections to finish
    p1.join()
    p2.join()
    p3.join()
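Because the three sections run in separate processes, they overlap in time: the expected wall-clock time is roughly max(1, 2, 1.5) = 2 seconds, rather than the 1 + 2 + 1.5 = 4.5 seconds a purely sequential run of the same tasks would take.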
Aim:- To demonstrate OpenMP – Combined parallel loop reduction and Orphaned parallel
loop reduction
Algorithm
Algorithm for Combined Parallel Loop Reduction:
1. Input: A list of values (e.g., numbers to sum) and a reduction operation (e.g., sum).
2. Output: The final reduced value (e.g., the total sum).
3. Steps:
    1. Split the input data into chunks that can be processed in parallel.
    2. Each worker computes a partial result (e.g., a partial sum) over its chunk.
    3. Combine all the partial results from all workers into a final result (reduction operation).
Code:-
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    # Split the data into chunks, one per worker
    chunks = [data[i:i + 3] for i in range(0, len(data), 3)]
    with Pool() as pool:
        partial_sums = pool.map(partial_sum, chunks)
    # Combine the partial results (the reduction step)
    total_sum = sum(partial_sums)
    print("Total sum:", total_sum)
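The final combine step is itself the reduction operation; it can also be written explicitly with functools.reduce, as in this small illustration (the partial sums shown are the ones produced by the three chunks above):

from functools import reduce

partial_sums = [6, 15, 24]   # sums of [1, 2, 3], [4, 5, 6], [7, 8, 9]
total_sum = reduce(lambda x, y: x + y, partial_sums, 0)
print(total_sum)             # 45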
Algorithm
Algorithm for Orphaned Parallel Loop Reduction:
1. Input: A list of values and a shared reduction variable (e.g., a global sum).
2. Output: The final value of the shared reduction variable.
3. Steps:
    1. The parallel loop and its reduction live in a separate routine that is called from the main program (the reduction is "orphaned" from the enclosing parallel context).
    2. Each worker contributes its partial result to the shared reduction variable.
    3. If not synchronized properly, multiple threads may write to the shared variable at the same time, causing issues (orphaned updates).
Code:-
from multiprocessing import Pool

def identity(x):
    return x

def compute_sum(data):
    # The parallel loop and the reduction are "orphaned" inside this helper function
    with Pool() as pool:
        partial_sums = pool.map(identity, data)
    return sum(partial_sums)

if __name__ == "__main__":
    data = [10, 20, 30, 40, 50]
    result = compute_sum(data)
    print("Sum using orphaned parallel loop:", result)
Exp 4. OpenMP – Matrix multiply (to be run on a GPU card with large-scale data; the complexity of the problem should be specified)
Code:-
import numpy as np
from multiprocessing import Pool

def multiply_row(args):
    # Compute one row of the product: (row of A) · B
    row, B = args
    return np.dot(row, B)

def matrix_multiply(A, B):
    C = np.zeros((A.shape[0], B.shape[1]))
    # Distribute the rows of A among the worker processes
    with Pool() as pool:
        result = pool.map(multiply_row, [(A[i], B) for i in range(A.shape[0])])
    for i in range(A.shape[0]):
        C[i] = result[i]
    return C

if __name__ == "__main__":
    # Large matrices
    A = np.random.rand(1000, 1000)
    B = np.random.rand(1000, 1000)
    C = matrix_multiply(A, B)
    print("Matrix multiplication complete. Result shape:", C.shape)
if __name__ == "__main__":
    # Large matrices (for GPU, much larger data can be used)
    A = np.random.rand(1000, 1000)
    B = np.random.rand(1000, 1000)
    C = matrix_multiply_gpu(A, B)
    print("Matrix multiplication complete on GPU. Result shape:", C.shape)
Output:-
Aim:- To demonstrate basic MPI functionality in Python using the mpi4py library.
Algorithm:-
Step 1: Import MPI from the mpi4py library.
Step 2: Define a function mpi_hello_world().
Step 3: Inside it, obtain the communicator MPI.COMM_WORLD.
Step 4: For each process in the communicator:
Step 5: Determine:
● Its rank
● The total number of processes (size)
Step 6: Print a hello message containing the rank and the size.
Code:-
from mpi4py import MPI

def mpi_hello_world():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    print(f"Hello World from process {rank} of {size}")

if __name__ == "__main__":
    mpi_hello_world()
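The script is launched through the MPI runtime, for example mpiexec -n 4 python hello_mpi.py (the file name is just an assumption about how the code above is saved); each of the four processes then prints its own rank out of the total size.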
Output:-
Aim:- To demonstrate how one MPI process (e.g., rank 0) sends a message and another process (e.g., rank 1) receives it.
Code:-
from mpi4py import MPI

def point_to_point_mpi():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    if rank == 0:
        data = "Hello from Process 0"
        comm.send(data, dest=1, tag=11)
        print(f"Process {rank} sent data: '{data}' to Process 1")
    elif rank == 1:
        received_data = comm.recv(source=0, tag=11)
        print(f"Process {rank} received data: '{received_data}' from Process 0")

if __name__ == "__main__":
    point_to_point_mpi()
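This program needs at least two processes, e.g. mpiexec -n 2 python point_to_point.py (file name assumed); with a single process, rank 1 does not exist and the send to dest=1 would fail with an MPI error.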
Output:-
Aim:- To demonstrate synchronization using MPI Barrier, which ensures that all processes
reach the same point in the program before proceeding.
Algorithm:
1. Initialize MPI and obtain the rank and size from MPI.COMM_WORLD.
2. Each process announces that it has reached the barrier.
3. Call comm.Barrier(); no process continues until all processes have reached this point.
4. After the barrier, each process continues (here after a short, rank-dependent delay) and prints a completion message.
Code:-
from mpi4py import MPI
import time

def mpi_synchronization_demo():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    print(f"Process {rank} of {size} reached the barrier")
    # Synchronization point: no process proceeds until all have arrived
    comm.Barrier()
    time.sleep(0.1 * rank)  # Stagger the messages printed after the barrier
    print(f"Process {rank} passed the barrier")

if __name__ == "__main__":
    mpi_synchronization_demo()
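Barrier guarantees that no process executes the statements after comm.Barrier() until every process has reached the call; the rank-dependent sleep afterwards merely staggers the final messages. When the demo is run with several processes (e.g. mpiexec -n 4), the messages printed before the barrier therefore typically all appear before any of the messages printed after it.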
Output:-
Aim:- To implement MPI collective operations (scatter and gather) in Python using the mpi4py library, which provides bindings for the MPI functions.
Algorithm
1. Initialization:
● Prepare a data array (e.g., [0, 1, 2, ..., size-1]) on the root process, with one element (data chunk) for each process.
2. Scatter Operation:
● Use the scatter function to distribute the data from the root process to all other processes.
● Each process (including the root) receives a portion of the data from the root.
3. Computation:
● Each process (including the root) performs a computation on the received data (e.g., doubling the value).
4. Gather Operation:
● After computation, each process sends its result back to the root process using the gather function.
● The root process gathers all the processed data from all the other processes.
5. Display Results:
● Each process prints the data it received and the computation result.
● The root process prints the final gathered result.
Code:-
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Initialization: the root prepares one data element per process
if rank == 0:
    data = [i for i in range(size)]
else:
    data = None

# Scatter operation: distribute the data from the root to all processes
local_data = comm.scatter(data, root=0)

# Computation: each process doubles the value it received
local_result = local_data * 2
print(f"Process {rank} received {local_data} and computed {local_result}")

# Gather operation: send the processed data from all processes to root
gathered_data = comm.gather(local_result, root=0)

# Display results: the root prints the final gathered list
if rank == 0:
    print("Root gathered:", gathered_data)
Output:-
Aim:- To distribute data with MPI Scatter, perform a local computation on every process, and collect the results at the root with Gather (or Reduce).
Algorithm:-
1. Initialize the MPI environment.
2. Get the number of processes and the rank of the current process.
3. On the root process:
○ Create a list of numbers whose squares are to be computed.
4. Use Scatter to distribute parts of the list to all processes.
5. Each process computes the square of its number (local computation).
6. Use Gather or Reduce to collect results at the root process.
7. Display the final results at the root process.
Code:-
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Step 1: Root process prepares the data
if rank == 0:
    data = [i for i in range(size)]
else:
    data = None

# Step 2: Scatter data to all processes
recv_data = comm.scatter(data, root=0)

# Step 3: Each process performs computation (e.g., square)
local_result = recv_data ** 2

# Step 4: Gather the results to root process
results = comm.gather(local_result, root=0)

# Step 5: Root process displays final result
if rank == 0:
    print("Input data:", [i for i in range(size)])
    print("Squared results (gathered):", results)
Output:-
Aim:- To demonstrate non-blocking point-to-point communication in MPI using Isend and Irecv, so that a process can overlap communication with other work.
Code:-
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    data = b"Hello from Process 0 (non-blocking)"
    # Non-blocking send: returns immediately with a request object
    req = comm.Isend([data, MPI.CHAR], dest=1, tag=10)
    print("Process 0 is doing something else while sending...")
    time.sleep(1)  # Simulate other work
    req.Wait()
    print("Process 0: non-blocking send completed")
elif rank == 1:
    buf = bytearray(100)  # Buffer to hold the incoming message
    # Non-blocking receive into the pre-allocated buffer
    req = comm.Irecv([buf, MPI.CHAR], source=0, tag=10)
    print("Process 1 is doing something else while receiving...")
    time.sleep(1)  # Simulate other work
    req.Wait()
    print("Process 1 received:", buf.decode().rstrip('\x00'))
Output:-