Module-4 CC Notes

The document covers key concepts in parallelism, multiprocessing, and task computing, defining terms such as multicore systems and various frameworks for task computing. It explains the relationship between processes and threads, highlighting their differences and applications in multithreading. Additionally, it discusses MPI program structure and techniques for parallel computation, as well as workflow technologies and their execution frameworks.


MODULE-3:

(2 MARKS)

Q1. Define parallelism & multiprocessing


Ans: Parallelism is a technique for improving the performance of computers by carrying out multiple computations at the same time rather than sequentially.
Multiprocessing is the execution of multiple programs on a single machine, typically one with more than one processor.

Q2. What is multicore system


Ans. A multicore system is composed of a single processor featuring multiple processing cores that share the same memory. Each core generally has its own L1 cache, while the L2 cache is common to all the cores, which connect to it by means of a shared bus.
Q3. List the Frameworks for Task computing
Ans. 1. Condor
2. Globus Toolkit
3. Sun Grid Engine
4. BOINC
5. Nimrod/G
6. Aneka
Q4. What is Task Computing?
Ans. Task computing is a wide area of distributed system programming encompassing several different models for architecting distributed applications, all of which are ultimately based on the same fundamental abstraction: the task.
Q5. List four Task-based application models.
Ans. Task-based application models:
1. Embarrassingly parallel applications
2. Parameter sweep applications
3. MPI applications
4. Workflow applications with task dependencies

Q6. What are the categories of Computing?


Ans. Categories of computing:
1. High-performance computing (HPC)
2. High-throughput computing (HTC)
3. Many-task computing (MTC)
(8 MARKS)
Q1. Explain the relationship between Processes and Threads

Ans: Process
- A process is an instance of a running program.
- It has its own memory space, system resources (like file handles and sockets), and one or more threads.
- It is isolated from other processes.
Thread
- A thread is the smallest unit of execution within a process.
- Multiple threads can exist within a single process, sharing the same memory space and resources.
- Threads run concurrently, allowing parallelism.
Relationship Between Process and Threads
Process                                   | Threads
A process can contain one or more threads | Threads are part of a process
Each process has its own memory           | Threads share memory within the process
Processes are isolated from each other    | Threads are not isolated
Heavyweight (more overhead)               | Lightweight (less overhead)
Switching between processes is expensive  | Switching between threads is faster

Applications in cloud computing:
1. Multithreaded Applications:
   - Web servers (like Nginx and Apache), databases, and microservices use threads to handle multiple client requests simultaneously; a small sketch follows this list.
   - Example: In AWS Lambda or Azure Functions, a function instance (a process) may spawn multiple threads to handle parallel tasks.
2. Containerization & Microservices:
   - A microservice may run in its own process inside a container (e.g., Docker), but internally it uses threads to manage concurrent operations.
3. Load Balancing & Autoscaling:
   - Cloud platforms scale by adding more processes (instances or containers) or by optimizing performance within a process using multithreading.
4. Virtual Machines:
   - A single VM process may run multiple threads for I/O operations, CPU scheduling, etc.
5. Parallel & Distributed Computing:
   - Cloud-native apps often use multithreaded programming models (e.g., multithreaded Python, Java, or Go applications) for efficient parallelism.
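
A minimal Python sketch of this relationship, assuming nothing beyond the standard library: several threads inside one process update the same variable, showing that threads share the process's memory (the names worker and counter are illustrative, not from the notes).

import threading

counter = 0              # shared state: visible to every thread in this process
lock = threading.Lock()  # protects the counter from lost updates

def worker(increments):
    # Each thread applies the same update to the same shared memory.
    global counter
    for _ in range(increments):
        with lock:
            counter += 1

# Four threads inside a single process; a separate *process* would
# get its own private copy of counter instead of sharing this one.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: all four threads mutated the same memory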
Q2. Explain MPI program structure with a neat diagram

Ans: Message Passing Interface (MPI) is a specification for developing parallel programs that communicate by exchanging messages. Compared with earlier models, MPI introduces the constraint that the communicating MPI tasks must be running at the same time. MPI originated as an attempt to create common ground among the several earlier distributed shared-memory and message-passing environments.
MPI provides developers with a set of routines that:
- Manage the distributed environment where MPI programs are executed
- Provide facilities for point-to-point communication
- Provide facilities for group communication
- Provide support for data structure definition and memory allocation
- Provide basic support for synchronization with blocking calls
To create an MPI application it is necessary to define the code for the MPI process that will be executed in parallel. This program has, in general, a standard structure: the section of code executed in parallel is clearly identified by two operations that set up the MPI environment (MPI_Init) and shut it down (MPI_Finalize), respectively. In the code section defined within these two operations, it is possible to use all the MPI functions to send or receive messages in either asynchronous or synchronous mode.
A common model used in MPI is the master-worker model, whereby one MPI process (usually the one with rank 0) coordinates the execution of the others, which all perform the same task. Once the program has been defined in one of the available MPI implementations, it is compiled with a modified version of the compiler for the language, which introduces additional code to properly manage the MPI runtime. The output of the compilation can then be run as a distributed application by using a specific tool provided with the MPI implementation. A typical installation that supports the execution of MPI applications is a cluster: MPI is normally installed on the shared file system, and an MPI daemon is started on each node of the cluster to coordinate the parallel execution of MPI applications. Once the environment is set up, parallel applications can be run with the tools provided by the MPI implementation, specifying options such as the number of nodes to use.
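
A minimal master-worker sketch in Python using the mpi4py binding; this is an illustration under the assumption that mpi4py is available, not the notes' own example. Rank 0 acts as the master, distributing chunks of work and combining the partial results; environment setup happens on import and shutdown at exit, mirroring the two operations described above.

# Run with: mpiexec -n 4 python mpi_master_worker.py  (file name is illustrative)
from mpi4py import MPI  # importing mpi4py sets up the MPI environment

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # identifier of this MPI process
size = comm.Get_size()  # total number of MPI processes

if rank == 0:
    # Master (rank 0): hand one chunk of work to each worker ...
    for w in range(1, size):
        comm.send(list(range(w * 10, w * 10 + 10)), dest=w, tag=1)
    # ... then collect and combine the partial results.
    total = sum(comm.recv(source=w, tag=2) for w in range(1, size))
    print("combined result:", total)
else:
    # Worker: receive a chunk, perform the same task, return the result.
    chunk = comm.recv(source=0, tag=1)
    comm.send(sum(chunk), dest=0, tag=2)
# The MPI environment is shut down automatically when the program exits.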
(10 MARKS)
Q1. Explain the different techniques for parallel computation with Threads

Ans: There are two main techniques (a sketch of the first follows below):
1. Domain decomposition: the input data of the problem is partitioned into chunks of roughly equal size, and each thread applies the same computation to its own chunk.
2. Functional decomposition: the computation is partitioned by function, so that each thread carries out a different task of the overall application.
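
A short sketch of domain decomposition with threads, assuming a toy element-wise computation (squaring a list of numbers); all names here are illustrative.

import threading

data = list(range(1_000))   # the problem domain
results = [0] * len(data)   # shared output, one slot per element
NUM_THREADS = 4

def square_slice(start, end):
    # Every thread runs the *same* operation on its own slice of the domain;
    # the slices are disjoint, so no locking is needed.
    for i in range(start, end):
        results[i] = data[i] * data[i]

chunk = len(data) // NUM_THREADS
threads = []
for t in range(NUM_THREADS):
    start = t * chunk
    end = len(data) if t == NUM_THREADS - 1 else start + chunk
    threads.append(threading.Thread(target=square_slice, args=(start, end)))

for th in threads:
    th.start()
for th in threads:
    th.join()
print(results[:5])  # [0, 1, 4, 9, 16]

Under functional decomposition, each thread would instead run a different function, for example one thread reading input while another processes it.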
Q2. Explain workflow technologies with diagram
Ans: Design tools allow users to visually compose a workflow application. This specification is normally stored in the form of an XML document based on a specific workflow language, and it constitutes the input of the workflow engine, which controls the execution of the workflow by leveraging a distributed infrastructure. In most cases the workflow engine is a client-side component that might interact directly with resources or with one or several middleware components for executing the workflow. Some frameworks can natively support the execution of workflow applications by providing a scheduler capable of directly processing the workflow specification. Some of the most relevant technologies for designing and executing workflow-based applications are Kepler, DAGMan, Cloudbus Workflow Management System, and Offspring.
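
The core idea these engines implement can be sketched in a few lines of Python: a workflow is a DAG of tasks, and the engine executes a task only once all the tasks it depends on have completed. The task names and dependency table below are hypothetical, not taken from any of the systems listed.

# Toy workflow engine: execute a task DAG in dependency order.
tasks = {
    "A": [],          # A has no dependencies
    "B": ["A"],       # B runs after A
    "C": ["A"],       # C runs after A
    "D": ["B", "C"],  # D runs after both B and C
}

def run(name):
    print("executing task", name)

done = set()
while len(done) < len(tasks):
    # A task is ready when all of its parent tasks have completed.
    ready = [t for t, deps in tasks.items()
             if t not in done and all(d in done for d in deps)]
    for t in ready:
        run(t)
        done.add(t)

A real workflow engine additionally dispatches ready tasks to a distributed infrastructure instead of running them locally.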
