PDC Lec 8

Uploaded by laraib

Parallel and Distributed Computing
Week 6
Content

• Software Architecture
• Threads and shared memory
Software Architecture

• The architecture of a software system is a metaphor, analogous to the architecture of a building.

• Software architecture refers to the fundamental structures of a software system and the discipline of creating such structures and systems.


Software Architecture cont…

• The architecture of a software system consists of its structures, the decomposition into components, and their interfaces and relationships.

• It describes both the static and the dynamic aspects of the software system, so it can be seen as both the building design and the flow chart for a software product.
Software Architecture cont… different views

1. The conceptual view, which identifies entities and their relationships.

2. The runtime view, which shows the components present at system runtime, e.g., servers or communication connections.

3. The process view, which maps processes at system runtime, looking at aspects like synchronization and concurrency.

4. The implementation view, which describes the system's software artifacts, e.g., subsystems, components, or source code.
Software Architecture cont… to support hardware
Process and Thread

• Process: A program in execution is called a process.

• Thread: A thread is a segment of a process; a process can have multiple threads, and these threads are contained within the process.
Process and Thread cont…

• For example, in a word processor, one thread may check spelling and grammar while another processes user input (keystrokes), yet another loads images from the hard drive, and a fourth performs periodic automatic backups of the file being edited.
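The word-processor example can be sketched as one process running several threads. This is a minimal illustration; the task names below (spell_check and so on) are hypothetical stand-ins, not a real editor API.

```python
# One process, several cooperating threads: each thread does its own
# task while all of them live inside the same process.
import threading

results = []
lock = threading.Lock()

def worker(task_name):
    # Each thread performs its own task; here we just record that it ran.
    with lock:
        results.append(task_name)

tasks = ["spell_check", "process_keystrokes", "load_images", "auto_backup"]
threads = [threading.Thread(target=worker, args=(t,)) for t in tasks]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every thread in this single process to finish

print(sorted(results))  # all four tasks ran within one process
```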
Process and Thread cont… Comparison

Basis                 | Process                                             | Thread
Definition            | A process is a program under execution.             | A thread is a segment of a process.
Weight                | Heavyweight.                                        | Lightweight.
Context switching     | More time required for context switching.           | Less time required for context switching.
Communication         | Communication between processes requires more time. | Communication between threads requires less time.
Resource consumption  | Processes require more resources than threads.      | Threads generally need fewer resources.
Time for creation     | Processes require more time for creation.           | Threads require less time for creation.
Time for termination  | Processes require more time for termination.        | Threads require less time for termination.
Process and Thread cont…

• A process with two threads of execution, running on a single processor.
Multithreading

• In computer architecture, multithreading is the ability of a single central processing unit (CPU) to provide multiple threads of execution concurrently, supported by the operating system.

• Multithreading aims to increase utilization of a single core by using thread-level parallelism.


Multithreading cont…

• Multithreading allows multiple requests to be satisfied simultaneously, without having to service them sequentially.
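As a sketch of this point, the example below serves several simulated requests with a thread pool instead of one at a time. handle_request is a hypothetical stand-in for real request-handling work, not part of any specific server API.

```python
# Serve several I/O-bound requests concurrently with a thread pool,
# rather than handling them one after another.
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(req_id):
    time.sleep(0.05)  # simulate I/O-bound work (e.g., a disk read)
    return f"response-{req_id}"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    # The four sleeps overlap, so the total time is far less than
    # the 0.2 s a sequential loop would take.
    responses = list(pool.map(handle_request, range(4)))
elapsed = time.perf_counter() - start

print(responses, round(elapsed, 3))
```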
Multithreading cont… Advantages

1. Responsiveness: If one thread completes its execution, its output can be returned immediately.

2. Faster context switch: Context-switch time between threads is lower than between processes; process context switching requires more CPU overhead.

3. Effective utilization of a multiprocessor system: If a process has multiple threads, they can be scheduled on multiple processors, making execution faster.
Multithreading cont… Advantages

4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process; stacks and registers cannot be shared.

5. Communication: Communication between multiple threads is easier than communication between processes.

6. Enhanced throughput: As each thread's function is treated as one job, the number of jobs completed per unit time increases, raising the throughput of the system.
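Points 4 and 5 can be sketched as follows: the threads below all update one shared variable inside a single process, with a lock serializing access so no increment is lost. This is an illustrative Python fragment, not part of the lecture's own examples.

```python
# Threads within one process share the same data (the counter below),
# while each thread keeps its own stack and registers. The lock
# prevents lost updates to the shared variable.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # serialize access to the shared data
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: every thread updated the same shared variable
```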
Process and Message Passing

• Numerous programming languages and libraries have been developed for explicit parallel programming in the message-passing paradigm.

• The message-passing programming paradigm is one of the oldest and most widely used approaches for programming parallel computers.


Process and Message Passing cont…

• There are two key attributes that characterize the message-passing programming paradigm.

• The first is that it assumes a partitioned address space; the second is that it supports only explicit parallelization.


Process and Message Passing cont…

• The logical view of a machine supporting the message-passing paradigm consists of p processes, each with its own address space.

• Instances of such a view arise naturally from clustered workstations and non-shared-address-space multicomputers.


Process and Message Passing cont…
Structure of Message-Passing Programs

• Message-passing programs are often written using the:

• Asynchronous paradigm.

• Loosely synchronous paradigm.


Process and Message Passing cont…
Structure of Message-Passing Programs

• In the asynchronous paradigm, all concurrent tasks execute asynchronously (no coordination).

• This makes it possible to implement any parallel algorithm.

• However, such programs can be harder to understand and can have nondeterministic behavior due to race conditions.
Process and Message Passing cont…
Structure of Message-Passing Programs

• Loosely synchronous programs are a good compromise.

• In such programs, tasks or subsets of tasks synchronize to perform interactions.

• Between these interactions, however, tasks execute completely asynchronously.
Process and Message Passing cont…
The Building Blocks: Send and Receive Operations

• Since interactions are accomplished by sending and receiving messages, the basic operations in the message-passing programming paradigm are send and receive.

• In their simplest form, the prototypes of these operations are defined as follows:
Process and Message Passing cont…
The Building Blocks: Send and Receive Operations

• send(void *sendbuf, int nelems, int dest)

• receive(void *recvbuf, int nelems, int source)

• sendbuf points to the buffer that stores the data to be sent.
• recvbuf points to the buffer that stores the data to be received.
• dest is the identifier of the process that receives the data.
• source is the identifier of the process that sends the data.
Process and Message Passing cont…
The Building Blocks: Send and Receive Operations

[Code fragment: process P0 assigns a = 100, sends a to process P1, and then sets a = 0; P1 receives the value into a.]
Process and Message Passing cont…
The Building Blocks: Send and Receive Operations

• The important thing to note is that process P0 changes the value of a to 0 immediately following the send.

• The semantics of the send operation require that the value received by process P1 must be 100, as opposed to 0.


Process and Message Passing cont…
The Building Blocks: Send and Receive Operations

• Most message-passing platforms have additional hardware support for sending and receiving messages:

1. They may support DMA (direct memory access), and

2. Asynchronous message transfer using network interface hardware.


Process and Message Passing cont…
The Building Blocks: Send and Receive Operations

• Network interfaces allow the transfer of messages from buffer memory to the desired location without CPU intervention.

• Similarly, DMA allows copying of data from one memory location to another (e.g., communication buffers) without CPU support (once it has been programmed).


Process and Message Passing cont…
The Building Blocks: Send and Receive Operations

• As a result, if the send operation programs the communication hardware and returns before the communication has been accomplished, process P1 might receive the value 0 in a instead of 100!
Process and Message Passing cont…
Blocking Message Passing Operations

• A simple solution to the dilemma presented in the code fragment above is for the send operation to return only when it is semantically safe to do so.

• This simply means that the send operation blocks until it can guarantee that the semantics will not be violated on return, irrespective of what happens in the program subsequently.
Process and Message Passing cont…
Blocking Message Passing Operations

• There are two mechanisms by which this can be achieved:

1. Blocking non-buffered send/receive

2. Blocking buffered send/receive


Process and Message Passing cont…
Blocking Message Passing Operations

• Blocking non-buffered send/receive: In the first case, the send operation does not return until the matching receive has been encountered at the receiving process.

• When this happens, the message is sent, and the send operation returns upon completion of the communication operation.


Process and Message Passing cont…
Blocking Message Passing Operations

• Typically, this involves a handshake between the sending and receiving processes: the sending process sends a request to communicate to the receiving process.

• Since no buffers are used at either the sending or receiving end, this is also referred to as a non-buffered blocking operation.
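The rendezvous described above can be modeled with a queue plus join()/task_done(): the sender does not return from blocking_send until the receiver has actually taken the message. This is only an imitation of the handshake using threads, not real non-buffered network transfer.

```python
# Model a blocking non-buffered send as a rendezvous: the sender's
# channel.join() returns only after the receiver calls task_done(),
# i.e., only after the matching receive has happened.
import queue
import threading

channel = queue.Queue()
order = []

def blocking_send(msg):
    channel.put(msg)
    channel.join()  # blocks until the receiver has taken the message
    order.append("send returned")

def receiver():
    msg = channel.get()
    order.append(f"received {msg}")
    channel.task_done()  # completes the handshake, releasing the sender

r = threading.Thread(target=receiver)
r.start()
blocking_send(100)
r.join()
print(order)  # the receive is always recorded before the send returns
```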


Process and Message Passing cont…
Blocking Message Passing Operations

[Figure: timing of blocking non-buffered send/receive in three cases — (a) the send is posted before the matching receive, (b) both are posted at roughly the same time, and (c) the receive is posted first.]
Process and Message Passing cont…
Blocking Message Passing Operations

• In cases (a) and (c), we notice that there is considerable idling at the sending and receiving processes.

• It is also clear from the figures that a blocking non-buffered protocol is suitable when the send and receive are posted at roughly the same time.
Process and Message Passing cont…
Blocking Message Passing Operations

• Blocking buffered send/receive: A simple solution to the idling and deadlocking problems outlined above is to rely on buffers at the sending and receiving ends.

• We start with a simple case in which the sender has a buffer pre-allocated for communicating messages.

• On encountering a send operation, the sender simply copies the data into the designated buffer and returns after the copy has completed.
Process and Message Passing cont…
Blocking Message Passing Operations

• The sender process can now continue with the program, knowing that any changes to the data will not impact program semantics.

• Note that at the receiving end, the data cannot be stored directly at the target location, since this would violate program semantics.

• Instead, the data is copied into a buffer at the receiver as well.
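The buffered scheme can be sketched with a bounded queue acting as the pre-allocated communication buffer: the sender copies each message in and continues immediately, blocking only when the buffer is full. Again, this is a thread-based illustration of the idea rather than a real communication subsystem.

```python
# Blocking buffered send/receive: the sender deposits messages into a
# fixed-size buffer and keeps going; it blocks only when the buffer
# is full. The receiver drains messages from the other end.
import queue
import threading

buffer = queue.Queue(maxsize=4)  # pre-allocated communication buffer
received = []

def sender():
    for i in range(8):
        buffer.put(i)  # copies the message in; blocks only if the buffer is full

def receiver():
    for _ in range(8):
        received.append(buffer.get())  # take messages out of the buffer

t_send = threading.Thread(target=sender)
t_recv = threading.Thread(target=receiver)
t_send.start()
t_recv.start()
t_send.join()
t_recv.join()

print(received)  # all eight messages arrive, in FIFO order
```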


Process and Message Passing cont…
Blocking Message Passing Operations

[Figure: blocking buffered transfer (a) in the presence of communication hardware, with buffers at both the send and receive ends; (b) in the absence of communication hardware, where the sender interrupts the receiver and deposits the data in a buffer at the receiver end.]

Thank you
