
UNIT -3

Inter-process Communication (IPC)

Process Synchronization

A situation where several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the particular order in which
the access takes place, is called a race condition.

Producer-Consumer Problem

This is a classic paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
To allow producer and consumer processes to run concurrently, we must have
available a buffer of items that can be filled by the producer and emptied by the consumer.
A producer can produce one item while the consumer is consuming another item. The
producer and consumer must be synchronized, so that the consumer does not try to
consume an item that has not yet been produced. In this situation, the consumer must wait
until an item is produced.
✦ unbounded-buffer places no practical limit on the size of the buffer.
✦ bounded-buffer assumes that there is a fixed buffer size.

Bounded-Buffer – Shared-Memory Solution


The producer and consumer processes share the following variables:

#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
This solution is correct, but it can use at most BUFFER_SIZE - 1 elements of the buffer.
The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out.

The code for the producer and consumer processes follows. The producer process has a local variable nextProduced in which the new item to be produced is stored:
Bounded-Buffer – Producer Process

item nextProduced;

while (1) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded-Buffer – Consumer Process

item nextConsumed;
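
The consumer's loop mirrors the producer's busy-wait; the standard completion of this fragment is:

while (1) {
    while (in == out)
        ; /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}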

The critical section problem

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a
segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system
is that, when one process is executing in its critical section, no other process is to be
allowed to execute in its critical section. Thus, the execution of critical sections by the
processes is mutually exclusive in time. The critical-section problem is to design a
protocol that the processes can use to cooperate. Each process must request permission to
enter its critical section. The section of code implementing this request is the entry section.
The critical section may be followed by an exit section. The remaining code is the
remainder section.

do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other


processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder section can participate in the decision on which will enter its critical section
next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.

Peterson’s solution

Peterson’s solution is a software-based solution to the critical-section problem.


Consider two processes P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j == 1 - i. The processes share two variables:

boolean flag[2];
int turn;

Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but is either 0 or 1). The structure of process Pi is shown below.
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; /* do nothing */
        critical section
    flag[i] = false;
        remainder section
} while (1);

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur, but will be overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its critical section first.
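
As a concrete illustration (not part of the original text), here is a minimal runnable C sketch of Peterson's algorithm using two POSIX threads. C11 sequentially consistent atomics are used because, on modern hardware, plain variables may be reordered and the algorithm would then fail:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

atomic_bool flag[2];   /* flag[i]: Pi wants to enter its critical section */
atomic_int turn;       /* whose turn it is to defer */
int counter = 0;       /* shared data protected by the lock */

void enter(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ; /* busy-wait */
}

void leave(int i) {
    atomic_store(&flag[i], false);
}

void *worker(void *arg) {
    int i = (int)(intptr_t)arg;
    for (int k = 0; k < 100000; k++) {
        enter(i);
        counter++;     /* critical section */
        leave(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)(intptr_t)0);
    pthread_create(&t1, NULL, worker, (void *)(intptr_t)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}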
We now prove that this solution is correct. We need to show that:
1. Mutual exclusion is preserved,
2. The progress requirement is satisfied,
3. The bounded-waiting requirement is met.
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1, but not both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j"). However, since at that time flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section, the result follows: mutual exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).

Semaphores

The solutions to the critical-section problem presented before are not easy to
generalize to more complex problems. To overcome this difficulty, we can use a
synchronization tool called a semaphore. A semaphore S is an integer variable that, apart
from initialization, is accessed only through two standard atomic operations: wait and
signal. These operations were originally termed P (for wait; from the Dutch proberen, to
test) and V (for signal; from verhogen, to increment). The classical definition of wait in
pseudocode is

wait(S) {
    while (S <= 0)
        ; /* no-op */
    S--;
}

The classical definition of signal in pseudocode is

signal(S) {
    S++;
}

Modifications to the integer value of the semaphore in the wait and signal operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S <= 0) and its possible modification (S--) must also be executed without interruption.

Usage
We can use semaphores to deal with the n-process critical-section problem. The n processes share a semaphore, mutex (standing for mutual exclusion), initialized to 1. Each process Pi is organized as follows:

do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);

We can also use semaphores to solve various synchronization problems. For example, consider two concurrently running processes: P1 with a statement S1 and P2 with a statement S2. Suppose that we require that S2 be executed only after S1 has completed. We can implement this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0, and by inserting the statements

S1;
signal(synch);

in process P1, and the statements

wait(synch);
S2;

in process P2. Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch), which is after S1.
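
A minimal runnable sketch of this S1-before-S2 scheme using POSIX semaphores (sem_init, sem_wait, sem_post); the printf calls stand in for the statements S1 and S2:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t synch;                 /* initialized to 0, as in the text */

void *p1(void *arg) {
    printf("S1\n");          /* statement S1 */
    sem_post(&synch);        /* signal(synch) */
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&synch);        /* wait(synch): blocks until P1 signals */
    printf("S2\n");          /* statement S2 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}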

Semaphores are of two types −


1. Binary semaphore
2. Counting semaphore
Binary Semaphore
A binary semaphore is a semaphore whose integer value ranges only over 0 and 1, whereas a counting semaphore's integer value ranges over an unrestricted domain. Binary semaphores are easier to implement than counting semaphores. A binary semaphore allows only one thread to access the resource at a time, while a counting semaphore allows up to N accesses at a time. The two operations defined for binary semaphores are take and release.
Take: Taking a binary semaphore brings it into the “taken” state; trying to take a semaphore that is already taken puts the invoking thread into a waiting queue.
Release: Releasing a binary semaphore brings it into the “not taken” state if there are no queued threads. If there are queued threads, then one thread is removed from the queue and resumed, and the binary semaphore remains in the “taken” state. Releasing a semaphore that is already in its “not taken” state has no effect.
Binary semaphores have no ownership attribute and can be released by any thread or interrupt handler, regardless of which thread performed the last take operation. Because of this, binary semaphores are often used to synchronize threads with external events implemented as ISRs, for example waiting for a packet from a network or waiting for a button press.
Counting Semaphore:
A counting semaphore may have a value greater than one; it is typically used to allocate resources from a pool of identical resources. A counting semaphore is a synchronization object that can have an arbitrarily large number of states. The internal state is defined by a signed integer variable, the counter.
The counter value (N) has a precise meaning:
a. Negative: there are exactly -N threads queued on the semaphore.
b. Zero: there are no waiting threads; a wait operation would put the invoking thread in the queue.
c. Positive: there are no waiting threads; a wait operation would not queue the invoking thread.
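
As a concrete illustration, a counting semaphore initialized to the pool size can guard a pool of identical resources; a minimal POSIX sketch (the pool size 3 is an arbitrary example):

#include <semaphore.h>

sem_t pool;

void pool_setup(void) {
    sem_init(&pool, 0, 3);   /* three identical resources available */
}

void use_resource(void) {
    sem_wait(&pool);         /* blocks when all resources are taken */
    /* ... use one resource from the pool ... */
    sem_post(&pool);         /* return the resource to the pool */
}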

Classic Problems of Synchronization

We present a number of different synchronization problems as examples of a large class of concurrency-control problems. These problems are used for testing nearly every newly proposed synchronization scheme. Semaphores are used for synchronization in our solutions.

The Bounded-Buffer Problem

The bounded-buffer problem is commonly used to illustrate the power of synchronization primitives. We present here a general structure of this scheme, without committing ourselves to any particular implementation. We assume that the pool consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers, respectively. The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.
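The code for the producer process, following the classic semaphore structure, is (a sketch; nextp and nextc denote the producer's and consumer's local item variables):

do {
    /* produce an item in nextp */
    wait(empty);
    wait(mutex);
    /* add nextp to buffer */
    signal(mutex);
    signal(full);
} while (1);

The code for the consumer process is

do {
    wait(full);
    wait(mutex);
    /* remove an item from buffer to nextc */
    signal(mutex);
    signal(empty);
    /* consume the item in nextc */
} while (1);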

The Readers-Writers Problem

A data object (such as a file or record) is to be shared among several concurrent processes. Some of these processes may want only to read the content of the shared object, whereas others may want to update (that is, to read and write) the shared object. We distinguish between these two types of processes by referring to those processes that are interested in only reading as readers, and to the rest as writers. Obviously, if two readers access the shared data object simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the shared object simultaneously, chaos may ensue.
To ensure that these difficulties do not arise, we require that the writers have
exclusive access to the shared object. This synchronization problem is referred to as the
readers-writers problem. Since it was originally stated, it has been used to test nearly every
new synchronization primitive. The readers-writers problem has several variations, all
involving priorities. The simplest one, referred to as the first readers-writers problem,
requires that no reader will be kept waiting unless a writer has already obtained permission
to use the shared object. In other words, no reader should wait for other readers to finish
simply because a writer is waiting. The second readers-writers problem requires that, once
a writer is ready, that writer performs its write as soon as possible. In other words, if a
writer is waiting to access the object, no new readers may start reading.
A solution to either problem may result in starvation. In the first case, writers may
starve; in the second case, readers may starve. For this reason, other variants of the
problem have been proposed. In this section, we present a solution to the first readers-
writers problem.
In the solution to the first readers-writers problem, the reader processes share the following data structures:

semaphore mutex, wrt;
int readcount;

The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The semaphore wrt is common to both reader and writer processes. The mutex semaphore ensures mutual exclusion when readcount is updated; readcount keeps track of how many processes are currently reading the object.

The code for a writer process is


do {
    wait(wrt);
        ...
        writing is performed
        ...
    signal(wrt);
} while (1);
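
The code for a reader process, completing the standard first readers-writers solution, is

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);       /* first reader locks out writers */
    signal(mutex);
        ...
        reading is performed
        ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);     /* last reader readmits writers */
    signal(mutex);
} while (1);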

The Dining-Philosophers Problem

Consider five philosophers who spend their lives thinking and eating. The
philosophers share a common circular table surrounded by five chairs, each belonging to
one philosopher. In the center of the table is a bowl of rice, and the table is laid with five
single chopsticks. When a philosopher thinks, she does not interact with her colleagues.
From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that
are closest to her (the chopsticks that are between her and her left and right neighbors). A
philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a
chopstick that is already in the hand of a neighbor. When a hungry philosopher has both
her chopsticks at the same time, she eats without releasing her chopsticks. When she is
finished eating, she puts down both of her chopsticks and starts thinking again.
The structure of philosopher i is

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
        ...
        eat
        ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
        ...
        think
        ...
} while (1);
The dining-philosophers problem is considered a classic synchronization problem, neither
because of its practical importance nor because computer scientists dislike philosophers,
but because it is an example of a large class of concurrency-control problems. It is a simple
representation of the need to allocate several resources among several processes in a
deadlock- and starvation free manner.
One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab a chopstick by executing a wait operation on that semaphore; she releases her chopsticks by executing the signal operation on the appropriate semaphores. Note, however, that this simple solution can create a deadlock: if all five philosophers become hungry simultaneously and each grabs her left chopstick, each will wait forever for her right one.

Monitors

Another high-level synchronization construct is the monitor type. A monitor is characterized by a set of programmer-defined operators. The representation of a monitor type consists of declarations of variables whose values define the state of an instance of the type, as well as the bodies of procedures or functions that implement operations on the type. The syntax of a monitor takes the form sketched below.
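A sketch of the standard textbook form (the original figure is not reproduced here):

monitor monitor_name
{
    /* shared variable declarations */

    procedure P1(...) {
        ...
    }

    procedure P2(...) {
        ...
    }

    ...

    procedure Pn(...) {
        ...
    }

    initialization code(...) {
        ...
    }
}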
The representation of a monitor type cannot be used directly by the various processes.
Thus, a procedure defined within a monitor can access only those variables declared
locally within the monitor and its formal parameters. Similarly, the local variables of a
monitor can be accessed by only the local procedures.
The monitor construct ensures that only one process at a time can be active within the
monitor. Consequently, the programmer does not need to code this synchronization
constraint explicitly. However, the monitor construct, as defined so far, is not sufficiently
powerful for modeling some
synchronization schemes. For this purpose, we need to define additional synchronization
mechanisms. These mechanisms are provided by the condition construct. A
programmer who needs to write her own tailor-made synchronization scheme can define
one or more variables of type condition:
condition x, y;
The only operations that can be invoked on a condition variable are wait and signal. The operation x.wait() means that the process invoking this operation is suspended until another process invokes x.signal(). The x.signal() operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect; that is, the state of x is as though the operation were never executed. Contrast this operation with the signal operation associated with semaphores, which always affects the state of the semaphore.
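
A minimal sketch of how a monitor procedure might use a condition variable; the ResourceAllocator name and its single busy flag are illustrative, not from the original text:

monitor ResourceAllocator
{
    boolean busy;
    condition x;

    procedure acquire() {
        if (busy)
            x.wait();     /* suspend until the resource is released */
        busy = true;
    }

    procedure release() {
        busy = false;
        x.signal();       /* resume one waiting process, if any */
    }

    initialization code() {
        busy = false;
    }
}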
Now suppose that, when the x.signal() operation is invoked by a process P, there is a suspended process Q associated with condition x. Clearly, if the suspended process Q is allowed to resume its execution, the signaling process P must wait. Otherwise, both P and Q would be active simultaneously within the monitor. Note, however, that both processes can conceptually continue with their execution. Two possibilities exist:
1. P either waits until Q leaves the monitor, or waits for another condition.
2. Q either waits until P leaves the monitor, or waits for another condition.
There are reasonable arguments in favor of adopting either option 1 or option 2. Since P was already executing in the monitor, choice 2 seems more reasonable. However, if we allow process P to continue, the "logical" condition for which Q was waiting may no longer hold by the time Q is resumed. Choice 1 was advocated by Hoare, mainly because the preceding argument in favor of it translates directly to simpler and more elegant proof rules. A compromise between these two choices was adopted in the language Concurrent Pascal: when process P executes the signal operation, it immediately leaves the monitor, and Q is immediately resumed. This model is less powerful than Hoare's, because a process cannot signal more than once during a single procedure call.
