
School of Computing Science

Simon Fraser University

CMPT 300: Operating Systems I

Ch 6: Process Synchronization

Dr. Mohamed Hefeeda

1
Objectives

 Understand
 The Critical-Section Problem
 And its hardware and software solutions

2
Consumer-Producer Problem

 Classic example of process coordination


 Two processes sharing a buffer
 One places items into the buffer (producer)
 Must wait if the buffer is full
 The other takes items from the buffer (consumer)
 Must wait if buffer is empty
 Solution: Keep a counter of the number of items in the buffer

3
Producer Process

while (true) {
    /* produce an item in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

4
Consumer Process

while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume item in nextConsumed */
}

What can go wrong with this solution?


5
Race Condition

 count++ could be implemented as


 register1 = count
 register1 = register1 + 1
 count = register1
 count-- could be implemented as
 register2 = count
 register2 = register2 - 1
 count = register2
 Consider this execution interleaving with “count = 5” initially:
S0: producer executes register1 = count {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = count {register2 = 5}
S3: consumer executes register2 = register2 - 1 {register2 = 4}
S4: producer executes count = register1 {count = 6 }
S5: consumer executes count = register2 {count = 4}
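The effect is easy to reproduce. Below is a minimal sketch (an addition, not from the slides) that increments and decrements a shared count from two pthreads with no synchronization; the iteration constant is an arbitrary choice. The final value is frequently nonzero, which is exactly the lost update shown above.

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

static int count = 0;                 /* shared, deliberately unprotected */

static void *producer(void *arg) {
    for (int i = 0; i < ITERS; i++)
        count++;                      /* read-modify-write, not atomic */
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITERS; i++)
        count--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %d (expected 0)\n", count);   /* often nonzero */
    return 0;
}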

6
Race Condition

 Occurs when multiple processes manipulate shared data concurrently
and the result depends on the particular order of manipulation
 Data inconsistency may arise
 Solution idea
 Mark code segment that manipulates shared data as critical
section
 If a process is executing its critical section, no other processes can
execute their critical sections

 More formally, any method that solves the Critical-Section
Problem must satisfy three requirements …

7
Critical-Section (CS) Problem
1. Mutual Exclusion - If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there
exist some processes that wish to enter their critical section, then
selection of the process that will enter the critical section next cannot
be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted
 Assumptions
 Each process executes at a nonzero speed
 No restriction on the relative speed of the N processes

8
Solutions for CS Problem

 Disable interrupts while running the CS


 Currently running code would execute without
preemption
 Possible only on uniprocessor systems. Why?
• Every processor has its own interrupts
• Disabling interrupts in all processors is inefficient
 Any problems with this solution even on uniprocessor
systems?
• Users could make the CS arbitrarily large → unresponsive system

 Solutions using software only


 Solutions using hardware support

9
Peterson’s Solution

 Software solution; no hardware support


 Two process solution
 Assume LOAD and STORE instructions are atomic (i.e.,
cannot be interrupted)
 may not always be true in modern computers
 The two processes share two variables:
 int turn;
 Boolean flag[2];
 turn indicates whose turn it is to enter critical section
 The flag array indicates whether a process is ready to
enter critical section
 flag[i] = true ==> process Pi is ready

10
Algorithm for Process Pi

while (true) {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;
    CRITICAL SECTION
    flag[i] = FALSE;
    REMAINDER SECTION
}
Does this algorithm satisfy the three
requirements?
Yes. Show this as an exercise.
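As a concrete illustration (an addition, not part of the original slides), the sketch below runs Peterson's algorithm between two pthreads. C11 atomics are used so that the LOAD/STORE atomicity and ordering assumption actually holds on modern hardware; with plain variables the compiler or CPU may reorder the flag/turn accesses and break the algorithm. The iteration count and variable names are illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int  turn;
static int shared_counter = 0;          /* protected by Peterson's algorithm */

static void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);   /* I want to enter */
        atomic_store(&turn, j);         /* but let the other process go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy wait */
        shared_counter++;               /* critical section */
        atomic_store(&flag[i], false);  /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", shared_counter);
    return 0;
}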
11
Synchronization Hardware

 Many systems provide hardware support for critical section
code → more efficient and easier for programmers

 Modern machines provide special atomic (non-interruptible)
hardware instructions
 Either test a memory word and set value
 Or swap contents of two memory words

12
TestAndSet Instruction

Definition:

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

13
Solution using TestAndSet
 Shared boolean variable lock, initialized to false
while (true) {
    while ( TestAndSet (&lock) )
        ;   /* do nothing */
    // critical section
    lock = FALSE;
    // remainder section
}

Does this algorithm satisfy the three requirements?

NO. Bounded waiting is not satisfied: a process can wait indefinitely
while other (faster) processes repeatedly enter their critical sections.
Check Fig. 6.8 for a modified version.
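For reference, here is a sketch of a bounded-waiting variant in the spirit of that figure (an illustrative reconstruction, not a verbatim copy): each process announces that it is waiting, and on exit the leaving process hands the lock directly to the next waiting process in cyclic order, so no process waits more than n - 1 turns. A shared array waiting[n] (all FALSE initially) is assumed in addition to lock.

/* Shared: boolean waiting[n] = {FALSE,...}, boolean lock = FALSE */
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);    /* spin until granted the lock */
    waiting[i] = FALSE;

    /* critical section */

    j = (i + 1) % n;                /* look for the next waiting process */
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;               /* nobody waiting: release the lock */
    else
        waiting[j] = FALSE;         /* pass the CS directly to Pj */

    /* remainder section */
} while (TRUE);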
14
Swap Instruction

Definition:
void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

15
Solution using Swap

 Shared boolean variable lock initialized to FALSE;


 Each process has a local boolean variable key

while (true) {
    key = TRUE;
    while (key == TRUE)
        Swap (&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
}

16
Semaphore

 Much easier to use than hardware-based solutions


 Semaphore S – integer variable
 Two standard operations to modify S:
 wait()
 signal()
 These two operations are indivisible (atomic)

17
Semaphore Operations

wait (S) {
    while (S <= 0)
        ;   // no-op, called busy waiting (spinlock)
    S--;
}

signal (S) {
    S++;
}
 Later, we will see how to implement these
operations with no busy waiting
18
Semaphore Types and Usage

 Two types
 Counting semaphore: can be any integer value
 Binary semaphore: can be 0 or 1 (known as mutex locks)

 Usage examples:

 Mutual exclusion
Semaphore mutex; // initialized to 1
wait (mutex);
Critical Section
signal (mutex);

19
Semaphore Types and Usage (cont’d)

 Process synchronization: S2 in P2 should be executed
after S1 in P1
P1:
S1;
signal (sem);
P2:
wait (sem);
S2;
 Control access to a resource with finite number of
instances
 e.g., producer-consumer problem with finite buffer

20
Semaphore Implementation with no Busy Waiting

 Each semaphore has:


 value (of type integer)
 waiting queue

 Two operations:
 block – place the process invoking the operation in the waiting
queue
 wakeup – remove one of the processes from the waiting queue
and place it in the ready queue

21
Semaphore Implementation with no Busy Waiting
wait (S) {
    value--;
    if (value < 0) {
        add this process to waiting queue
        block();
    }
}

signal (S) {
    value++;
    if (value <= 0) {   /* some processes are waiting */
        remove a process P from waiting queue
        wakeup(P);
    }
}
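For comparison, a user-level counting semaphore can be built on a Pthreads mutex and condition variable. The sketch below is an illustration under that assumption (the names csem_t, csem_wait, csem_signal are made up), not the kernel implementation the slides describe. It keeps value non-negative instead of letting it go negative, but the idea is the same: pthread_cond_wait plays the role of block() and pthread_cond_signal the role of wakeup().

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  nonneg;           /* waiters block here, no spinning */
} csem_t;

void csem_init(csem_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonneg, NULL);
}

void csem_wait(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                 /* block instead of busy waiting */
        pthread_cond_wait(&s->nonneg, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonneg);      /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}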
22
Semaphore Implementation
 Must guarantee that no two processes can execute wait and
signal on same semaphore at same time
 Thus, implementation becomes the critical section problem
where wait and signal code are placed in critical section,
and protected by
 Disabling interrupts (uniprocessor systems only)
 Busy waiting or spinlocks (multiprocessor systems)
 Well, why do we not do the same in applications?
 Applications may spend a long (and unknown) amount of time in
critical sections, unlike the kernel, which spends a short and known
amount of time in its critical sections (~ ten instructions)

23
Deadlock and Starvation

 Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
 Starvation – indefinite blocking. A process may never be removed
from the semaphore queue in which it is suspended

24
Classical Problems of Synchronization

 Bounded-Buffer (Producer-Consumer) Problem


 Dining-Philosophers Problem
 Readers-Writers Problem

 These problems are


 abstractions that can be used to model many other resource
sharing problems
 used to test newly proposed synchronization schemes

25
Bounded-Buffer Problem

 N buffers, each can hold one item


 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N

26
Bounded Buffer Problem (cont’d)

 Structure of the producer process


while (true) {
    // produce an item
    wait (empty);
    wait (mutex);
    // add item to buffer
    signal (mutex);
    signal (full);
}
27
Bounded Buffer Problem (cont’d)

 Structure of the consumer process


while (true) {
    wait (full);
    wait (mutex);
    // remove an item from buffer
    signal (mutex);
    signal (empty);
    // consume removed item
}
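Putting the two structures together, a runnable sketch using POSIX semaphores and a Pthreads mutex might look as follows; BUFFER_SIZE, the integer items, and the iteration count are illustrative assumptions, not part of the slides.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 8
#define ITEMS       32

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;

static sem_t empty, full;             /* counting semaphores */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty);             /* wait (empty) */
        pthread_mutex_lock(&mutex);   /* wait (mutex) */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex); /* signal (mutex) */
        sem_post(&full);              /* signal (full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);              /* wait (full) */
        pthread_mutex_lock(&mutex);   /* wait (mutex) */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex); /* signal (mutex) */
        sem_post(&empty);             /* signal (empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, BUFFER_SIZE); /* empty = N */
    sem_init(&full, 0, 0);            /* full  = 0 */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}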
28
Dining-Philosophers Problem

 Philosophers alternate between eating and thinking


 To eat, a philosopher needs two chopsticks (at her left and right)
 Models multiple processes sharing multiple resources
 Write a program for each philosopher s.t. no starvation/deadlock occurs
 Solution approach:
 Bowl of rice (data set)
 Array of semaphores: chopstick [5] initialized to 1

29
Dining-Philosophers Problem: Philosopher i

while (true) {
    wait ( chopstick[i] );
    wait ( chopstick[ (i + 1) % 5 ] );
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5 ] );
    // think
}
 What can go wrong with this solution?
 All philosophers pick their left chopsticks at same time
(deadlock)
 Solutions?
 Pick chopsticks only if both are available
 Asymmetric: odd philosopher picks left chopstick first, even
picks right first
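A sketch of the asymmetric variant, in the same wait/signal style as the code above (illustrative; only the parity test is new). Because not all philosophers reach for the same-side chopstick first, the circular wait that caused the deadlock cannot form.

// chopstick[5]: semaphores initialized to 1, as before
while (true) {
    if (i % 2 == 1) {                        // odd philosopher: left first
        wait ( chopstick[i] );
        wait ( chopstick[ (i + 1) % 5 ] );
    } else {                                 // even philosopher: right first
        wait ( chopstick[ (i + 1) % 5 ] );
        wait ( chopstick[i] );
    }
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5 ] );
    // think
}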

30
Readers-Writers Problem

 A data set is shared among a number of concurrent
processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time, but
only a single writer may access the shared data at any time
 Shared Data
 Data set
 Semaphore mutex initialized to 1
 Semaphore wrt initialized to 1
 Integer readcount initialized to 0

31
Readers-Writers Problem (cont’d)

 The structure of a writer process

while (true) {
    wait (wrt);

    // writing is performed

    signal (wrt);
}

32
Readers-Writers Problem (cont’d)
 Idea of reader processes:
 The first reader needs to check that there is no writer in CS
 Other readers access CS right away, but they need to update the current
number of readers in CS (readcount)

while (true) {
    wait (mutex);                       // mutex: protects readcount
    readcount++;
    if (readcount == 1) wait (wrt);     // first reader locks out writers
    signal (mutex);

    // reading is performed

    wait (mutex);
    readcount--;
    if (readcount == 0) signal (wrt);   // last reader lets writers in
    signal (mutex);
}
33
Readers-Writers Problem (cont’d)

 Some systems implement reader-writer locks


 E.g., Solaris, Linux, Pthreads API
 A process can ask for a reader-writer lock in either read or write
mode

 When would you use reader-writer locks?


 Applications where it is easy to identify reader-only and
writer-only processes
 Applications with more readers than writers

 Tradeoff: cost vs. concurrency


 Reader-writer locks require more overhead to establish than
semaphores, but they provide higher concurrency by allowing
multiple readers in the CS
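For reference, a minimal sketch of the Pthreads reader-writer lock API mentioned above (the reader/writer function names and the shared data are illustrative):

#include <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

void reader(void) {
    pthread_rwlock_rdlock(&rwlock);   /* many readers may hold this at once */
    /* read shared data */
    pthread_rwlock_unlock(&rwlock);
}

void writer(void) {
    pthread_rwlock_wrlock(&rwlock);   /* exclusive access for the writer */
    /* write shared data */
    pthread_rwlock_unlock(&rwlock);
}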

34
Be Careful When Using Semaphores

 Some common programming problems …

 signal (mutex) …. wait (mutex)


• Multiple processes can access CS at the same
time

 wait (mutex) … wait (mutex)


• Processes may block forever

 Forgetting wait (mutex) or signal (mutex)

35
Monitors

 Monitor: High-level abstraction that provides a convenient and effective
mechanism for process synchronization
 Compiler (not programmer) takes care of mutual exclusion
 Only one process may be active within the monitor at a time

monitor monitor_name
{
    // shared variable declarations

    procedure P1 (…) { …. }

    procedure Pn (…) { …… }

    initialization code (….) { … }
}

36
Schematic View of a Monitor

37
Condition Variables

 condition x, y;
 Two operations on a condition variable:
 x.wait () – a process that invokes the operation is
suspended.
 x.signal () – resumes one of processes (if any) that
invoked x.wait ()
 What is the difference between condition variables and
semaphores?
 condition variable: if no process is suspended, signal has no effect
 semaphores: signal always increments semaphore's value
 Condition variables are usually used in monitors to provide a
way to suspend/awake processes
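For example, with Pthreads condition variables (a sketch using the standard API; the names ready, m, c are illustrative) the waiter re-checks its condition under the mutex precisely because a signal delivered while nobody is waiting is simply lost, whereas a semaphore signal would be remembered in the count:

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
int ready = 0;

void waiter(void) {
    pthread_mutex_lock(&m);
    while (!ready)                    /* re-check the condition after waking */
        pthread_cond_wait(&c, &m);    /* like x.wait(): suspend, releasing m */
    pthread_mutex_unlock(&m);
}

void signaler(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&c);          /* like x.signal(): wake one waiter, if any */
    pthread_mutex_unlock(&m);
}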

38
Monitor with Condition Variables

39
Solution to Dining Philosophers
monitor DP
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);                      // check both chopsticks
        if (state[i] != EATING)
            self[i].wait();           // wait until signalled by a neighbor
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
40
Solution to Dining Philosophers (cont’d)

    void test (int i) {
        if ( (state[i] == HUNGRY) &&
             (state[(i + 4) % 5] != EATING) &&
             (state[(i + 1) % 5] != EATING) ) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
} // end monitor
41
Solution to Dining Philosophers (cont’d)

 Each philosopher invokes the operations pickup() and
putdown() in the following sequence:

dp.pickup (i)

EAT

dp.putdown (i)

42
Monitors Implementation

 It is up to the compiler to ensure mutual exclusion in monitors


 Semaphores are usually used
 Languages like Java, C# (not C), and Concurrent Pascal provide monitor-like
mechanisms
 Java
public class SimpleClass {
    ....
    public synchronized void insert(…) { … }
    public synchronized void remove(…) { … }
    ....
}
 Java guarantees that once a thread starts executing a synchronized method, no
other thread can execute any other synchronized method on the same object

 Java 1.5 has semaphores, condition variables, mutex locks, …


 In java.util.concurrent package

 Exercise: write a java solution for the Producer-Consumer problem

43
Synchronization Examples

 Windows XP

 Linux

 Pthreads

44
Windows XP Synchronization

 Masks interrupts to protect access to global resources on
uniprocessor systems (inside kernel)
 Uses spinlocks on multiprocessor systems

 Also provides dispatcher objects for thread synchronization
outside the kernel, which can act as mutexes, semaphores, or
events (condition variables)

45
Linux Synchronization

 Linux:
 disables interrupts to implement short critical sections (on single
processor systems)

 Linux provides:
 Semaphores
 Spinlocks (on SMP)
 Reader-writer locks

46
Pthreads Synchronization
 Pthreads API is OS-independent
 It provides:
 mutex locks
 condition variables
 extensions include:
 semaphores
 read-write locks
 spin locks
 May not be portable

#include <pthread.h>
pthread_mutex_t mutex;

pthread_mutex_init(&mutex, NULL);
pthread_mutex_lock(&mutex);
pthread_mutex_unlock(&mutex);

#include <semaphore.h>
sem_t sem;
sem_init(&sem, 0, 5);
sem_wait(&sem);
sem_post(&sem);
47
Summary
 Process Synchronization
 Techniques to coordinate access to shared data
 Race condition
 Multiple processes manipulating shared data and result depends on
execution order
 Critical section problem
 Three requirements: mutual exclusion, progress, bounded waiting
 Software solution: Peterson’s Algorithm
 Hardware support: TestAndSet(), Swap()
• Busy waiting (or spinlocks)
 Semaphores:
• wait(), signal() must be atomic → moves the CS problem to the
kernel
 Monitors: high-level constructs (compiler)
 Some classical synchronization problems

48
