UNIT-6
Resource and Process Management
Weightage: 15%
Distributed Computing System (030090716)
Topics to be covered
1. Resource Management:
   1. Desirable features of a good global scheduling algorithm
   2. Task assignment
   3. Load balancing
   4. Load sharing
2. Process Management:
   1. Process migration:
      1. Desirable features of process migration
      2. Process migration mechanisms
      3. Advantages of process migration
   2. Threads:
      1. Motivation for using threads
      2. Models for organizing threads
      3. Issues in designing a thread package
Introduction of Resource Management
Distributed systems are characterized by resource multiplicity and
system transparency.
Every distributed system consists of a number of resources
interconnected by a network.
Besides providing communication facilities, the network facilitates
resource sharing by migrating a local process and executing it at a
remote node of the network.
The resource manager of the distributed operating system controls the assignment of resources to processes.
A resource can be logical, such as a shared file, or physical, such as a CPU.
For our purposes, we will consider a resource to be a CPU (processor) or a node.
Scheduling in Distributed Systems
A resource manager schedules the processes in a distributed
system to make use of the system resources.
Scheduling aims to optimize resource usage, response time, and network congestion. Scheduling can be broadly classified into three types:
1. Task assignment approach
• In which each process is viewed as a collection of related tasks.
• These tasks are scheduled to suitable nodes so as to improve
performance.
2. Load-balancing approach
• In which all the processes submitted by the users are
distributed among the nodes of the system to equalize the
workload among the nodes.
3. Load-sharing approach
• Which simply attempts to conserve the ability of the system to perform work by ensuring that no node is idle while processes wait to be processed.
• The scheduling algorithms that fall in this category do not normally take care of the dynamically changing state of the system.
• Before presenting a description of each of these techniques, the desirable features of a good global scheduling algorithm are presented.
Desirable Features Of A Good Global Scheduling Algorithm
1. No Prior Knowledge about the Processes
2. Dynamic In Nature
3. Quick Decision-Making Capability
4. Balanced System Performance and Scheduling Overhead
5. Stability
6. Scalability
7. Fault Tolerance
8. Fairness of Service
Task Assignment Approach
Each process is divided into multiple tasks.
These tasks are scheduled to suitable processors to improve performance.
It requires the characteristics of all the processes to be known in advance.
This approach does not take into consideration the dynamically changing state of the system.
In this approach, a process is considered to be composed of multiple tasks, and the goal is to find an optimal assignment policy for the tasks of an individual process.
Assumptions For Task Assignment Approach
A process has already been split into pieces called tasks.
The amount of computation required by each task and the speed of each processor are known.
The cost of processing each task on every node of the system is known.
The interprocess communication (IPC) costs between every pair of tasks are known.
Other constraints, such as resource requirements of the tasks, the available resources at each node, precedence relationships among the tasks, and so on, are also known.
Task Assignment Approach
A system with M CPUs and N processes has any of the following
three cases:
• M=N: Each process is allocated to one CPU.
• M>N: Some CPUs may remain idle (free) or work on earlier
allocated processes.
• M<N: There is a need to schedule processes on CPUs, and
several processes may be assigned to each CPU.
The main objectives of performing CPU assignment are to:
• Minimize IPC cost.
• Obtain a quick turnaround time.
• Achieve a high degree of parallelism for efficient utilization.
• Minimize network traffic.
Task Assignment Approach - Example
There are two nodes, {N1, N2} and six tasks {T1, T2, T3, T4, T5, T6}.
Inter-task communication costs:

      T1   T2   T3   T4   T5   T6
T1     0    6    4    0    0   12
T2     6    0    8   12    3    0
T3     4    8    0    0   11    0
T4     0   12    0    0    5    0
T5     0    3   11    5    0    0
T6    12    0    0    0    0    0

Execution costs:

      N1   N2
T1     5   10
T2     2    ∞
T3     4    4
T4     6    3
T5     5    2
T6     ∞    4

Task T6 cannot be executed on node N1 and task T2 cannot be executed on node N2 (marked ∞), since the resources they need are not available on those nodes.
1. Serial assignment, where tasks T1, T2, T3 are assigned to node N1
and tasks T4, T5, T6 are assigned to node N2:
• Execution cost, X = X11 + X21 + X31 + X42 + X52 + X62 = 5 + 2 + 4 + 3 + 2 + 4 = 20
• Communication cost, C = C14 + C15 + C16 + C24 + C25 + C26 + C34 + C35 + C36 = 0 + 0 + 12 + 12 + 3 + 0 + 0 + 11 + 0 = 38
Hence total cost = 20 + 38 = 58.
2. Optimal assignment, where tasks T1, T2, T3, T4, T5 are assigned to node N1 and task T6 is assigned to node N2:
• Execution cost, X = X11 + X21 + X31 + X41 + X51 + X62 = 5 + 2 + 4 + 6 + 5 + 4 = 26
• Communication cost, C = C16 + C26 + C36 + C46 + C56 = 12 + 0 + 0 + 0 + 0 = 12
Hence total cost = 26 + 12 = 38.
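To make the example concrete, here is a minimal brute-force sketch in Python (not from the original slides; the variable names are illustrative). It enumerates all 2^6 assignments of the six tasks to the two nodes using the tables above and confirms that the optimal assignment has total cost 38:

from itertools import product

INF = float("inf")

# Execution cost x[t][n] of task T(t+1) on node N(n+1); INF = cannot run there
x = [[5, 10], [2, INF], [4, 4], [6, 3], [5, 2], [INF, 4]]

# Symmetric inter-task communication costs c[i][j]
c = [
    [0, 6, 4, 0, 0, 12],
    [6, 0, 8, 12, 3, 0],
    [4, 8, 0, 0, 11, 0],
    [0, 12, 0, 0, 5, 0],
    [0, 3, 11, 5, 0, 0],
    [12, 0, 0, 0, 0, 0],
]

def total_cost(assign):
    # execution cost of every task on its assigned node
    exe = sum(x[t][n] for t, n in enumerate(assign))
    # communication cost only for task pairs placed on different nodes
    comm = sum(c[i][j] for i in range(6) for j in range(i + 1, 6)
               if assign[i] != assign[j])
    return exe + comm

best = min(product((0, 1), repeat=6), key=total_cost)
print(best, total_cost(best))  # (0, 0, 0, 0, 0, 1): T1-T5 on N1, T6 on N2, cost 38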
Assignment graph for the assignment problem
In this approach, an optimal assignment is found by creating a static assignment graph, as shown in the figure below.
In this graph, nodes n1 and n2 represent the two nodes
(processors) of the distributed system and nodes t1 through t6
represent the tasks of the process.
• The weights of the edges joining pairs of task nodes represent intertask
communication costs.
• The weight on the edge joining a task node to node n1 represents the
execution cost of that task on node n2 and vice versa.
A cutset in this graph is defined to be a set of edges such that when these edges are removed, the nodes of the graph are partitioned into two disjoint subsets, one containing n1 and the other containing n2.
Each task node belongs to exactly one subset: the tasks in n1's subset are assigned to n1, and those in n2's subset to n2.
The weight of a cutset (the sum of the weights of its edges) equals the total execution-plus-communication cost of the corresponding assignment, so a minimum-weight cutset gives an optimal assignment.
Assignment graph for the assignment problem
(Figure: the assignment graph for the example, with task nodes t1 to t6 joined by communication-cost edges and connected to processor nodes n1 and n2 by execution-cost edges.)
Load Balancing Approach
A good resource allocation scheme is needed for distributed systems; load balancing is one such scheme.
The distribution of load among the processing elements is called load balancing.
The goal of load balancing algorithms is to maintain the load on each processing element such that no processing element becomes overloaded or idle.
Ideally, each processing element has an equal load at any moment during execution, yielding the maximum performance (minimum execution time) of the system.
CPU Scheduling ‐ Conventional
Issue: In a multiprogramming environment (with a single CPU), which job is scheduled on the processor NEXT?
Need: To allocate jobs for execution.
Different scheduling techniques:
1. First Come First Serve
2. Priority Based
3. Round Robin Based
4. Multi Level Feedback Queues
Load (Job) Scheduling
Issue: In a distributed environment, which job is scheduled on which distributed processor?
Need: To allocate jobs for execution.
Load Balancing
Issue: Redistribution of processes/tasks in the DCS.
Redistribution: Movement of processes from heavily loaded systems to lightly loaded systems.
Need: To improve the distributed system's throughput.
Classification of Load Balancing
Load balancing algorithms are classified as follows:
• Static
  • Deterministic
  • Heuristic/Probabilistic
• Dynamic
  • Centralized
  • Distributed
Static Load Balancing
In a static algorithm, processes are assigned to processors at compile time according to the performance of the nodes.
Once the processes are assigned, no change or reassignment is possible at run time.
The number of jobs at each node is fixed in a static load balancing algorithm.
Static algorithms use only information about the average behavior of the system, ignoring its current state.
Static load balancing algorithms can be divided into two subclasses:
1. Optimal static load balancing (deterministic)
2. Sub-optimal static load balancing (probabilistic)
1. Optimal static load balancing algorithm
• If all the information and resources related to the system are known, optimal static load balancing can be done.
2. Sub-optimal static load balancing algorithm
• A sub-optimal load balancing algorithm is used for some applications when an optimal solution cannot be found.
Deterministic vs Probabilistic
Deterministic algorithms are suitable when the process behavior is known in advance.
If all the details, such as the list of processes and their computing, file, and communication requirements, are known prior to execution, then it is possible to make a perfect assignment.
When the load is unpredictable or varies from minute to minute or hour to hour, a probabilistic/heuristic processor allocation is preferred.
Dynamic Load Balancing
In a dynamic load balancing (DLB) algorithm, jobs are assigned at run time.
Jobs are reassigned at run time depending upon the situation.
Load is transferred from heavily loaded nodes to lightly loaded nodes.
No assignment decision is fixed before the process executes.
This strategy collects information about the system state and about the jobs.
As more information is collected, the algorithm can make better decisions.
Centralized Vs Distributed
Scheduling decision is carried out at Scheduling decision is carried out at
one single node called the centralized different nodes called the Distributed
node. nodes.
Every information is available at a Information is distributed at different
single node. nodes.
Centralized approach leads to a Distributed approach handle the
bottleneck as number of requests multiple requests by physically
increases. distributing among various nodes.
If centralized server fails, all scheduling If any node fails, all scheduling will be
in the system will fails. handled by other nodes.
Unit 6: Resource and Process management 25 CGPIT, UTU
Centralized Load Balancing
(Figure: centralized load balancing, with a single node making all scheduling decisions.)
Distributed Load Balancing
(Figure: distributed load balancing, with processes moved directly between nodes.)
Cooperative Vs Non-Cooperative
Cooperative:
• Distributed entities cooperate with each other to make scheduling decisions.
• Cooperative algorithms are more complex.
• Cooperative algorithms are stable.
• They involve larger overhead than non-cooperative algorithms.
Non-cooperative:
• Individual entities make independent scheduling decisions.
• Non-cooperative algorithms are simpler.
• They may not be stable.
• They involve minor overhead.
Issues in Designing Load Balancing Algorithms
1. Load estimation: determines how to estimate the workload of a
node in a distributed system.
2. Process transfer: decides whether the process can be executed
locally or remotely.
3. State information exchange: determines how the system load information can be exchanged among the nodes.
4. Location policy: determines the selection of a destination node
during process migration.
5. Priority assignment: determines the priority of execution of a set
of local and remote processes on a particular node.
6. Migration limiting policy: determines the total number of times a
process can migrate from one node to another.
1. Load estimation:
• Memoryless method
• Past-repeats method
Neither counting the total number of processes nor summing the remaining service times of all processes is a suitable measure of load.
An acceptable method for load estimation is to measure the CPU utilization of the nodes, i.e., to use CPU utilization as the load indicator.
2. Process transfer:
• A threshold value is used to decide whether a node is lightly or heavily loaded.
• Thresholds, perhaps in terms of the number of tasks, are generally used (another possible threshold is processor utilization).
• When the load on a node exceeds a threshold T, the node becomes a sender; when it falls below a threshold, it becomes a receiver (see the sketch below).
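As a concrete illustration of the transfer policy above, here is a minimal Python sketch of a double-threshold classifier. It is not from the slides; the threshold values, loads, and names are illustrative assumptions:

HIGH = 4   # more tasks than this: the node is a sender (overloaded)
LOW = 1    # fewer tasks than this: the node is a receiver (underloaded)

def classify(queue_length: int) -> str:
    """Classify a node by its task queue length."""
    if queue_length > HIGH:
        return "sender"      # should try to transfer a process away
    if queue_length < LOW:
        return "receiver"    # can accept a remote process
    return "ok"              # neither; no transfer initiated

for load in (0, 2, 7):
    print(load, classify(load))   # 0 receiver, 2 ok, 7 sender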
3. State information exchange:
• Periodic broadcast: may not be adaptive; exchanges also occur at high load, worsening system performance.
• Broadcast when state changes: broadcast only when the state changes by a certain degree.
• On-demand exchange: exchange only when a node becomes highly or lightly loaded, i.e., when it becomes a potential sender or receiver.
• Exchange by polling: a node can search for a suitable partner by randomly polling the other nodes one by one.
4. Location policy:
• Threshold: a destination node is selected at random. This continues until either a suitable node is found or the number of probes exceeds a static probe limit (threshold).
• Shortest: poll a set of nodes and select the receiver with the shortest task queue length (see the sketch below).
• Bidding:
  • Manager: a node having a process in need of a location to execute.
  • Contractor: a node that is able to accept the remote process.
• Pairing: two nodes that differ greatly in load are temporarily paired with each other, and load balancing is carried out between them.
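A minimal sketch of the "shortest" policy, assuming illustrative node names and queue lengths (not from the slides):

def shortest(poll_set, queue_len):
    """Return the polled node with the shortest task queue."""
    return min(poll_set, key=queue_len)

queues = {"n1": 3, "n2": 0, "n3": 5}
print(shortest(["n1", "n2", "n3"], queues.get))   # n2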
5. Priority assignment policies:
• Selfish: local processes are given higher priority.
• Altruistic: remote processes are given higher priority.
• Intermediate: depends on the number of local processes and the number of remote processes:
  • if local processes >= remote processes, local processes are given higher priority;
  • if remote processes > local processes, remote processes are given higher priority.
6. Migration limiting policy:
• Uncontrolled: a process may be migrated any number of times.
• Controlled: a migration count parameter fixes a limit on the number of times a process may migrate.
Benefits of Load Balancing
1. Load balancing improves the performance of each node and hence the overall system performance.
2. Load balancing reduces job idle time.
3. Small jobs do not suffer from long starvation.
4. Extensibility and incremental growth.
5. Maximum utilization of resources.
6. Higher throughput.
7. Shorter response times.
Load Sharing Approach
Several researchers believe that load balancing, with its implication of attempting to equalize the workload on all the nodes of the system, is not an appropriate objective.
Rather, it is necessary and sufficient to prevent the nodes from being idle while some other nodes have more than two processes.
Load sharing algorithms ensure that no node is idle or heavily loaded.
The policies for the load sharing approach parallel the load balancing policies: the priority assignment and migration limiting policies are the same, while the load estimation, process transfer, location, and state information exchange policies differ.
Issues in Designing Load Sharing Algorithms
1. Load estimation policies: ensure that no node is idle while processes wait for service at some other node.
• CPU utilization is used as the load indicator.
2. Process transfer policies:
• An all-or-nothing strategy.
• A node becomes a candidate for accepting a remote process only when it has no process.
• A node becomes a candidate for transferring a process only when it has more than one process.
• Some algorithms use a threshold value of 2 instead of 1.
3. State information exchange policies:
A node needs to know the state of other nodes only when it is either overloaded or underloaded.
• Broadcast when state changes.
• Poll when state changes.
4. Location policies:
The location policy decides the sender node or the receiver node of a process that is to be moved within the system for load sharing.
The location policies are of the following types:
1. Sender-initiated policy: the sender node of the process decides where to send it.
• The node with the higher load (the sender) initiates the load sharing process.
2. Receiver-initiated policy: the receiver node of the process decides from where to get it.
• The underloaded node (the receiver) initiates the process and tries to obtain a task from an overloaded node (a sender).
Sender Initiated Location Policy
In the sender-initiated location policy, heavily loaded nodes search for lightly loaded nodes to which work may be transferred.
In this method, when a node's load becomes more than the threshold value, it either broadcasts a message or randomly probes the other nodes one by one to find a lightly loaded node that can accept one or more of its processes.
If a suitable receiver node is not found, the node on which the process originated must execute that process.
(Figure: an overloaded machine decides that it has too much work and asks other nodes, "Please take some work.")
Receiver Initiated Location Policy
In this policy, lightly loaded nodes search for heavily loaded nodes from which processes can be accepted for execution.
When the load on a node falls below a threshold value, it broadcasts a probe message to all nodes or probes nodes one by one to search for a heavily loaded node.
A heavily loaded node may transfer one of its processes if such a transfer does not reduce its load below the normal threshold.
(Figure: an idle machine advertises its availability, offering spare CPU cycles: "I'm free, need help?")
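The sender-initiated probing protocol described above can be sketched as a small simulation. This is illustrative, not an actual implementation; the probe limit, threshold, and node names are assumptions:

import random

PROBE_LIMIT = 3   # assumed static probe limit
THRESHOLD = 4     # assumed load threshold

def sender_initiated_probe(my_load, peers, load_of):
    # Only an overloaded node (load above threshold) initiates probing.
    if my_load <= THRESHOLD:
        return None
    # Randomly probe up to PROBE_LIMIT peers, looking for a lightly loaded one.
    for peer in random.sample(peers, min(PROBE_LIMIT, len(peers))):
        if load_of(peer) < THRESHOLD:
            return peer       # found a receiver that can accept a process
    return None               # no receiver found: execute the process locally

loads = {"n2": 6, "n3": 1, "n4": 5}
print(sender_initiated_probe(7, list(loads), loads.get))  # 'n3', or None if unprobed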
Process Management
1. Process migration
2. Threads
Process Migration
Process migration is a specialized form of process management
whereby processes are moved from one computing environment
to another.
Process migration is the relocation of a process from its current
location(the source node) to another node(the destination node).
Process migration is classified into two types:
1. Non-preemptive: the process is migrated before its execution starts on the source node.
2. Preemptive: the process is migrated during its execution.
Process Migration Mechanism
(Figure: migration timeline; process P1 executes on the source node, is suspended during the freezing time while control is transferred, and then resumes execution on the destination node.)
Process migration involves the following major steps:
1. Selection of a process that should be migrated.
2. Selection of the destination node to which the selected process should be migrated.
3. Actual transfer of the selected process to the destination node.
Desirable Features of a Good Process Migration Mechanism
Transparency
1. Object access level transparency: access to objects such as
files and devices can be done in a location-independent
manner.
2. System call and interprocess communication level transparency: the migrated process does not continue to depend upon its originating node for system calls and interprocess communication.
Minimal Interference
• Minimizing the freezing time of the process being migrated.
Minimal Residual Dependencies
• Migrated process should not continue to depend on its
previous node.
Efficiency
• The main sources of inefficiency are as follows; all these costs should be kept to a minimum:
1. The time required for migrating a process.
2. The cost of locating an object.
3. The cost of supporting remote execution.
Robustness
• The failure of a node other than the one on which a process is
currently running should not in any way affect the accessibility or
execution of that process.
Communication between coprocesses of a Job
• During parallel processing, coprocesses should be able to directly
communicate with each other irrespective of their locations.
Process Migration Mechanisms
1. Freezing the process on its source node and restarting it on its
destination node.
2. Transferring the process's address space from its source node to
its destination node.
3. Forwarding messages meant for the migrant process.
4. Handling communication between cooperating processes that have been separated as a result of process migration.
Freezing and Restarting a Process
1. Mechanisms for freezing and restarting a process:
• Immediate and delayed blocking of the process
• Fast and slow I/O operations
• Information about open files
• Reinstating the process on its destination node.
Address Space Transfer
2. Address space transfer mechanisms:
• Information to be transferred from the source node to the destination node:
  • Process state information
  • Process address space
• It is possible to transfer the address space without stopping the process's execution.
• It is not possible to resume execution until the state information is fully transferred.
Three methods for address space transfer:
• Total freezing
• Pretransferring
• Transfer on reference
I. Total freezing:
• Execution is stopped while the address space is being transferred.
• Simple and easy to implement.
• The process is suspended for a long time during a long migration.
• Not suitable for interactive processes.
(Figure: after the migration decision, the process is suspended on the source node; the address space is transferred during the freezing time, and execution resumes on the destination node.)
II. Pretransferring:
• The address space is transferred while the process is still running on the source node.
• An initial transfer of the complete address space is followed by repeated transfers of the pages modified during the previous transfer.
• The remaining modified pages are transferred after the process is frozen for transferring its state information.
• Freezing time is reduced.
• The total migration time is increased due to the possibility of redundant page transfers.
(Figure: after the migration decision, address space transfer proceeds while the process keeps running; the process is then suspended for a short freezing time and resumed on the destination node.)
III. Transfer on reference:
• A page of the address space is transferred from its source node to the destination node only when referenced.
• A demand-driven, copy-on-reference approach.
• Not efficient in terms of cost.
• Imposes a continued load on the process's source node.
• Results in failure if the source node fails or reboots.
(Figure: after the migration decision, the process is suspended for a short freezing time and resumed on the destination node; pages are then transferred on demand from the source node.)
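The pretransferring loop can be sketched as a small, self-contained simulation. The page counts and the dirty-set threshold below are illustrative assumptions, not part of the original material:

# Round 0: the complete address space is sent while the process keeps running.
address_space = set(range(64))            # 64 toy pages
dirty_per_round = [set(range(12)),        # pages dirtied during round 1
                   set(range(6)),         # pages dirtied during round 2
                   set(range(2))]         # small enough to freeze

rounds = [address_space.copy()]
final_dirty = set()
for dirty in dirty_per_round:
    if len(dirty) <= 3:                   # dirty set small: freeze the process
        final_dirty = dirty
        break
    rounds.append(dirty)                  # re-send pages modified meanwhile

# The process is frozen briefly; the last dirty pages and state info are sent.
rounds.append(final_dirty)
print([len(r) for r in rounds])           # [64, 12, 6, 2]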
Message Forwarding Mechanisms
3. Message forwarding mechanisms:
Three types of messages must be handled:
1. Messages received at the source node after the process's execution has stopped there but before it has restarted on the destination node.
2. Messages received at the source node after the process's execution has started on the destination node.
3. Messages sent to the migrant process after it has started executing on the destination node.
Resending messages:
(Figure: the sender's messages are resent to the receiver process as it migrates from its origin node to Dest 1 and then again to Dest 2.)
Ask origin site:
(Figure: the sender sends each message to the process's origin node, which forwards it to the current location as the process migrates to Dest 1 and then Dest 2.)
Link traversal:
(Figure: each migration leaves a link behind, so a message sent to the origin node is forwarded along the chain of links, from the origin to Dest 1 and then to Dest 2.)
Link update:
(Figure: on each migration the new location is communicated, so the sender can send each message directly to the process's current location, Dest 1 and later Dest 2.)
Mechanisms for Handling Coprocesses
4. Mechanisms for handling communication between coprocesses:
• Disallowing separation of coprocesses.
• The home node or origin site concept.
Advantages of Process Migration
1. Reducing the average response time of processes.
2. Speeding up individual jobs by migrating their tasks to different nodes of the system and executing them concurrently.
3. Gaining higher throughput by using a load-balancing policy.
4. Utilizing resources effectively.
5. Reducing network traffic.
6. Improving system reliability by migrating a critical process to a node with higher reliability.
7. Improving system security by migrating sensitive processes to run on a secure node that is not directly accessible to general users.
Threads
1. Motivation for using threads
2. Models for organizing threads
3. Issues in designing a thread package
What is a Process?
A process is a program under execution.
A process is an abstraction of a running program.
A process is an instance of an executing program, including the current values of the program counter, registers, and variables.
Each process has its own virtual CPU.
The real CPU switches back and forth from process to process; this is called multiprogramming.
What is a Thread?
A thread is a lightweight process created by a process.
A thread is a single sequence of execution within a process.
A thread has its own:
• Program counter, which keeps track of which instruction to execute next.
• System registers, which hold its current working variables.
• Stack, which contains the execution history.
Processes are generally used to execute large, 'heavyweight' jobs such as working in Word, while threads are used to carry out smaller, 'lightweight' jobs such as auto-saving a Word document.
A thread shares some information with its peer threads, such as the code segment, the data segment, and open files.
Single Thread VS Multiple Thread
(Figure: a single-threaded process compared with a multithreaded process.)
Similarities between Process & Thread
Like processes, threads share the CPU, and only one thread is running at a time.
Like processes, threads within a process execute sequentially.
Like processes, threads can create children.
Like a traditional process, a thread can be in any one of several states: running, blocked, ready, or terminated.
Like processes, threads have a program counter, stack, registers, and state.
Dissimilarities between Process & Thread
Unlike processes, threads are not independent of one another.
Threads within the same process share an address space.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another, because processes may originate from different users.
Types of Threads
1. Kernel Level Threads
2. User Level Threads
(Figure: user level threads running above kernel level threads within a process.)
User Level Threads:
• Implemented by users.
• Not recognized by the OS.
• Easy to implement.
• Context switch time is less.
• Context switching requires no hardware support.
• If one user level thread performs a blocking operation, the entire process is blocked.
• Example: Java threads.
Kernel Level Threads:
• Implemented by the OS.
• Recognized by the OS.
• Complex to implement.
• Context switch time is more.
• Context switching requires hardware support.
• If one kernel thread performs a blocking operation, another thread within the same process can continue execution.
• Examples: Windows, Solaris.
Models for Organizing Threads
Depending on an application's needs, the threads of a process can be organized in different ways.
Threads can be organized using three different models:
• Dispatcher/worker model
• Team model
• Pipeline model
Dispatcher/Worker Model
In this model, the process consists of a single dispatcher thread and multiple worker threads.
The dispatcher thread:
• Accepts requests from clients at a port.
• Examines the request.
• Dispatches the request to one of the free worker threads for further processing.
Each worker thread works on a different client request, so multiple client requests can be processed in parallel (see the sketch below).
(Figure: a server process for processing incoming requests; requests arrive at a port, go to the dispatcher thread, and are handed to worker threads.)
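A minimal sketch of the dispatcher/worker model using Python's standard threading and queue modules; the request strings and the worker count are illustrative:

import queue
import threading

requests = queue.Queue()

def worker(wid: int):
    """Each worker thread processes a different client request."""
    while True:
        req = requests.get()          # wait for the dispatcher to hand over work
        if req is None:               # sentinel: shut down
            break
        print(f"worker {wid} handled {req}")
        requests.task_done()

workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

# The dispatcher: accept requests (here, just strings) and dispatch to workers.
for r in ["req-1", "req-2", "req-3", "req-4"]:
    requests.put(r)

requests.join()                       # wait until all requests are processed
for _ in workers:
    requests.put(None)                # stop the workers
for w in workers:
    w.join()

In a real server the dispatcher would read requests from a network port rather than a list.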
Team Model
In this model, all threads behave as
equal.
Each thread gets and processes
clients requests on its own. Requests
This model is often used for Port
implementing specialized threads
within a process. Type 1 Type 2 Type 3
Each thread of the process is
Thread Thread Thread
specialized in servicing a specific
type of requests like copy, save,
autocorrect.
Unit 6: Resource and Process management 73 CGPIT, UTU
Pipeline Model
This model is useful for applications based on the
producer-consumer model.
The output data generated by one part of the
application is used as input for another part of
the application. Requests
The threads of a process are organized as a
Port
pipeline.
The output data generated by the first thread is
used for processing by the second thread, the
Thread Thread Thread
output of the second thread is used for third
thread, and so on.
The output of the last thread in the pipeline is
the final output of the process to which the
threads belong.
Unit 6: Resource and Process management 74 CGPIT, UTU
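A minimal sketch of the pipeline model, with three stages connected by queues and a sentinel value for shutdown; the stage functions are illustrative:

import threading, queue

q1, q2, out = queue.Queue(), queue.Queue(), queue.Queue()

def stage(inq, outq, fn):
    while True:
        item = inq.get()
        if item is None:          # propagate shutdown down the pipeline
            outq.put(None)
            break
        outq.put(fn(item))        # this stage's output feeds the next stage

threading.Thread(target=stage, args=(q1, q2, lambda x: x * 2)).start()
threading.Thread(target=stage, args=(q2, out, lambda x: x + 1)).start()

for x in [1, 2, 3]:
    q1.put(x)
q1.put(None)

while (y := out.get()) is not None:
    print(y)                      # 3, 5, 7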
Designing Issues in a Thread Package
A system that supports a thread facility must provide a set of primitives to its users for thread-related operations.
These primitives of the system are said to form a thread package.
Some of the important issues in designing a thread package are:
• Thread creation
• Thread termination
• Thread synchronization
• Thread scheduling
• Signal handling
Thread Creation
Threads can be created either statically or dynamically.
Static:
• The number of threads of a process remains fixed for its entire lifetime.
• Memory space is allocated to each thread.
Dynamic:
• The number of threads of a process changes dynamically.
• Threads are created as and when needed during the process life cycle, and exit when their task is completed.
• Here the stack size for a thread is specified as a parameter to the system call for thread creation (see the sketch below).
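A small sketch of dynamic thread creation in Python; threading.stack_size() loosely mirrors the idea of specifying a stack size before creating threads (the size below is illustrative):

import threading

def task(n):
    print(f"thread for item {n} running")

threading.stack_size(256 * 1024)   # stack size for threads created after this call
threads = []
for n in range(3):                 # threads created dynamically, as needed
    t = threading.Thread(target=task, args=(n,))
    t.start()                      # each thread exits when its task completes
    threads.append(t)
for t in threads:
    t.join()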
Thread Termination
A thread may terminate, or it may run until the process itself ends.
Thread termination can be done as follows:
• A thread destroys itself on task completion by making an EXIT call.
• A thread is killed from outside using a KILL command with the thread identifier as a parameter.
In some processes, all threads are created immediately after the process starts, and these threads are never killed until the process terminates.
Thread Synchronization
Since threads belonging to the same process share the same address space, thread synchronization is required to ensure that multiple threads do not access the same data simultaneously.
For example:
1. Two threads may want to increment the same global variable within the same process.
2. One thread should have exclusive access to the shared variable, increment it, and then pass control to the other thread.
3. This means that only one thread can execute in the critical region at any instant of time (see the sketch below).
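A minimal sketch of thread synchronization with a lock, ensuring that only one thread is in the critical region at a time; the counter and iteration counts are illustrative:

import threading

counter = 0
lock = threading.Lock()

def increment(times: int):
    global counter
    for _ in range(times):
        with lock:            # only one thread in the critical region at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # always 200000 with the lock held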
Thread Scheduling
Another important issue in designing a thread package is deciding on an appropriate scheduling algorithm.
Thread packages provide calls that give users the flexibility to specify the scheduling policy to be used for their applications.
Some of the special features for thread scheduling that may be supported by a thread package are as follows:
1. Priority assignment facility
2. Flexibility to vary quantum size dynamically
3. Handoff scheduling
4. Affinity scheduling
Priority assignment facility:
A simple scheduler uses first-come-first-served, round-robin, or similar policies.
However, a thread-scheduling scheme may provide application programmers the flexibility to assign priorities to threads.
A priority-based thread scheduling scheme may be either non-preemptive or preemptive.
Flexibility to vary quantum size dynamically:
A simple round-robin scheduling scheme assigns a fixed-length quantum to timeshare the CPU cycles among the threads.
However, a fixed-length quantum is not appropriate on a multiprocessor system because there may be fewer runnable threads than available processors.
Therefore, instead of using a fixed-length quantum, a scheduling scheme may vary the size of the time quantum inversely with the total number of threads in the system.
Handoff scheduling:
A handoff scheduling scheme allows a thread to name its
successor if it wants to.
For example, after sending a message to another thread, the
sending thread can give up the CPU and request that the receiving
thread be allowed to run next.
Therefore, this scheme provides the flexibility to bypass the queue
of runnable threads and directly switch the CPU to the thread
specified by the currently running thread.
Handoff scheduling can enhance performance if it is wisely used.
Affinity scheduling:
Another scheduling policy that may be used for better
performance on a multiprocessor system is affinity scheduling.
In this scheme, a thread is scheduled on the CPU it last ran on in
hopes that part of its address space is still in that CPU's cache.
Signal Handling
Signals provide software-generated interrupts and exceptions.
Interrupts are externally generated disruptions of a thread or
process.
Exceptions are caused by the occurrence of unusual conditions
during a thread's execution.
The two main issues associated with handling signals in a
multithreaded environment are as follows:
1. A signal must be handled properly no matter which thread of
the process receives it.
2. Signals must be prevented from getting lost when another
signal of the same type occurs in some other thread before
the first one is handled by the thread in which it occurred.
An approach for handling the former issue is to create a separate
exception handler thread in each process.
Exception handler thread of a process is responsible for handling
all exception conditions occurring in any thread of the process.
An approach for handling the latter issue is to assign each thread
its own private global variables for signaling exception conditions.
End of Unit-6