
Operating Systems

Unit-II
Process
Management
Operating Systems
Process Management
Introduction of Process Management
A process is a program in execution. For
example, when we write a program in C or C++
and compile it, the compiler creates binary code.
The original code and binary code are both
programs. When we actually run the binary
code, it becomes a process. A process is an
‘active’ entity instead of a program, which is
considered a ‘passive’ entity. A single program
can create many processes when run multiple
times; for example, when we open a .exe or
binary file multiple times, multiple instances
begin (multiple processes are created).
Operating Systems
Process Management
What is Process Management?
Process management is a key part of an operating
system. It controls how processes are carried out and
how the computer runs by handling the active
processes. This includes starting and stopping processes,
deciding which processes should get more attention
(priority), and more. You can manage
processes on your own computer too.
The OS is responsible for managing the start, stop,
and scheduling of processes, which are programs
running on the system. The operating system uses
a number of methods to prevent deadlocks,
facilitate inter-process communication, and
synchronize processes. Efficient resource
allocation is a central goal of process management.
Operating Systems
Process Management
What Does a Process Look Like in Memory?
A process in memory is divided into several
distinct sections, each serving a different
purpose. Here’s how a process typically looks in
memory:
Operating Systems
Process Management
What Does a Process Look Like in Memory?
Text Section: Contains the compiled program code. A
process also includes the current activity, represented
by the value of the Program Counter.
Stack: The stack contains temporary data, such
as function parameters, return addresses, and
local variables.
Data Section: Contains global and static variables.
Heap Section: Memory dynamically allocated to the
process during its run time.
Operating Systems
Process Management
Characteristics of a Process
A process has the following attributes.
• Process Id: A unique identifier assigned by the
operating system.
• Process State: Can be ready, running, etc.
• CPU Registers: Like the Program Counter (CPU
registers must be saved and restored when a process is
swapped in and out of the CPU)
• Accounting Information: Amount of CPU used for
process execution, time limits, execution ID, etc.
• I/O Status Information: For example, devices
allocated to the process, open files, etc.
• CPU Scheduling Information: For example, priority
(different processes may have different priorities; for
example, a shorter process may be assigned a high
priority under Shortest Job First scheduling).
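All of these attributes are typically stored together in a per-process kernel record called the Process Control Block (PCB). Below is a minimal sketch in C of what such a record might contain; the struct and field names are illustrative assumptions, not taken from any particular kernel (Linux's task_struct, for instance, is far more elaborate):

    /* Illustrative PCB sketch; names are hypothetical. */
    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;             /* unique process id            */
        enum proc_state state;           /* ready, running, ...          */
        uint64_t        program_counter; /* saved PC at context switch   */
        uint64_t        registers[16];   /* saved CPU registers          */
        int             priority;        /* CPU scheduling information   */
        uint64_t        cpu_time_used;   /* accounting information       */
        int             open_files[16];  /* I/O status: open descriptors */
    };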
Operating Systems
Process Management
States of Process
A process is in one of the following states:
• New: Newly Created Process (or) being-created
process.
• Ready: After the creation process moves to the
Ready state, i.e. the process is ready for execution.
• Running: Currently running process in CPU (only
one process at a time can be under execution in a
single processor).
• Wait (or Block): When a process requests I/O
access.
• Complete (or Terminated): The process
completed its execution.
• Suspended Ready: When the ready queue becomes
full, some ready processes are swapped out of main
memory into the suspended ready state until space is
available again.
Operating Systems
Process Management
Process Operations
Process operations in an operating system refer to
the various activities the OS performs to manage
processes. These operations include process
creation, process scheduling, execution and killing
the process. Here are the key process operations:
Operating Systems
Process Management
Process Operations
Process Creation
Process creation in an operating system (OS) is the act of
generating a new process. This new process is an instance of a
program that can execute independently.
Scheduling
Once a process is ready to run, it enters the “ready queue.” The
scheduler’s job is to pick a process from this queue and start its
execution.
Execution
Execution means the CPU starts working on the process. During
this time, the process might:
Move to a waiting queue if it needs to perform an I/O operation.
Be preempted if a higher-priority process needs the CPU.
Killing the Process
After the process finishes its tasks, the operating system
terminates it and releases the resources it held.
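On UNIX-like systems these operations map onto concrete system calls: fork() creates a process, the exec() family loads a program into it, wait() lets the parent collect the child's exit status, and exit() terminates it. A minimal sketch:

    /* Minimal UNIX process life cycle: create, execute, wait, terminate. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                /* process creation */
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                    /* child: load and run a program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");              /* reached only if exec fails */
            exit(1);
        }
        int status;
        waitpid(pid, &status, 0);          /* parent waits for termination */
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
        return 0;
    }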
Operating Systems
Process Management
Process Operations
Process Scheduling Algorithms
The operating system can use different scheduling algorithms
to schedule processes. Here are some commonly used scheduling
algorithms:
• First-Come, First-Served (FCFS): This is the simplest
scheduling algorithm, where the process is executed on a
first-come, first-served basis. FCFS is non-preemptive, which
means that once a process starts executing, it continues
until it is finished or waiting for I/O.
• Shortest Job First (SJF): SJF is a scheduling
algorithm that selects the process with the shortest burst
time; it may be preemptive or non-preemptive. The burst
time is the time a process takes to complete
its execution. SJF minimizes the average waiting time of
processes.
Operating Systems
Process Management
Process Operations
Process Scheduling Algorithms
• Round Robin (RR): Round Robin is a preemptive scheduling
algorithm that gives each process a fixed time slice (quantum)
in turn. If a process does not complete its execution
within its time slice, it is preempted and added to the end
of the queue. RR ensures fair distribution of CPU time to all
processes and avoids starvation.
• Priority Scheduling: This scheduling algorithm assigns
priority to each process and the process with the highest
priority is executed first. Priority can be set based on process
type, importance, or resource requirements.
• Multilevel Queue: This scheduling algorithm divides the
ready queue into several separate queues, each queue
having a different priority. Processes are assigned to a
queue based on their priority, and each queue uses its own
scheduling algorithm.
Operating Systems
Process Management
Process Operations
Advantages of Process Management
Running Multiple Programs: Process management lets you
run multiple applications at the same time, for example, listen
to music while browsing the web.
Process Isolation: It ensures that different programs don’t
interfere with each other, so a problem in one program won’t
crash another.
Fair Resource Use: It makes sure resources like CPU time and
memory are shared fairly among programs, so even lower-
priority programs get a chance to run.
Smooth Switching: It efficiently handles switching between
programs, saving and loading their states quickly to keep the
system responsive and minimize delays.
Operating Systems
Process Management
Process Operations
Disadvantages of Process Management
Overhead: Process management uses system resources
because the OS needs to keep track of various data structures
and scheduling queues. This requires CPU time and memory,
which can affect the system’s performance.
Complexity: Designing and maintaining an OS is complicated
due to the need for complex scheduling algorithms and
resource allocation methods.
Deadlocks: To keep processes running smoothly together, the
OS uses mechanisms like semaphores and mutex locks.
However, these can lead to deadlocks, where processes get
stuck waiting for each other indefinitely.
Increased Context Switching: In multitasking systems, the
OS frequently switches between processes. Storing and loading
the state of each process (context switching) takes time and
adds overhead, which can reduce overall performance.
Operating Systems
Process Management
Process scheduling
What is CPU scheduling?
CPU scheduling is the mechanism that allows one process
to use the CPU while the execution of another is put on
hold, for example because it is waiting for a resource such as I/O,
thus making full use of the CPU. In short, CPU
scheduling decides the order and priority of the
processes to run and allocates the CPU time based
on various parameters such as CPU usage,
throughput, turnaround, waiting time, and response
time. The purpose of CPU Scheduling is to make the
system more efficient, faster, and fairer.
Operating Systems
Process Management
Process scheduling
Criteria of CPU Scheduling
1. CPU utilization
The main objective of any CPU scheduling algorithm
is to keep the CPU as busy as possible. Theoretically,
CPU utilization can range from 0 to 100 percent, but in a
real system it typically varies from 40 to 90 percent
depending on the load upon the system.
2. Throughput
A measure of the work done by the CPU is the
number of processes being executed and completed
per unit of time. This is called throughput. The
throughput may vary depending on the length or
duration of the processes.
Operating Systems
Process Management
Process scheduling
Criteria of CPU Scheduling
3. Turnaround Time
For a particular process, an important criterion is
how long it takes to execute that process. The time
elapsed from the time of submission of a process to
the time of completion is known as the turnaround
time. Turn-around time is the sum of times spent
waiting to get into memory, waiting in the ready
queue, executing in CPU, and waiting for I/O.

Turn Around Time = Completion Time – Arrival Time.


Operating Systems
Process Management
Process scheduling
Criteria of CPU Scheduling
4. Waiting Time
A scheduling algorithm does not affect the time
required to complete the process once it starts
execution. It only affects the waiting time of a
process i.e. time spent by a process waiting in the
ready queue.

Waiting Time = Turnaround Time – Burst Time.


Operating Systems
Process Management
Process scheduling
Criteria of CPU Scheduling
5. Response Time
In an interactive system, turn-around time is not the
best criterion. A process may produce some output
fairly early and continue computing new results while
previous results are being output to the user. Thus
another criterion is the time taken from the submission
of a request until the first response
is produced. This measure is called response time.
Response Time = CPU Allocation Time (when the CPU
was allocated for the first time) – Arrival Time
Operating Systems
Process Management
Process scheduling
Criteria of CPU Scheduling
6. Completion Time
The completion time is the time when the process
stops executing, which means that the process has
completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to
processes, the scheduling mechanism should favor
the higher-priority processes.
8. Predictability
A given process always should run in about the same
amount of time under a similar system load.
Operating Systems
Process Management
Process scheduling
Criteria of CPU Scheduling
Factors Influencing CPU Scheduling Algorithms
There are many factors that influence the choice of
CPU scheduling algorithm. Some of them are listed
below.
The number of processes.
The processing time required.
The urgency of tasks.
The system requirements.
Selecting the correct algorithm will ensure that the
system will use system resources efficiently, increase
productivity, and improve user satisfaction.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
1. FCFS CPU Scheduling
First Come, First Served (FCFS) is the simplest
scheduling algorithm. It simply queues processes (FIFO)
according to the order they arrive in the ready
queue. The process that arrives first
is executed first, and the next process starts only
after the previous one has fully executed.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
1. FCFS CPU Scheduling
Terminologies Used in CPU Scheduling
Arrival Time: The time at which the process arrives
in the ready queue.
Completion Time: The time at which the process
completes its execution.
Turn Around Time: Time Difference between
completion time and arrival time. Turn Around Time
= (Completion Time – Arrival Time)
Waiting Time (W.T): Time difference between
turnaround time and burst time, i.e.
Waiting Time = (Turn Around Time – Burst Time).
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
1. FCFS CPU Scheduling
In this example, we have assumed the arrival time of
all processes is 0, so turnaround and completion
times are the same.
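The worked example referred to above appeared as a figure in the original slides. As a stand-in, here is a small C sketch that computes FCFS waiting and turnaround times under the same assumption (all arrivals at time 0); the burst times are made-up sample data:

    /* FCFS: compute waiting and turnaround times, all arrivals at t=0. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};              /* sample burst times */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            int tat = wait + burst[i];         /* turnaround = wait + burst */
            printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
            total_wait += wait;
            total_tat  += tat;
            wait += burst[i];                  /* next process starts here */
        }
        printf("avg waiting=%.2f avg turnaround=%.2f\n",
               (float)total_wait / n, (float)total_tat / n);
        return 0;
    }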
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
1. FCFS CPU Scheduling
Characteristics of FCFS
Processes are executed according to the order of
their arrival.
This algorithm tends to have a high waiting time.
The performance of this algorithm is poor due to the
high waiting time.
FCFS is very easy to implement.
Advantage of FCFS
It is the simplest scheduling algorithm for process
execution.
The implementation part is very easy.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
1. FCFS CPU Scheduling
Disadvantage of FCFS
FCFS is a non-preemptive algorithm. In
simple terms, once a process starts, the CPU runs it
to completion.
Due to the strictly sequential order of execution, the
waiting time is generally high.
Despite its easy implementation, its performance is poor.
If a big process arrives first and then small ones, the
small ones have to wait longer (the convoy effect).
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
2. Shortest Job First (or SJF) CPU Scheduling
Shortest Job First (SJF), also known as Shortest Job
Next (SJN), is a scheduling policy that selects the
waiting process with the smallest execution time to
execute next. It can be preemptive or non-preemptive.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
2. Shortest Job First (or SJF) CPU Scheduling
Characteristics of SJF Scheduling:
• Shortest Job first has the advantage of having a minimum
average waiting time among all scheduling algorithms.
• It is a Greedy Algorithm.
• It may cause starvation if shorter processes keep coming.
This problem can be solved using the concept of ageing.
• It is often impractical, as the operating system may not
know burst times in advance and therefore cannot sort by
them. While exact execution time cannot be predicted,
several methods can be used to estimate it for a job, such
as a weighted (exponential) average of previous execution times.
• SJF can be used in specialized environments where accurate
estimates of running time are available.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
2. Shortest Job First (or SJF) CPU Scheduling
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
2. Shortest Job First (or SJF) CPU Scheduling

How to compute the times below in SJF using a program?
Completion Time: Time at which process completes
its execution.
Turn Around Time: Time Difference between
completion time and arrival time.
Turn Around Time = Completion Time – Arrival
Time
Waiting Time(W.T): Time Difference between turn
around time and burst time.
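A minimal non-preemptive SJF sketch in C, assuming all processes arrive at time 0 (so it suffices to sort by burst time); the burst values are made-up sample data:

    /* Non-preemptive SJF, all arrivals at t=0: run shortest burst first. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;   /* ascending burst time */
    }

    int main(void) {
        int burst[] = {6, 8, 7, 3};                 /* sample burst times */
        int n = sizeof burst / sizeof burst[0];
        qsort(burst, n, sizeof burst[0], cmp);      /* shortest job first */

        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;                     /* time spent waiting */
            wait += burst[i];                       /* next start time    */
        }
        printf("average waiting time = %.2f\n", (float)total_wait / n);
        return 0;
    }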
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
3. Longest Job First(LJF)
The Longest Job First (LJF) scheduling policy is just the
opposite of Shortest Job First (SJF): as the name
suggests, this algorithm is based on the idea that
the process with the largest burst time is processed
first. Longest Job First is non-preemptive in nature.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
3. Longest Job First(LJF)
Characteristics of LJF
• Among all the processes waiting in a waiting queue,
CPU is always assigned to the process having largest
burst time.
• If two processes have the same burst time then the
tie is broken using FCFS i.e. the process that arrived
first is processed first.
• LJF also has a preemptive variant, known as Longest
Remaining Time First (LRTF).
Advantages of LJF
• No other process can be scheduled until the longest job
reaches its completion.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
4.Round Robin CPU Scheduling
Round Robin is a CPU scheduling mechanism that
cycles through the tasks, assigning each a specific time slot.
It is the First Come, First Served CPU scheduling
technique with preemption. The Round Robin
algorithm strongly emphasizes the time-sharing
method.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
4.Round Robin CPU Scheduling
Round Robin CPU Scheduling Algorithm
characteristics include:
• Because all processes receive a balanced CPU
allocation, it is straightforward, simple to use, and
starvation-free.
• One of the most used techniques for CPU core
scheduling. Because the processes are only allowed
access to the CPU for a brief period of time, it is seen
as preemptive.
The benefits of Round Robin CPU scheduling:
• Every process receives an equal share of CPU time,
so no process is starved.
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
4.Round Robin CPU Scheduling
Examples:
Problem
Process ID    Arrival Time    Burst Time
P0            1               3
P1            0               5
P2            3               2
P3            4               3
P4            2               1
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
4.Round Robin CPU Scheduling
Examples:
Solution
Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
P0           1              3            5                 4                  1
P1           0              5            14                14                 9
P2           3              2            7                 4                  2
P3           4              3            10                6                  3
P4           2              1            3                 1                  0
Operating Systems
Process Management
Process scheduling
Scheduling algorithms
4.Round Robin CPU Scheduling
Examples:
Solution
Gantt Chart: (figure omitted)

Average Completion Time = 7.8

Average Turn Around Time = 5.8 (from the table: (4 + 14 + 4 + 6 + 1) / 5)

Average Waiting Time = 3
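For illustration, a simplified Round Robin simulation in C. It assumes all processes arrive at time 0 and a quantum of 2, so it will not reproduce the staggered-arrival table above (whose quantum the slide does not state), but it shows the bookkeeping:

    /* Simplified Round Robin: all arrivals at t=0, fixed quantum of 2. */
    #include <stdio.h>

    int main(void) {
        int burst[] = {5, 3, 2};           /* sample burst times */
        int n = sizeof burst / sizeof burst[0];
        int rem[3], quantum = 2, t = 0, done = 0;

        for (int i = 0; i < n; i++) rem[i] = burst[i];

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (rem[i] == 0) continue;
                int slice = rem[i] < quantum ? rem[i] : quantum;
                t += slice;                /* process i runs one slice   */
                rem[i] -= slice;
                if (rem[i] == 0) {         /* finished at time t         */
                    /* with arrival 0, turnaround equals completion time */
                    printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                           i, t, t, t - burst[i]);
                    done++;
                }
            }
        }
        return 0;
    }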


Operating Systems
Process Management
Multiple Processors Scheduling in Operating System
Multiple processor scheduling: Multiprocessor scheduling
focuses on designing the scheduling function for a system that
consists of more than one processor. Multiple CPUs share the load
(load sharing) in multiprocessor scheduling so that various
processes run simultaneously. In general, multiprocessor
scheduling is more complex than single-processor scheduling.
In (homogeneous) multiprocessor scheduling the processors are
identical, so any process can run on any processor at any time.
The multiple CPUs in the system are in close communication,
which shares a common bus, memory, and other peripheral
devices. So we can say that the system is tightly coupled. These
systems are used when we want to process a bulk amount of data,
and these systems are mainly used in satellite, weather
forecasting, etc.
Operating Systems
Process Management
Multiple Processors Scheduling in Operating System
Approaches to Multiple Processor Scheduling
There are two approaches to multiple processor scheduling in the
operating system: Symmetric Multiprocessing and Asymmetric
Multiprocessing.
Operating Systems
Process Management
Multiple Processors Scheduling in Operating System
Approaches to Multiple Processor Scheduling
• Symmetric Multiprocessing: It is used where each processor
is self-scheduling. All processes may be in a common ready
queue, or each processor may have its private queue for ready
processes. The scheduling proceeds further by having the
scheduler for each processor examine the ready queue and
select a process to execute.
• Asymmetric Multiprocessing: It is used when all the
scheduling decisions and I/O processing are handled by a single
processor called the Master Server. The other processors
execute only the user code. This is simple and reduces the
need for data sharing, and this entire scenario is called
Asymmetric Multiprocessing.
Operating Systems
Process Management
Multiple Processors Scheduling in Operating System
Processor Affinity
Processor Affinity means a process has an affinity for the
processor on which it is currently running. When a process runs on
a specific processor, there are certain effects on the cache
memory. The data most recently accessed by the process populate
the cache for the processor. As a result, successive memory
access by the process is often satisfied in the cache memory.
Operating Systems
Process Management
Multiple Processors Scheduling in Operating System
Load Balancing
Load Balancing is the phenomenon that keeps the workload evenly
distributed across all processors in an SMP system. Load balancing
is necessary only on systems where each processor has its own
private queue of processes eligible to execute. On systems
with a common (shared) run queue, load balancing is unnecessary:
once a processor becomes idle, it immediately extracts a runnable
process from the common run queue.
to keep the workload balanced among all processors to utilize the
benefits of having more than one processor fully. Otherwise, one
or more processors will sit idle while other processors have high
workloads, along with lists of processes awaiting the CPU. There are two
general approaches to load balancing:
Operating Systems
Process Management
Multiple Processors Scheduling in Operating System
Load Balancing

• Push Migration: In push migration, a specific task routinely
checks the load on each processor. If it finds an imbalance, it
evenly distributes the load by moving processes
from overloaded to idle or less busy processors.
• Pull Migration: Pull migration occurs when an idle processor
pulls a waiting task from a busy processor for its execution.
Operating Systems
Process Management
Operation on a Process
The execution of a process is a complex activity. It involves various
operations. Following are the operations that are performed while
execution of a process:
Operating Systems
Process Management
Operation on a Process
1. Creation
This is the initial step of the process execution activity. Process
creation means the construction of a new process for execution.
This might be performed by the system, the user, or the old
process itself. There are several events that lead to process
creation. Some of these events are the following:
• When we start the computer, the system creates several
background processes.
• A user may request the creation of a new process.
• A running process can itself create a new process.
• A batch system initiates a batch job.
Operating Systems
Process Management
Operation on a Process

2. Scheduling/Dispatching
The event or activity in which the state of the process is changed
from ready to run. It means the operating system puts the process
from the ready state into the running state. Dispatching is done by
the operating system when the resources are free or the process
has higher priority than the ongoing process. There are various
other cases in which the process in the running state is preempted
and the process in the ready state is dispatched by the
operating system.
Operating Systems
Process Management
Operation on a Process

3. Blocking
When a process invokes an input-output system call that blocks
it, the process is put into blocked mode. Blocked mode is
basically a mode where the process waits for input-output.
Hence, at the request of the process itself, the operating system
blocks the process and dispatches another process to the
processor. Hence, in process-blocking operations, the operating
system puts the process in a ‘waiting’ state.
Operating Systems
Process Management
Operation on a Process

4. Preemption
When a timeout occurs, meaning the process has not
completed within the allotted time interval and the next process
is ready to execute, the operating system preempts the
process. This operation is only valid where CPU scheduling
supports preemption. Basically, this happens in priority scheduling
where on the incoming of high priority process the ongoing
process is preempted. Hence, in process preemption operation,
the operating system puts the process in a ‘ready’ state.
Operating Systems
Process Management
Operation on a Process

5. Process Termination
Process termination is the activity of ending the process. In
other words, process termination is the release of the computer
resources the process acquired for its execution. Like creation, in
termination also there may be several events that may lead to the
process of termination. Some of them are:
• The process completes its execution fully and it indicates to the
OS that it has finished.
• The operating system itself terminates the process due to
service errors.
• There may be a problem in hardware that terminates the
process.
Operating Systems
Process Management
Inter Process Communication (IPC)
Processes can coordinate and interact with one another using a
method called inter-process communication (IPC) . Through
facilitating process collaboration, it significantly contributes to
improving the effectiveness, modularity, and ease of software
systems
Types of Process
• Independent process
• Co-operating process
An independent process is not affected by the execution of other
processes, while a co-operating process can be affected by other
executing processes. Although one might think that processes
running independently execute most efficiently, in reality there
are many situations where their cooperative nature can be
utilized to increase computational speed, convenience, and
modularity. Inter-process communication (IPC) is the mechanism
that makes such cooperation possible.
Operating Systems
Process Management
Inter Process Communication (IPC)
The communication between these processes can be seen as a
method of cooperation between them. Processes can
communicate with each other through both:
Methods of IPC
• Shared Memory
• Message Passing
Operating Systems
Process Management
Inter Process Communication (IPC)
An operating system can implement both methods of
communication. First, we will discuss the shared memory methods
of communication and then message passing. Communication
between processes using shared memory requires processes to
share some variable, and it completely depends on how the
programmer will implement it. One way of communication using
shared memory can be imagined like this: Suppose process1 and
process2 are executing simultaneously, and they share some
resources or use some information from another process. Process1
generates information about certain computations or resources
being used and keeps it as a record in shared memory. When
process2 needs to use the shared information, it will check in the
record stored in shared memory and take note of the information
generated by process1 and act accordingly.
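One concrete way to realize this on UNIX-like systems is System V shared memory: a parent creates a segment, forks, and both processes attach it. A minimal sketch (error handling omitted; the sleep() is a crude stand-in for real synchronization):

    /* Shared-memory IPC sketch: parent writes a record, child reads it. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void) {
        int shmid = shmget(IPC_PRIVATE, 128, IPC_CREAT | 0600);
        char *buf = shmat(shmid, NULL, 0);      /* attach the segment */

        if (fork() == 0) {                      /* child = process2 */
            sleep(1);                           /* crude wait for the record */
            printf("child read: %s\n", buf);
            shmdt(buf);
            return 0;
        }
        strcpy(buf, "record from process1");    /* parent = process1 */
        wait(NULL);
        shmdt(buf);
        shmctl(shmid, IPC_RMID, NULL);          /* remove the segment */
        return 0;
    }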
Operating Systems
Process Management
Inter Process Communication (IPC)

Processes can use shared memory both for extracting
information recorded by another process and
for delivering specific information to other
processes.
Operating Systems
Process Management
Inter Process Communication (IPC)
i) Shared Memory Method
Ex: Producer-Consumer problem
There are two processes: Producer and Consumer . The
producer produces some items and the Consumer
consumes that item. The two processes share a
common space or memory location known as a buffer
where the item produced by the Producer is stored and
from which the Consumer consumes the item if
needed. There are two versions of this problem: the
first is the unbounded buffer problem, in
which the Producer can keep producing items and
there is no limit on the size of the buffer; the second
is the bounded buffer problem, in which the buffer has a
fixed size, so the Producer must wait when the buffer is
full and the Consumer must wait when it is empty.
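A sketch of the bounded-buffer logic using POSIX threads and counting semaphores (threads stand in for the two processes here; with separate processes the buffer would live in shared memory as shown earlier). Compile with -pthread:

    /* Bounded-buffer producer/consumer sketch with POSIX semaphores. */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define SIZE 5
    int buffer[SIZE];
    int in = 0, out = 0;
    sem_t empty, full;                 /* counting semaphores */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *producer(void *arg) {
        for (int item = 0; item < 10; item++) {
            sem_wait(&empty);          /* wait for a free slot */
            pthread_mutex_lock(&lock);
            buffer[in] = item;
            in = (in + 1) % SIZE;
            pthread_mutex_unlock(&lock);
            sem_post(&full);           /* one more filled slot */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 10; i++) {
            sem_wait(&full);           /* wait for a filled slot */
            pthread_mutex_lock(&lock);
            int item = buffer[out];
            out = (out + 1) % SIZE;
            pthread_mutex_unlock(&lock);
            sem_post(&empty);          /* one more free slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, SIZE);     /* all slots start empty */
        sem_init(&full, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }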
Operating Systems
Process Management
Inter Process Communication (IPC)
ii) Message Passing Method

Now we will discuss communication
between processes via message passing. In this
method, processes communicate with each other
without using any kind of shared memory. If two
processes p1 and p2 want to communicate with each
other, they proceed as follows:
Establish a communication link (if a link already exists,
there is no need to establish it again).
Start exchanging messages using basic primitives.
We need at least two primitives:
– send (message, destination) or send (message)
– receive (message, source) or receive (message)
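On UNIX-like systems, an anonymous pipe provides a ready-made message-passing link between related processes, with write() and read() playing the roles of send and receive. A minimal sketch:

    /* Message passing via a pipe: parent sends, child receives. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        pipe(fd);                        /* fd[0]=read end, fd[1]=write end */

        if (fork() == 0) {               /* child: the receiver */
            char msg[64];
            close(fd[1]);
            ssize_t n = read(fd[0], msg, sizeof msg - 1);   /* receive() */
            msg[n] = '\0';
            printf("received: %s\n", msg);
            return 0;
        }
        close(fd[0]);                    /* parent: the sender */
        const char *msg = "hello from p1";
        write(fd[1], msg, strlen(msg));  /* send() */
        close(fd[1]);
        wait(NULL);
        return 0;
    }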
Operating Systems
Process Management
Inter Process Communication (IPC)
ii) Message Passing Method
Operating Systems
Process Management
Inter Process Communication (IPC)
ii) Message Passing Method

The message size can be fixed or variable. A fixed
size is easy for the OS designer
but complicated for the programmer, while a
variable size is easy for the programmer but
complicated for the OS designer. A standard message
has two parts: a header and a body. The header
part is used for storing the message type, destination id,
source id, message length, and control information.
Operating Systems
Process Management
Inter Process Communication (IPC)
Message Passing Through Communication Link
Direct and Indirect Communication link
Now, We will start our discussion about the methods of
implementing communication links. While
implementing the link, there are some questions that
need to be kept in mind, such as:
• How are links established?
• Can a link be associated with more than two
processes?
• How many links can there be between every pair of
communicating processes?
• What is the capacity of a link? Is the size of a
message that the link can accommodate fixed or variable?
Operating Systems
Process Management
Inter Process Communication (IPC)
Message Passing Through Communication Link
Direct and Indirect Communication link
A link has some capacity that determines the number
of messages that can reside in it temporarily for which
every link has a queue associated with it which can be
of zero capacity, bounded capacity, or unbounded
capacity. In zero capacity, the sender waits until the
receiver informs the sender that it has received the
message.
Operating Systems
Process Management
Inter Process Communication (IPC)
Message Passing Through Communication Link
Direct and Indirect Communication link

Direct communication links are implemented when
the processes use a specific process identifier for the
communication, but it is hard to identify the sender
ahead of time (for example, a print server).
Indirect communication is done via a shared
mailbox (port), which consists of a queue of messages.
The sender keeps the message in the mailbox and the
receiver picks it up.
Operating Systems
Process Management
Inter Process Communication (IPC)
Synchronous and Asynchronous Message Passing
A process that is blocked is one that is waiting for some
event, such as a resource becoming available or the
completion of an I/O operation. IPC is possible between
the processes on same computer as well as on the
processes running on different computer i.e. in
networked/distributed system. In both cases, the
process may or may not be blocked while sending a
message or attempting to receive a message so
message passing may be blocking or non-blocking.
Operating Systems
Process Management
Inter Process Communication (IPC)
Synchronous and Asynchronous Message Passing
Blocking is considered synchronous and blocking
send means the sender will be blocked until the
message is received by receiver. Similarly, blocking
receive has the receiver block until a message is
available. Non-blocking is considered asynchronous
and Non-blocking send has the sender sends the
message and continue. Similarly, Non-blocking receive
has the receiver receive a valid message or null.
There are basically three preferred combinations:
• Blocking send and blocking receive
• Non-blocking send and non-blocking receive
• Non-blocking send and blocking receive (the most
commonly used combination)
Operating Systems
Process Management
Inter Process Communication (IPC)
Role of Synchronization in IPC
In IPC, synchronization is essential for controlling access to shared
resources and guaranteeing that processes do not conflict with
one another. Data consistency is ensured and problems like race
conditions are avoided with proper synchronization.

Advantages of IPC
• Enables processes to communicate with each other and share
resources, leading to increased efficiency and flexibility.
• Facilitates coordination between multiple processes, leading to
better overall system performance.
• Allows for the creation of distributed systems that can span
multiple computers or networks.
• Can be used to implement various synchronization and
communication mechanisms, such as semaphores, pipes, and
message queues.
Operating Systems
Process Management
Inter Process Communication (IPC)
Disadvantages of IPC
• Increases system complexity, making it harder to design,
implement, and debug.
• Can introduce security vulnerabilities, as processes may be able
to access or modify data belonging to other processes.
• Requires careful management of system resources, such as
memory and CPU time, to ensure that IPC operations do not
degrade overall system performance.
• Can lead to data inconsistencies if multiple processes try to
access or modify the same data at the same time.
Overall, the advantages of IPC outweigh the disadvantages, as
it is a necessary mechanism for modern operating systems and
enables processes to work together and share resources in a
flexible and efficient manner. However, care must be taken to
design and implement IPC systems carefully, in order to avoid
these potential problems.
Operating Systems
Process Management
Thread Scheduling:
The thread scheduler is the component that decides
which thread should execute or get a resource (in Java,
for example, this role is played by the JVM's thread
scheduler).
Scheduling of threads involves two levels of boundary
scheduling:
• Scheduling of user-level threads (ULT) to kernel-level
threads (KLT) via lightweight processes (LWP), done by
the application developer.
• Scheduling of kernel-level threads by the system
scheduler to perform different unique OS functions.
Operating Systems
Process Management
Thread Scheduling:
Lightweight Process (LWP)
Lightweight processes are threads in the user space that act as
an interface for the ULTs to access the physical CPU resources.
The thread library schedules which thread of a process runs on
which LWP and for how long. The number of LWPs created by the
thread library depends on the type of application. In the case of
an I/O-bound application, the number of LWPs depends on the
number of user-level threads.
This is because when an LWP is blocked on an I/O operation, the
thread library needs to create and schedule another LWP to
invoke another ULT. Thus, in an I/O-bound application, the
number of LWPs equals the number of ULTs. In the case of a
CPU-bound application, it depends only on the application. Each
LWP is attached to a separate kernel-level thread.
Operating Systems
Process Management
Thread Scheduling:
Lightweight Process (LWP)

In practice, the first boundary of thread
scheduling goes beyond specifying the
scheduling policy and the priority.
It requires two controls to be specified
for the user-level threads: contention
scope and allocation domain.
These are explained below.
Operating Systems
Process Management
Thread Scheduling:
Lightweight Process (LWP)

Contention Scope
The word contention here refers to the competition or
fight among the User level threads to access the kernel
resources. Thus, this control defines the extent to
which contention takes place. It is defined by the
application developer using the thread library.
Operating Systems
Process Management
Thread Scheduling:
Lightweight Process (LWP)

Depending upon the extent of contention, it is classified as:
• Process Contention Scope (PCS):
The contention takes place among threads within the same
process. The thread library schedules the highest-priority PCS
thread to access the resources via the available LWPs (priority as
specified by the application developer during thread creation).
• System Contention Scope (SCS):
The contention takes place among all threads in the system.
In this case, every SCS thread is associated with an LWP by the
thread library and is scheduled by the system scheduler to
access the kernel resources. On Linux and UNIX operating
systems, the POSIX Pthread library provides the
function pthread_attr_setscope to define the type of contention
scope for a thread during its creation.
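A short sketch of selecting the contention scope through the Pthread attributes API. PTHREAD_SCOPE_SYSTEM requests SCS and PTHREAD_SCOPE_PROCESS requests PCS; note that some systems (Linux among them) support only system scope:

    /* Selecting contention scope with pthread_attr_setscope. */
    #include <stdio.h>
    #include <pthread.h>

    void *work(void *arg) { return NULL; }

    int main(void) {
        pthread_t tid;
        pthread_attr_t attr;
        pthread_attr_init(&attr);

        /* Request system contention scope (SCS). */
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
            fprintf(stderr, "scope not supported\n");

        pthread_create(&tid, &attr, work, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }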
Operating Systems
Process Management
Multithreading Models:
Multithreading allows the application to divide its task
into individual threads. In multithreading, the same
process or task can be done by a number of threads,
or we can say that there is more than one thread to
perform the task. With the use of
multithreading, multitasking can be achieved.
Operating Systems
Process Management
Multithreading Models: The main drawback of single
threading systems is that only one task can be performed at a
time, so to overcome the drawback of this single threading, there
is multithreading that allows multiple tasks to be performed.
Operating Systems
Process Management
Multithreading Models:

There exist three established multithreading
models classifying these relationships:
• Many-to-one multithreading model
• One-to-one multithreading model
• Many-to-many multithreading model
Operating Systems
Process Management
Multithreading Models:
Many-to-one multithreading model:
The many-to-one model maps many user-level threads
to one kernel thread. This type of relationship facilitates
an effective context-switching environment, easily
implemented even on a simple kernel with no thread
support.
Operating Systems
Process Management
Multithreading Models:
One-to-one multithreading model
The one-to-one model maps a single
user-level thread to a single kernel-level thread.
This type of relationship facilitates the running
of multiple threads in parallel.
However, this benefit comes with a drawback: creating
every user thread requires creating a corresponding
kernel thread, an overhead that can hinder performance.
Operating Systems
Process Management
Multithreading Models:
Many-to-many multithreading model
In this type of model, there are several user-level
threads and several kernel-level threads. The number
of kernel threads created depends upon the particular
application. The developer can create any number of
threads at both levels, though the numbers need not be
equal. The many-to-many model is a compromise between
the other two models. In this model, if any thread makes
a blocking system call, the kernel can schedule another
thread for execution.
Also, with the introduction of multiple threads, the
complexity of the previous models is not present. Though
this model allows the creation of multiple kernel threads,
true concurrency cannot be achieved, because the kernel
can schedule only one process at a time.
Operating Systems
Process Management
Multithreading Models:
Many to Many Model multithreading model
Operating Systems
Process Management
Thread Libraries
Threads in Operating Systems
A thread is a path of execution composed of
a program counter, thread id, stack, and set of
registers within the process. In general, a thread is the
smallest unit of the process that represents an
independent sequence of execution within that process.
It is a basic unit of CPU utilization that makes
communication more effective and efficient, enables
the utilization of multiprocessor architectures to a
greater scale and efficiency, and reduces the time
required in context switching.
Threads are sometimes called lightweight
processes because they have their own stack but can
share the code, data, and other resources of their
process with sibling threads.
Operating Systems
Process Management
Thread Libraries
Thread Libraries
• A thread library is a thread API (Application
Programming Interface): a set of functions,
methods, and routines provided by the operating system
for creating, managing, and coordinating
threads.
• Thread libraries may be implemented either as a
user-space or a kernel-space library.
• If the thread library is implemented in user space,
then the code and data of the thread library
reside in user space. In this scenario,
invoking any function from the thread library is a
simple local function call, not a system call.
Operating Systems
Process Management
Thread Libraries
Thread Libraries
• But if the thread library is implemented in
kernel space, then the code and data of the thread
library reside in kernel space and are supported
by the operating system. In this scenario, invoking any
function from the thread library results in a system call
to the kernel.
• The former API involves functions implemented solely
within user space, with no kernel support. The latter
involves system calls and requires a kernel with
thread-library support, as mentioned in the previous
point. These libraries give the programmer tools for
efficient creation, management, and synchronization of
threads.
Operating Systems
Process Management
Thread Libraries
Need of Thread Library
• A thread library allows us to perform or execute
multiple tasks at the same time; with this
functionality we can better utilize the CPU and hardware
and improve performance.
• Even in Android development there is a concept
called Kotlin Coroutines, which works on the same
idea of performing multiple tasks at the same
time with the help of a thread library (dependency) for
efficient execution of tasks.
Operating Systems
Process Management
Thread Libraries
Need of Thread Library
• Thread libraries provide a standard way to work with
threads across various operating systems and platforms.
• When multiple threads work together, they
need data from other threads to perform
operations, so thread libraries provide
mechanisms like semaphores and mutexes that allow
threads to share data without races or data
loss.
• They provide the ability to create new threads within
an application and execute work on separate threads.
Operating Systems
Process Management
Thread Libraries
Best Thread Libraries
Java Threads
• The thread is a primary model of program execution in a Java
program, and the Java language and its API provide a rich variety
of features for the creation and management of threads. As the
name signifies, its code is written in the Java language.
• However, in most instances the JVM (Java Virtual
Machine) runs on top of a host operating system, so the Java
thread API is typically implemented using the thread library
available on the host system. This means that on Windows systems
Java threads are typically implemented using the Win32 API.
• Java provides built-in support for multithreading
through java.lang.Thread, and it also provides high-level thread
management.
Operating Systems
Process Management
Thread Libraries
Best Thread Libraries
Pthread
• Also known as POSIX threads, this is an execution model that
exists independently of any programming language, as well as a
parallel execution model.
• The Pthread library can be implemented either in
user space or kernel space. Pthreads are commonly implemented
on Linux, UNIX, and Solaris, and they are highly portable: code
written with Pthreads can typically be compiled and run on
different UNIX systems without much modification.
• Pthread programs include the pthread.h header file. Windows
does not natively support the Pthread standard. Pthreads support
the C and C++ languages.
• Pthreads are used to leverage the power of multiple processors.
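A minimal Pthread example: create a few threads, pass each an argument, and join them (compile with -pthread):

    /* Minimal Pthread usage: create, pass an argument, join. */
    #include <stdio.h>
    #include <pthread.h>

    void *worker(void *arg) {
        long id = (long)arg;               /* argument passed at creation */
        printf("thread %ld running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tids[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&tids[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(tids[i], NULL);   /* wait for each thread */
        return 0;
    }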
Operating Systems
Process Management
Thread Libraries
Best Thread Libraries
Win32 Thread
• The Win32 thread is part of the Windows operating system, and
it is also called a Windows thread. It is a kernel-space library.
• With these threads we can also achieve parallelism and
concurrency in the same manner as with Pthreads.
• Win32 threads are created with the help
of the CreateThread() function. Windows threads support
Thread Local Storage (TLS), allowing each thread to have its own
unique data; threads can also easily share data that is
declared globally.
• They provide native, low-level support for multithreading.
This means they are tightly integrated with the Windows OS and
offer efficient thread creation and management.
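The equivalent minimal sketch with the Win32 API; note the call is CreateThread(), and WaitForSingleObject() plays the role of join:

    /* Minimal Win32 thread: CreateThread + WaitForSingleObject. */
    #include <windows.h>
    #include <stdio.h>

    DWORD WINAPI worker(LPVOID param) {
        printf("thread %d running\n", *(int *)param);
        return 0;
    }

    int main(void) {
        int arg = 1;
        HANDLE h = CreateThread(NULL, 0, worker, &arg, 0, NULL);
        if (h == NULL) return 1;
        WaitForSingleObject(h, INFINITE);   /* wait for the thread (join) */
        CloseHandle(h);
        return 0;
    }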
Operating Systems
Process Management
Thread Libraries
Threading Issues in OS
•System Call
•Thread Cancellation
•Signal Handling
•Thread Pool
•Thread Specific Data
Keep Learning

Thank You!