Module-II: Process Management
Process Management
Process Concept
Process Scheduling
Operations on Processes
Interprocess Communication
Examples of IPC Systems
Objectives
To introduce the notion of a process -- a program in execution, which forms
the basis of all computation
To describe the various features of processes, including scheduling,
creation and termination, and communication
To explore interprocess communication using shared memory and message
passing
To introduce the notion of a thread—a fundamental unit of CPU utilization
that forms the basis of multithreaded computer systems.
To introduce the critical-section problem, whose solutions can be used to
ensure the consistency of shared data
To present both software and hardware solutions of the critical-section
problem
To examine several classical process-synchronization problems
Objectives
To introduce CPU scheduling, which is the basis for multiprogrammed
operating systems
To describe various CPU-scheduling algorithms
To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a
particular system
To develop a description of deadlocks, which prevent sets of concurrent
processes from completing their tasks.
To present a number of different methods for preventing or avoiding
deadlocks in a computer system
Process Concept
An operating system executes a variety of programs:
Process – a program in execution.
A process has multiple sections, such as the following (see the sketch after this list):
The program code, also called text section
Current activity/Execution State
program counter
processor registers
Stack containing temporary data
Function parameters, return addresses, local variables
Data section containing global variables
Heap containing memory dynamically allocated during run time
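To make the layout concrete, here is a minimal C sketch (names are illustrative) mapping each construct onto a section:

#include <stdlib.h>

int global_var = 0;                /* data section: global variable */

int square(int x)                  /* function code lives in the text section */
{
    int local = x * x;             /* stack: local variable */
    return local;                  /* return address is also kept on the stack */
}

int main(void)
{
    int *p = malloc(sizeof(int));  /* heap: allocated dynamically at run time */
    *p = square(global_var);       /* the argument is passed via the stack */
    free(p);
    return 0;
}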
Process in Memory
Program vs Process?
A program is a passive entity (an executable file stored on disk); a process is an active entity, with a program counter specifying the next instruction and a set of associated resources.
Process State
As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Process States: Real-Time Scenario?
Online Food Ordering System?
Process Control Block (PCB)
Information associated with each process
(also called task control block)
Process state – running, waiting, etc.
Program counter – location of instruction to
execute next
CPU registers – contents of all process-centric
registers
CPU scheduling information- priorities,
scheduling queue pointers
Memory-management information – memory
allocated to the process
Accounting information – CPU used, clock time
elapsed since start, time limits
I/O status information – I/O devices allocated to
process, list of open files
Does the PCB Follow a Specific Order?
The layout varies with the OS, but keeping a consistent order enables:
1. Fast Access
2. Easy Context Switching
A typical order would be (it varies, but most OSes follow this):
1. Process Identification Information
2. Process State Information (state, PC, registers)
3. Memory-Management Information
4. File Information
CPU Switch From Process to Process
Process Scheduling
Maximize CPU use, quickly switch processes onto CPU for time sharing
Process scheduler selects among available processes for next execution
on CPU
Maintains scheduling queues of processes
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready
and waiting to execute
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
Each queue has a head and a tail, managing the insertion and removal of
PCBs.
Processes move between queues as their state changes (e.g., from waiting
for I/O to ready for execution).
Representation of Process Scheduling
Queueing diagram represents queues, resources, flows
Schedulers
Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be
fast)
Long-term scheduler (or job scheduler) – selects which processes should
be brought into the ready queue
Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may
be slow)
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
CPU-bound process – spends more time doing computations; few very
long CPU bursts
Long-term scheduler strives for good process mix
Medium-Term Scheduling
Time-sharing systems can use a medium-term scheduler (MTS) to reduce the degree of multiprogramming.
The MTS is responsible for temporarily removing (swapping out) processes from memory (RAM) to secondary storage (disk) and bringing them back when needed (swapping).
Medium-Term Scheduling: Real-Time Scenario?
Scenario: Running Multiple Applications on a PC
You have opened Google Chrome, Photoshop, and a Game.
Your RAM is full, and the system is slowing down.
The OS swaps out Photoshop (since it's idle) to disk storage.
This frees up RAM for the game to run smoothly.
When you switch back to Photoshop, the MTS swaps it back into RAM.
Key Differences between Schedulers?
Context Switch
When CPU switches to another process, the system must save the state of the
old process and load the saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is overhead; the system does no useful work while
switching
The more complex the OS and the PCB ➔ the longer the context switch
Time dependent on hardware support
Some hardware provides multiple sets of registers per CPU ➔ multiple
contexts loaded at once
Operations on Processes
System must provide mechanisms for:
Process creation?
Process termination?
Process Creation
A parent process creates child processes, which, in turn, create other
processes, forming a tree of processes
Generally, process identified and managed via a process identifier (pid)
Resource sharing options
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution options
Parent and children execute concurrently
Parent waits until children terminate
A Tree of Processes in Linux
Process Creation (Cont.)
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork() system call creates new process
exec() system call used after a fork() to replace the process’
memory space with a new program
C Program Forking Separate Process
Note:
1. fork() return value in parent → child’s PID (> 0)
2. fork() return value in child → 0
3. Actual PID of child process → Always > 0 (from getpid())
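The program the notes above refer to is not reproduced on this slide; a minimal sketch of the classic fork()/exec()/wait() pattern on a UNIX-like system looks like this:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* error occurred */
        fprintf(stderr, "Fork failed\n");
        return 1;
    } else if (pid == 0) {              /* child: fork() returned 0 */
        execlp("/bin/ls", "ls", NULL);  /* replace memory image with ls */
    } else {                            /* parent: fork() returned child's pid */
        wait(NULL);                     /* wait for the child to terminate */
        printf("Child Complete\n");
    }
    return 0;
}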
Process Termination
Process executes last statement and then asks the operating system to
delete it using the exit() system call.
Returns status data from child to parent (via wait())
Process’ resources are deallocated by operating system
Parent may terminate the execution of children processes using the
abort() system call. Some reasons for doing so:
Child has exceeded allocated resources
Task assigned to child is no longer required
The parent is exiting and the operating system does not allow a child
to continue if its parent terminates
Process Termination
Some operating systems do not allow a child to exist if its parent has terminated: if a
process terminates, then all its children must also be terminated.
This is cascading termination: all children, grandchildren, etc. are terminated.
The termination is initiated by the operating system.
The parent process may wait for termination of a child process by using the
wait() system call. The call returns status information and the pid of the terminated
process
pid = wait(&status);
If no parent is waiting (did not invoke wait()), the process is a zombie
If the parent terminated without invoking wait(), the process is an orphan
An orphan process is a child process whose parent has exited before it finishes
The orphan process is NOT removed; instead, the init process (PID 1) adopts it
Process Termination: Zombie Process?
Note:
1. The child process starts and executes
exit(0); meaning it completes execution.
2. However, the OS does not immediately
remove the child process from the
process table.
3. The child’s exit status remains in the
process table because the parent has not
called wait() to collect it.
4. Until the parent reads the child's exit
status, the child stays as a zombie
process.
5. A zombie does not use CPU/memory but
clutters the process table
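A minimal sketch that produces a temporary zombie on a POSIX system (while the parent sleeps, ps -l shows the child in state Z):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        exit(0);     /* child terminates immediately */
    } else {
        sleep(30);   /* parent deliberately does not call wait();
                        for ~30 seconds the child remains a zombie */
    }
    return 0;
}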
Interprocess Communication
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes, including sharing data
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need interprocess communication (IPC)
Two models of IPC
Shared memory
Message passing
Communication Models
(a) Message passing. (b) Shared memory.
Cooperating Processes
Independent process cannot affect or be affected by the execution of
another process
Cooperating process can affect or be affected by the execution of another
process
Advantages of process cooperation
Information sharing
Computation speed-up
Modularity
Convenience
Producer-Consumer Problem
Paradigm for cooperating processes, producer process produces
information that is consumed by a consumer process
unbounded-buffer places no practical limit on the size of the
buffer
bounded-buffer assumes that there is a fixed buffer size
Real-World Example: FOOD-DELIVERY
Restaurant (Producer): Cooks food and places orders into
the system (buffer).
Order Queue (Buffer): Stores pending food orders, waiting
for delivery.
Delivery Partner (Consumer): Picks up food orders and
delivers them.
Bounded-Buffer – Shared-Memory Solution
Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
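Note: in points to the next free position in the buffer and out to the first full position; the buffer is empty when in == out and full when ((in + 1) % BUFFER_SIZE) == out, so this scheme can hold at most BUFFER_SIZE - 1 items.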
Bounded-Buffer – Producer
item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer
item next_consumed;
while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next consumed */
}
Interprocess Communication – Shared Memory
An area of memory shared among the processes that wish to communicate
The communication is under the control of the user processes, not the
operating system.
A major issue is to provide a mechanism that allows the user processes to
synchronize their actions when they access shared memory.
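As a concrete illustration, here is a minimal sketch using POSIX shared memory (the object name /demo_shm is illustrative; error checking omitted; link with -lrt on some systems):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    const size_t SIZE = 4096;

    /* create (or open) a shared-memory object and set its size */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, SIZE);

    /* map the object into this process's address space */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* any other process that maps "/demo_shm" sees this message */
    strcpy(ptr, "Hello from the writer");

    munmap(ptr, SIZE);
    close(fd);
    /* shm_unlink("/demo_shm") removes the object when no longer needed */
    return 0;
}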
Interprocess Communication – Message Passing
Mechanism for processes to communicate and to synchronize their actions
Message system – processes communicate with each other without
resorting to shared variables
IPC facility provides two operations:
send(message)
receive(message)
The message size is either fixed or variable
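One concrete realization of these two operations is POSIX message queues; a minimal sketch (the queue name /demo_queue is illustrative; link with -lrt on Linux):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void)
{
    /* queue attributes: at most 10 pending messages of up to 128 bytes */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDWR, 0666, &attr);

    char out[128] = "hello";
    mq_send(mq, out, strlen(out) + 1, 0);      /* send(message) */

    char in[128];
    mq_receive(mq, in, sizeof(in), NULL);      /* receive(message) */
    printf("received: %s\n", in);

    mq_close(mq);
    mq_unlink("/demo_queue");                  /* remove the queue */
    return 0;
}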
Message Passing (Cont.)
If processes P and Q wish to communicate, they need to:
Establish a communication link between them
Exchange messages via send/receive
Message Passing (Cont.)
Implementation of communication link
Physical:
Shared memory
Hardware bus
Network
Logical:
Direct or indirect
Synchronous or asynchronous
Automatic or explicit buffering
Direct Communication
Naming
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of communication link
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional
Indirect Communication
Messages are directed and received from mailboxes (also referred to as
ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
Link may be unidirectional or bi-directional
Indirect Communication
Operations
create a new mailbox (port)
send and receive messages through mailbox
destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Indirect Communication
Mailbox sharing
P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?
Solutions
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select the receiver arbitrarily. The sender is notified
who the receiver was.
Synchronization
Synchronisation ensures that multiple processes or threads coordinate
access to shared resources without conflicts
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send -- the sender is blocked until the message is received
Blocking receive -- the receiver is blocked until a message is available
Non-blocking is considered asynchronous
Non-blocking send -- the sender sends the message and continues
Non-blocking receive -- the receiver receives:
A valid message, or
Null message
Different combinations possible
If both send and receive are blocking, we have a rendezvous
Buffering
Messages exchanged by communicating processes reside in a temporary
queue implemented in one of three ways
Zero capacity – no messages are queued on a link. The sender must wait for
the receiver (rendezvous)
Bounded capacity – finite length of n messages. The sender must wait if the
link is full
Unbounded capacity – infinite length. The sender never waits
Pipes
Acts as a conduit allowing two processes to communicate
Issues:
Is communication unidirectional or bidirectional?
In the case of two-way communication, is it half or full-duplex?
Must there exist a relationship (i.e., parent-child) between the
communicating processes?
Can the pipes be used over a network?
Ordinary pipes – cannot be accessed from outside the process that
created them. Typically, a parent process creates a pipe and uses it to
communicate with a child process that it created.
Named pipes – can be accessed without a parent-child relationship.
Ordinary Pipes
Ordinary Pipes allow communication in standard producer-consumer style
Producer writes to one end (the write-end of the pipe)
Consumer reads from the other end (the read-end of the pipe)
Ordinary pipes are therefore unidirectional
Require parent-child relationship between communicating processes
Windows calls these anonymous pipes
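A minimal sketch of an ordinary pipe between parent and child, close to the classic UNIX example:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define BUFFER_SIZE 25

int main(void)
{
    char write_msg[BUFFER_SIZE] = "Greetings";
    char read_msg[BUFFER_SIZE];
    int fd[2];                /* fd[0] is the read end, fd[1] the write end */

    if (pipe(fd) == -1) {
        fprintf(stderr, "Pipe failed\n");
        return 1;
    }

    if (fork() == 0) {        /* child acts as the consumer */
        close(fd[1]);         /* close unused write end */
        read(fd[0], read_msg, BUFFER_SIZE);
        printf("read %s\n", read_msg);
        close(fd[0]);
    } else {                  /* parent acts as the producer */
        close(fd[0]);         /* close unused read end */
        write(fd[1], write_msg, strlen(write_msg) + 1);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}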
Named Pipes
Named Pipes are more powerful than ordinary pipes
Communication is bidirectional
No parent-child relationship is necessary between the communicating
processes
Several processes can use the named pipe for communication
Provided on both UNIX and Windows systems
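On UNIX, a named pipe is a FIFO in the filesystem; a minimal writer sketch (the path /tmp/demo_fifo is illustrative, and any process, e.g. cat /tmp/demo_fifo, can act as the reader):

#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    /* create a FIFO visible in the filesystem; no parent-child
       relationship is needed between writer and reader */
    mkfifo("/tmp/demo_fifo", 0666);

    int fd = open("/tmp/demo_fifo", O_WRONLY);  /* blocks until a reader opens */
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}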
Multicore Programming
Multicore or multiprocessor systems are putting pressure on programmers;
challenges include:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Parallelism implies a system can perform more than one task
simultaneously
Concurrency supports more than one task making progress
Single processor / core, scheduler providing concurrency
Multicore Programming (Cont.)
Types of parallelism
Data parallelism – distributes subsets of the same data across
multiple cores, same operation on each
Task parallelism – distributing threads across cores, each thread
performing unique operation
As number of threads grows, so does architectural support for threading
CPUs have cores as well as hardware threads
Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per
core
Concurrency vs. Parallelism
Concurrent execution on single-core system
Concurrency = The ability to handle multiple tasks at the same time (but
not necessarily executing them at the same time).
Parallelism on a multi-core system:
Parallelism = Multiple tasks running simultaneously on different CPU
cores.
Concurrency vs. Parallelism
Amdahl’s Law
Identifies performance gains from adding additional cores to an application
that has both serial and parallel components:
speedup ≤ 1 / (S + (1 − S) / N)
where S is the serial portion and N is the number of processing cores
That is, if an application is 75% parallel / 25% serial, moving from 1 to 2
cores gives speedup = 1 / (0.25 + 0.75 / 2) = 1.6 times
Multithreaded Server Architecture
Benefits of multithreaded programming
Responsiveness – may allow continued execution if part of process
is blocked, especially important for user interfaces
Resource Sharing – threads share resources of process, easier
than shared memory or message passing
Economy – cheaper than process creation, thread switching lower
overhead than context switching
Scalability – process can take advantage of multiprocessor
architectures
Single and Multithreaded Processes
A thread is a basic unit of CPU utilization.
If a process has multiple threads of control, it can perform more than one
task at a time.
Each thread shares the process’s code section, data section, and other
operating-system resources, such as open files and signals, with the other threads
Multithreading Scenario: Playing a Game?
The main game engine acts as the main thread or process that
coordinates everything.
User Threads and Kernel Threads
User threads - Created and managed by user-space thread libraries.
The operating system kernel is unaware of these threads, and they are
scheduled in user space by the thread library.
Faster to create and switch (no system calls required).
If one thread blocks (e.g., waiting for I/O), the entire process may get
blocked.
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
Who creates them? The application itself using a thread library.
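A minimal Pthreads sketch, close to the classic summation example (compile with -lpthread):

#include <pthread.h>
#include <stdio.h>

int sum = 0;                    /* shared by the threads of this process */

void *runner(void *param)       /* thread entry point: sums 1..upper */
{
    int upper = *(int *)param;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(void)
{
    int upper = 10;
    pthread_t tid;

    pthread_create(&tid, NULL, runner, &upper);  /* create the thread */
    pthread_join(tid, NULL);                     /* wait for it to finish */
    printf("sum = %d\n", sum);
    return 0;
}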
User Threads and Kernel Threads
Kernel threads - Supported by the Kernel
Each kernel thread is recognised by the OS scheduler.
Can take advantage of multi-core processors (true parallelism).
If one thread blocks, other threads in the process can still run.
Examples – virtually all general purpose operating systems, including:
Windows
Solaris
Linux
Tru64 UNIX
Mac OS X
Who creates them? The operating system kernel; on Linux, a library call such
as pthread_create() is ultimately serviced by the kernel’s clone() system call.
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Many user-level threads mapped to single
kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a
multicore system because only one may be
in the kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted
due to overhead
Examples
Windows
Linux
Solaris 9 and later
Many-to-Many Model
Allows many user level threads to be
mapped to many kernel threads
Allows the operating system to create a
sufficient number of kernel threads
Solaris prior to version 9
Windows with the ThreadFiber package
Two-level Model
Similar to M:M, except that it allows a user thread to be
bound to a kernel thread
Examples
IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier
Thread Cancellation?
Thread cancellation involves terminating a thread before it has completed.
A thread that is to be canceled is often referred to as the target thread.
Two different ways of cancellation:
Asynchronous cancellation: One thread immediately terminates the
target thread.
Deferred cancellation: The target thread periodically checks whether it
should terminate, allowing it an opportunity to terminate itself in an
orderly fashion.
Main challenge?
What happens to resources that have been allocated to a canceled thread, or
to shared data when a thread is canceled in the middle of updating it?
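A sketch of deferred cancellation with Pthreads (pthread_testcancel() marks a safe cancellation point):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg)
{
    while (1) {
        pthread_testcancel();   /* deferred cancellation: the target thread
                                   checks here whether it should terminate */
        sleep(1);               /* sleep() is itself a cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    sleep(2);
    pthread_cancel(tid);        /* request cancellation of the target thread */
    pthread_join(tid, NULL);    /* reclaim the canceled thread */
    printf("worker canceled\n");
    return 0;
}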
Summary
Process States
Process Schedulers
Process Operations
Process Communication and Synchronisation
Threads and Thread Models