
Process Management and Threads
Chapter Two

Compiled by Abel H. abelhailu320@gmail.com


Introduction
 Originally, computers could run only one program at a time,
and that program had total access to all of the computer's resources.
Modern systems, in contrast, run many programs concurrently, with
the operating system also running tasks of its own. Multitasking,
multiuser systems must therefore distinguish between the different
programs being executed by the system. This is accomplished with
the help of the process abstraction.
 The process manager is one of the major parts of an operating
system.
Cont.
➢ A process is basically a program in execution. The execution
of a process must progress in a sequential fashion.
➢ A process is defined as an entity which represents the basic
unit of work to be implemented in the system.
➢ When a program is loaded into memory it becomes a
process, which can be divided into four sections.


Cont.
1. Stack: The process stack contains temporary data such
as method/function parameters, return addresses and local
variables.
2. Heap: This is memory dynamically allocated to a process
during its run time.
3. Text: This includes the compiled program code; the current
activity is represented by the value of the Program Counter and
the contents of the processor's registers.
4. Data: This section contains the global and static
variables.
Process state

 As a process executes, it changes state


1. new: The process is being created
2. running: Instructions are being executed
3. waiting: The process is waiting for some event to occur
4. ready: The process is waiting to be assigned to a processor
5. terminated: The process has finished execution



Process state
The state diagram has the following transitions:
New → Ready (admitted); Ready → Running (scheduler dispatch);
Running → Ready (interrupt); Running → Blocked (I/O or event wait);
Blocked → Ready (I/O or event completion); Running → Terminated (exit).
Two State Process Model
 The two-state process model refers to the running and non-running
states, described below.
 1. Running: when a new process is created by the operating system,
it enters the system in the running state.
 2. Non-Running: processes that are not running are kept in a
queue, waiting for their turn to execute. Each entry in the
queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows:
when a process is interrupted, it is transferred to the
waiting queue; if the process has completed or aborted, it is
discarded. In either case, the dispatcher then
selects a process from the queue to execute.
Process hierarchy
✓ In some systems, when a process creates another process, the parent
process and child process continue to be associated in certain ways.
✓ The child process can itself create more processes, forming a process
hierarchy.
✓ Note that unlike plants and animals that use sexual reproduction, a
process has only one parent (but zero, one, two, or more children).
✓ In contrast, Windows does not have any concept of a process hierarchy:
all processes are equal. The only place where there is something like a
process hierarchy is that when a process is created, the parent is given a
special token called a handle, which it can use to control the child.
However, the parent is free to pass this token to some other process, thus
invalidating the hierarchy.
✓ Processes in UNIX, by contrast, cannot disinherit their children.


Context switch
✓ Whenever an interrupt occurs, the currently running process needs to
move to the waiting state, and the system has to save its context so
that the process can be restored from the same point when processing
resumes.
✓ Switching the CPU to another process, by saving the state of the
current process and restoring the state of a different process, is
called context switching.


Threads
✓ A thread is a flow of execution through the process code, with
its own program counter, system registers and stack. A thread is
also called a lightweight process.
✓ Each thread belongs to exactly one process, and no thread
can exist outside a process.
✓ Threads also provide a suitable foundation for parallel execution
of applications on shared-memory multiprocessors.


Process Vs Thread
No. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Concurrency
 Imagine a web server, which might like to handle multiple
requests concurrently:
While waiting for the credit card server to approve a purchase for one
client, it could be retrieving the data requested by another client from
disk, and assembling the response for a third client from cached
information.
 Imagine a web browser, which might like to initiate multiple
requests concurrently:
While the browser displays images or text, it retrieves data from the
network.
 A word processor, for example, might be displaying graphics,
responding to keystrokes from the user, and performing spelling and
grammar checking in the background.
What's needed?
In each of these examples of concurrency (web server, web
browser, word processor):
Everybody wants to run the same code
Everybody wants to access the same data
Everybody has the same privileges (most of the time)
Everybody uses the same resources (open files, network
connections, etc.)
But you'd like to have multiple hardware execution states:
an execution stack and stack pointer (SP), tracing the state of
procedure calls made
the program counter (PC), indicating the next instruction
a set of general-purpose processor registers and their values
How could we achieve this?
 Given the process abstraction as we know it:
fork several processes
 This is really inefficient!!
Resource intensive → e.g. space: PCB, page tables, etc.
Time consuming → creating OS structures, forking and
copying the address space, etc.
 So any support that the OS can give for doing
multi-threaded programming is a win.
Single-Threaded Example
 Imagine the following C program:

main() {
  ComputePI("pi.txt");
  PrintClassList("clist.text");
}

 What is the behavior here?
The program would never print out the class list, because "ComputePI"
would never finish.
Use of Threads
 Version of the program with threads:

main() {
  CreateThread(ComputePI("pi.txt"));
  CreateThread(PrintClassList("clist.text"));
}

 What does "CreateThread" do?
It starts an independent thread running the given procedure.
 What is the behavior here?
Now you would actually see the class list.
This should behave as if there are two separate CPUs.
Multithreaded server architecture
(1) A client sends a request; (2) the server creates a new thread to
service the request; (3) the server resumes listening for additional
client requests.
Threads and processes
 Most modern OSs (NT, modern UNIX, etc.)
therefore support two entities:
the process, which defines the address space and
general process attributes (such as open files, etc.)
the thread, which defines a sequential execution stream
within a process
 A thread:
is a basic unit of CPU utilization; it comprises a thread ID, PC, a
register set, and a stack.
shares with the other threads belonging to the same process its code
and data sections, and other OS resources (e.g. open files and
signals).
 Threads of the same process are not protected from each other.
Single and Multithreaded Processes
Process address space: in a single-threaded process, the address space
holds one stack (with its SP), a heap (dynamically allocated memory),
static data (data segment) and code (text segment, with one PC). In a
multithreaded process, the heap, static data and code are shared, but
each thread has its own stack and SP (T1, T2, T3) and its own program
counter (PC per thread).
Benefits of multithreading
 Responsiveness:
• A multithreaded interactive application can continue running
even if part of it is blocked or performing a lengthy operation, thereby
increasing responsiveness to the user.
 Resource sharing (code, data, files):
• Threads share the memory and resources of the process to which they belong
by default.
• Sharing data between threads is cheaper than between processes → all see
the same address space.
 Economy:
• Creating and destroying threads is cheaper than processes.
• Context switching between threads is also cheaper.
• It's much easier to communicate between threads.
Benefits of multithreading
 Scalability:
• The benefits of multithreading can be greatly increased on multiprocessor
systems.
• Threads may be running in parallel on different processors.
Multicore Programming
On a single-core system, concurrency means that the execution of
threads will be interleaved over time – executing only one thread at a
time.

Parallel execution for threads on a multi-core system.


User and Kernel Threads
User threads:
are visible to the programmer and unknown to the kernel.
thread management done by user-level threads library, without kernel
support.

Kernel threads:
Most OS kernels are multi-threaded.
Several threads operate in the kernel, each performing a specific task.
Ex: managing devices, interrupt handling.
Supported and managed directly by the Kernel.
Examples: Windows XP/2000, Solaris, Linux, Tru64 UNIX, Mac OS X.

User-level threads are faster to create and manage than are kernel threads.
Why?
Because no intervention from the kernel is required.
Multithreading Models

 A relationship must exist between user threads and kernel threads,
established in one of three ways:

▪ Many-to-One

▪ One-to-One

▪ Many-to-Many
Many-to-One
Many user-level threads mapped to single kernel thread:
Thread management is done by the thread library in user space →
efficient.
The entire process will block if a thread makes a blocking system call.
Because only one thread can access the kernel at a time, multiple
threads are unable to run in parallel on multiprocessors.

Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One
Each user-level thread maps to a kernel thread.
Adv: allows another thread to run when a thread makes a blocking system
call → more concurrency.
Adv: allows multiple threads to run in parallel on multiprocessors.
Dis: creating a user thread requires creating a corresponding kernel
thread → this can burden application performance.
Examples
Windows NT/XP/2000
Linux
Solaris 9 and later
Many-to-Many Model
Allows many user level threads to be mapped to many kernel threads
Does not suffer from the shortcomings of the previous two models. How? read
P159
Solaris prior to version 9
Two-level Model
Similar to M:M, except that it allows a user thread to be bound to kernel
thread
Examples
IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier
Thread Libraries
Thread library provides programmer with API for creating and managing threads

Two primary ways of implementing a thread library:


Library entirely in user space (all code and data structures for the library in
user space)
Invoking a function in the API → a local function call in user space,
not a system call.
Kernel-level library supported by the OS (all code and data structures for the
library in kernel space)
Invoking a function in the API → a system call to the kernel.

Three primary thread libraries:


POSIX Pthreads (maybe KL or UL), common in UNIX operating systems
Win32 threads (KL), in Windows systems.
Java threads (UL), in JVM.
CPU scheduling concepts
 CPU and I/O Bound
✓ Some processes spend most of their time computing (CPU-bound
processes), while others spend most of their time
waiting for I/O (I/O-bound processes).
✓ CPU-bound processes typically have long CPU bursts and thus
infrequent I/O waits.
✓ Whereas I/O-bound processes have short CPU bursts and thus
frequent I/O waits.
✓ Note that for both CPU-bound and I/O-bound processes, the key
factor is the length of the CPU burst, not the length of the
I/O burst.
CPU scheduling concepts
 Process schedulers
✓ The scheduler is the component of an operating system that
determines which process should be run and when.
✓ CPU scheduling is the process of selecting a process and
allocating the processor to it for execution.
✓ There are 3 distinct types of schedulers: a long-term
scheduler (also known as an admission scheduler or high-level
scheduler), a mid-term or medium-term scheduler, and a
short-term scheduler (also known as a dispatcher).
Characteristics of a perfect CPU scheduler
❖ Minimum latency: response or job completion time.
❖ Maximum throughput: maximum jobs per unit time.
❖ Maximum utilization: keep the CPU and I/O devices busy.
❖ Fairness: everyone makes progress, no one starves.


Types of schedulers
 1. Long Term Scheduler (Job scheduler)
✓ The long-term scheduler decides which jobs or processes are to be
admitted to the ready queue; this is an infrequent decision.
✓ When an attempt is made to execute a program, its admission to
the set of currently executing processes is either authorized or
delayed by the long-term scheduler.
✓ The job scheduler selects processes from the queue and loads them
into memory for execution.
✓ The primary objective of the job scheduler is to provide a
balanced mix of jobs, such as I/O-bound and processor-bound.
Types of schedulers
 2. Short Term Scheduler (CPU scheduler)
✓ Its main objective is to increase system performance in accordance
with the chosen set of criteria.
✓ It carries out the change from the ready state to the running state
of a process.
✓ The CPU scheduler selects a process from among the processes that
are ready to execute and allocates the CPU to it.
✓ The short-term scheduler, also known as the dispatcher, executes
most frequently and makes the fine-grained decision of which process
to execute next.
Types of schedulers
 3. Medium term scheduler
✓ Medium-term scheduling is part of swapping.
✓ It removes processes from memory.
✓ The medium-term scheduler is in charge of handling the
swapped-out processes.
✓ Suspended processes cannot make any progress towards
completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process
is moved to secondary storage. This is called swapping.
Comparison between Schedulers
S.N | Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory, and execution can be continued.
Process Scheduling Algorithms
 Many algorithms exist to schedule the CPU. We need some
criteria to select the algorithm that suits the circumstances.
Some of the major criteria are:
 1. CPU utilization = keep the CPU as busy as possible.
 2. Throughput = number of processes that complete their execution per
time unit.
 3. Turnaround time = amount of time to execute a particular process.
 4. Waiting time = amount of time a process has been waiting in the
ready queue.
 5. Response time = amount of time from when a request was
submitted until the first response is produced (not the final output).
Process Scheduling Algorithms
 Scheduling is the method by which threads, processes or data flows
are given access to system resources. Scheduling can be divided
into the following categories.
 1. Non-Preemptive: a non-preemptive scheduling algorithm picks
a process to run and then just lets it run until it blocks (either on
I/O or waiting for another process) or until it voluntarily releases
the CPU. Even if it runs for hours, it will not be forcefully
suspended.
 2. Preemptive: this picks a process and lets it run for a maximum of
some fixed time. If it is still running at the end of the time interval,
it is suspended and the scheduler picks another process (if one is
available).
Various scheduling algorithms
➢ First Come First Serve (FCFS) Scheduling
➢ Shortest-Job-First (SJF) Scheduling
➢ Round Robin (RR) Scheduling
➢ Priority Scheduling
➢ Multilevel Queue Scheduling
➢ Real-time scheduling
➢ Dynamic scheduling algorithms (RMA, EDF, …)
➢ …
Some acronyms
➢ C.T => Completion Time = the time at which a process completes executing.
➢ A.T => Arrival Time = the time at which a process requests the CPU.
➢ T.A.T => Turnaround Time = total time from requesting the CPU until
the process finishes executing.
➢ W.T => Waiting Time = amount of time a process has been waiting
in the ready queue.
➢ B.T => Burst Time = the execution time of a process.
➢ R.T => Response Time = the time of the first response for the process.
➢ A.W.T => Average Waiting Time
➢ A.T.A.T => Average Turnaround Time


Real-time scheduling
A real-time system is a computer system that must satisfy
bounded response-time constraints or risk severe consequences,
including failure. But what is a "failed" system? In the case of the
space shuttle or a nuclear power plant, for example, it is painfully
obvious when a failure has occurred. For other systems, such as an
automatic bank teller machine, the notion of failure is less
obvious. For now, failure will be defined as the "inability of the
system to perform according to system specification."
 So in general, real-time systems have two essential characteristics:
1. An RS must produce correct computation results, called logical or
functional correctness.
Cont.
2. Those computations (executions of tasks) must conclude within a
predefined period, called timing correctness.
A real-time system is one whose logical correctness is based on both
the correctness of the functions (algorithms) and time constraints.
Response Time
The time between the presentation of a set of inputs to a system and
the realization of the required behavior, including the availability of
all associated outputs, is called the response time of the system. How
fast and punctual the response time needs to be depends on the
characteristics and purpose of the specific system. The previous
definitions set the stage for a practical definition of a real-time
system.
Cont.
Failed System
A failed system is a system that cannot satisfy one or more of the
requirements stipulated in the system requirements specification.
 This means something catastrophic occurs if a deadline is not met.
Types of Real-time systems
Consider a system consisting of a set of tasks, T = {τ1, τ2, ..., τn},
where the finishing time of each task τi ∈ T is Fi. The system is said
to be real-time if there exists at least one task τi ∈ T which falls
into one of the following categories.


Types of Real-Time systems
1. Hard real-time system: a hard real-time system is one in which
failure to meet even a single deadline may lead to complete or
catastrophic system failure.
❑ Task τi is a hard real-time task. That is, the execution of the task τi
must be completed by a given deadline di; i.e., Ci ≤ di.
2. Soft real-time system: a soft real-time system is one in which
performance is degraded but not destroyed by failure to meet
response-time constraints. Conversely, systems where failure to
meet response-time constraints leads to complete or catastrophic
system failure are called hard real-time systems.
Cont.
❑ Task τi is a soft real-time task. That is, the later the task τi finishes its
computation after a given deadline di, the more penalty it pays. A
penalty function P(τi) is defined for the task. If Ci ≤ di, the penalty
function P(τi) is zero; otherwise P(τi) > 0. The value of P(τi) is an
increasing function of Ci − di.
3. Firm real-time system: one whose tasks are such that the sooner they
finish their computations before their deadlines, the more reward they get.
Task τi is a firm real-time task. That is, the earlier the task τi finishes
its computation before a given deadline di, the more reward it gets. A reward
function R(τi) is defined for the task. If Ci ≥ di, the reward function
R(τi) is zero; otherwise R(τi) > 0.
Rate monotonic scheduling algorithm
 The rate monotonic algorithm is a preemptive algorithm based on static
priorities: it assigns fixed priorities based on task periods. Here, the
task period is the time after which the task repeats, and the inverse of
the period is the task arrival rate.
E.g. Given:
Process No | Capacity | Period
p1 | 3 | 20
p2 | 2 | 5
p3 | 2 | 10
The process with the least period is given the highest priority, then
the process with the second least period, and so on.
Solution
[The Gantt-chart solution figure is not included in this extract.]


Earliest Deadline First
 The EDF algorithm is an optimal dynamic preemptive algorithm based on
dynamic priorities. After any significant event, the task with the earliest
deadline is assigned the highest dynamic priority. A significant event in a
system can be blocking of a task, invocation of a task, completion of a
task, etc. Processor utilization can reach 100% with EDF, even when the task
periods are not multiples of the smallest period. The dispatcher operates in
the same way as the dispatcher for the rate monotonic algorithm.
 E.g. Given:
Process No | Capacity | Deadline | Period
P1 | 3 | 7 | 20
P2 | 2 | 4 | 5
P3 | 2 | 8 | 10
Solution
[The Gantt-chart solution figure is not included in this extract.]


Cont.
 The processes must finish their execution before reaching the red
bar (deadline), and each repeats its execution every time it reaches
the green bar (period). The scheduling algorithm repeats its cycle
every time one execution ends on the green bar.


End of chapter two (2)