Lecture 5

The document covers various aspects of operating systems, focusing on threads and processes, including their definitions, advantages, and lifecycle. It discusses user-level and kernel-level threads, multithreading models, CPU scheduling techniques, and performance metrics for scheduling algorithms. Additionally, it highlights the differences between multithreading and multitasking, as well as the importance of effective CPU scheduling in optimizing resource utilization.


OPERATING SYSTEM: CSET209

QUIZ

Which of the following need not necessarily be saved on a context switch between processes?

(A) General purpose registers
(B) Translation look-aside buffer
(C) Program counter
(D) All of the above

CONTENTS COVERED

• Threads in Operating System
• Kernel vs User-level threads
• Process vs Threads
• Multithreading
• CPU scheduling concepts, metrics, methods
PROCESS QUEUE IN OPERATING SYSTEM
THREADS IN OPERATING SYSTEM

Definition: A thread is the smallest unit of processing that can be performed in an Operating System (OS).
Example: MS Word uses many threads - formatting text in one thread, processing keyboard input in another thread, etc.
Advantages of threads over processes:
 It takes far less time to create a new thread in an existing process than to create a new process.
 Threads can share common data, so they do not need to use Inter-Process Communication.
 Context switching is faster when working with threads.
 It takes less time to terminate a thread than a process.
Why a thread is called a lightweight process: Each thread has its own registers and stack. However, threads share code, data, and open files, i.e. the threads within a process share the same address space. Threads provide a way to improve application performance through parallelism. A minimal sketch illustrating this shared address space is shown below.
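As a hedged aside that is not part of the original slides: the following minimal C sketch uses POSIX threads (named above as an example of user threads) to show two threads updating the same global variable. The variable name, thread count, and iteration count are arbitrary choices made for illustration.

/* Minimal POSIX threads sketch: two threads share the process's global
 * data, so no inter-process communication is needed.
 * Illustrative compile line: gcc demo.c -o demo -lpthread */
#include <pthread.h>
#include <stdio.h>

#define INCREMENTS 100000

static long shared_counter = 0;                      /* lives in the shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < INCREMENTS; i++) {
        pthread_mutex_lock(&lock);                   /* protect the shared data */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Creating threads is cheaper than fork(): no new address space is built. */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);

    pthread_join(t1, NULL);                          /* wait for both threads to terminate */
    pthread_join(t2, NULL);

    /* Both threads updated the same variable, so the total is 2 * INCREMENTS. */
    printf("shared_counter = %ld\n", shared_counter);
    return 0;
}

Because both threads live in the same address space, no inter-process communication is needed; the mutex is only there to keep the concurrent increments consistent.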
IMPLEMENTATION LEVELS OF THREADS
1. User Level Threads − The operating system does not recognize user-level threads. User threads are easy to implement, and they are implemented in user space by a threads library. Examples: Java threads, POSIX threads, etc.

2. Kernel Level Threads − The kernel-level thread is implemented by the operating system. The kernel knows about all the threads and manages them. Examples: Windows, Solaris.

User threads are mapped to kernel threads by the threads library. The way this mapping is done is called the thread model.
USER AND KERNEL LEVEL THREAD
Advantages of User Level Threads
• Thread switching does not require kernel mode privileges.
• User-level threads can run on any operating system and are fast to create and manage.
Disadvantages of User Level Threads
• In a typical operating system, most system calls are blocking, so one blocking call blocks the whole process.
• A multithreaded application cannot take advantage of multiprocessing.
• If a thread causes a page fault, the entire process is blocked.
USER AND KERNEL LEVEL THREAD
Advantages of Kernel Level Threads
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
Disadvantages of Kernel Level Threads
• Kernel threads are generally slower to create and manage than user threads.
LIFE CYCLE OF A THREAD

There are various stages in the lifecycle of a thread. Following are the stages a thread goes through in its whole life (an illustrative sketch of these transitions follows the list).

• New: The lifecycle of a newly created thread starts in this state. It remains in this state until the program starts the thread.
• Runnable: A thread becomes runnable after it is started. A thread in this state is considered to be executing the task given to it.
• Waiting: While waiting for another thread to perform a task, the currently running thread moves into the waiting state and transitions back after receiving a signal from the other thread.
• Timed Waiting: A runnable thread enters this state for a specific time interval and transitions back when the time interval expires or the event the thread was waiting for occurs.
• Terminated (Dead): A thread enters this state after completing its task.
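The state names above follow Java-style terminology; purely as an illustrative sketch (an assumption, not part of the slides), the POSIX calls below are mapped onto roughly the same states: pthread_create for New/Runnable, sleep for Timed Waiting, pthread_join for Waiting, and returning from the thread function for Terminated.

/* Illustrative pthread sketch of the lifecycle states described above.
 * The mapping of POSIX calls to the named states is an assumption made
 * for teaching purposes. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *task(void *arg)
{
    /* Runnable: the thread is executing the task given to it. */
    puts("worker: running");

    /* Timed Waiting: the thread blocks for a fixed interval, then
     * transitions back to runnable when the interval expires. */
    sleep(1);

    puts("worker: done");
    return NULL;           /* Terminated (Dead): the task is complete. */
}

int main(void)
{
    pthread_t worker;

    /* New: the thread exists once pthread_create returns; the library
     * also makes it runnable immediately. */
    pthread_create(&worker, NULL, task, NULL);

    /* Waiting: main blocks here until the worker terminates, then
     * transitions back and continues. */
    pthread_join(worker, NULL);

    puts("main: worker has terminated");
    return 0;
}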
BENEFITS OF THREADS
 Enhanced throughput of the system: When a process is split into many threads, and each thread is treated as a job, the number of jobs completed per unit time increases, and so the throughput of the system increases.
 Effective utilization of a multiprocessor system: When you have more than one thread in one process, you can schedule more than one thread on more than one processor.
 Faster context switch: The context switching time between threads is less than for processes; a process context switch means more overhead for the CPU.
 Responsiveness: When a process is split into several threads, and one thread completes its execution, its result can be returned right away, so the application remains responsive.
 Communication: Communication between multiple threads is simple because the threads share the same address space, whereas communication between two processes must go through dedicated inter-process communication mechanisms.
 Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared between threads; each thread has its own stack and registers.
PROCESS VS THREADS
 A thread is an execution unit that is part of a process. A process can have multiple threads, all executing at the same time.
Parameter | Process | Thread
Definition | A process is a program in execution. | A thread is a segment of a process.
Lightweight | Processes are heavyweight. | Threads are lightweight.
Termination time | A process takes more time to terminate. | A thread takes less time to terminate.
Creation time | It takes more time to create. | It takes less time to create.
Communication | Communication between processes needs more time compared to threads. | Communication between threads requires less time compared to processes.
Context switching time | It takes more time for context switching. | It takes less time for context switching.
Resource | Processes consume more resources. | Threads consume fewer resources.
Sharing | Processes do not share data with each other. | Threads share data with each other.
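To make the "creation time" row concrete, here is a rough, hedged micro-benchmark sketch comparing fork() with pthread_create(). Timing a single creation like this is noisy and is a simplification chosen only for illustration, not a rigorous measurement.

/* Rough sketch comparing process creation (fork) with thread creation
 * (pthread_create). Intended only to illustrate the comparison in the
 * table above, not to benchmark. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static void *noop(void *arg) { return NULL; }        /* thread body: do nothing */

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec start, end;

    /* Create and reap one child process. */
    clock_gettime(CLOCK_MONOTONIC, &start);
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);                                     /* child exits immediately */
    waitpid(pid, NULL, 0);
    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("fork + wait:           %.3f ms\n", elapsed_ms(start, end));

    /* Create and join one thread in the existing process. */
    pthread_t t;
    clock_gettime(CLOCK_MONOTONIC, &start);
    pthread_create(&t, NULL, noop, NULL);
    pthread_join(t, NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("pthread_create + join: %.3f ms\n", elapsed_ms(start, end));

    return 0;
}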
MULTITHREADING IN OS
 Multithreading allows an application to divide its task into individual threads. In multithreading, the same process or task can be handled by a number of threads; in other words, there is more than one thread to perform the task. With the use of multithreading, multitasking can be achieved.
 The main drawback of single-threaded systems is that only one task can be performed at a time. Multithreading overcomes this drawback by allowing multiple tasks to be performed concurrently.

Example: client1, client2, and client3 access the web server without any waiting, because in multithreading several tasks can run at the same time (a minimal sketch of this one-thread-per-client idea follows).
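Below is a minimal sketch of the one-thread-per-client idea from the example, assuming the "clients" are simulated in-process; a real web server would accept network connections instead, and the client count and sleep-based "work" are invented for illustration.

/* Sketch of one thread per client: each simulated client is handled by
 * its own thread, so no client waits for another to finish. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_CLIENTS 3

static void *handle_client(void *arg)
{
    int id = *(int *)arg;
    printf("client%d: request received\n", id);
    sleep(1);                                 /* pretend to do the work */
    printf("client%d: response sent\n", id);
    return NULL;
}

int main(void)
{
    pthread_t handlers[NUM_CLIENTS];
    int ids[NUM_CLIENTS];

    /* All three clients are served concurrently rather than one after
     * another, which is the point of the multithreaded server example. */
    for (int i = 0; i < NUM_CLIENTS; i++) {
        ids[i] = i + 1;
        pthread_create(&handlers[i], NULL, handle_client, &ids[i]);
    }
    for (int i = 0; i < NUM_CLIENTS; i++)
        pthread_join(handlers[i], NULL);

    return 0;
}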
3 TYPES OF MULTITHREADING MODELS
1. Many-to-one multithreading model: The many-to-one model maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space. Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multicore systems. Very few systems use this model.
2. One-to-one multithreading model: The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship facilitates the running of multiple threads in parallel, but the developer has to be careful not to create too many threads. E.g. Linux and Windows implement this model.
3. Many-to-many multithreading model: In this type of model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends upon the particular application. The developer can create any number of threads at both levels, but the numbers need not be the same.
MULTITHREADING VS MULTITASKING

Feature | Multithreading | Multitasking
Definition | Running multiple threads within a single program simultaneously. | Running multiple programs or tasks concurrently.
Example | A web browser loading a page, handling user input, and downloading files simultaneously. | Listening to music, browsing the web, and typing a document at the same time.
Scope | Within a single program. | Across multiple programs.
Resource Use | Utilizes CPU resources more efficiently within a program. | Manages system resources to allocate time and memory to different programs.
Purpose | Enhances the performance and responsiveness of a single application. | Improves overall system efficiency by allowing concurrent execution of multiple programs.
Switching | Threads are managed by the program itself. | Programs are managed by the operating system, which switches between them.
CPU SCHEDULING

 In multiprogramming systems, the operating system schedules processes on the CPU to achieve maximum utilization of it, and this procedure is called CPU scheduling. Scheduling is selecting the process to be executed next on the CPU. The operating system uses various scheduling algorithms to schedule the processes.
 It is the task of the short-term scheduler to schedule the CPU for the processes present in the job pool. Whenever the running process requests some I/O operation, the short-term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting. While the process is in the waiting state, the short-term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.

Why do we need Scheduling?

 In multiprogramming, if the long-term scheduler picks mostly I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
 If most of the running processes change their state from running to waiting, there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs to get optimal utilization of the CPU and to avoid the possibility of deadlock.
WHEN IS CPU SCHEDULING DONE?

• CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state. For example, as the result of an I/O request, or an invocation of wait() for the termination of a child process.
2. When a process switches from the running state to the ready state. For example, when
an interrupt occurs.
3. When a process switches from the waiting state to the ready state. For example,
completion of I/O.
4. When a process terminates.
TYPES OF CPU SCHEDULING
There are two kinds of CPU Scheduling techniques:
 1. Non-preemptive scheduling: Here, when a process is allocated the CPU, it keeps the processor until it releases it voluntarily, either by terminating or by moving to the waiting state.
2. Preemptive scheduling: In preemptive scheduling, the CPU can be taken back from the process at any time during its execution.
When is scheduling preemptive or non-preemptive?
To determine whether scheduling is preemptive or non-preemptive, consider these four conditions:
1. A process switches from the running state to the waiting state.
2. A process switches from the running state to the ready state.
3. A process switches from the waiting state to the ready state.
4. A process finishes its execution and terminates.
 If scheduling takes place only under conditions 1 and 4, it is non-preemptive; all other scheduling is preemptive.
METRICS TO MEASURE THE PERFORMANCE OF A CPU SCHEDULING ALGORITHM

Many criteria have been suggested for comparing CPU scheduling algorithms, including the following (a small worked example computing these metrics follows the list):
 CPU utilization - The CPU should be kept as busy as possible.
 Throughput - The number of processes completed per unit time is called throughput.
 Turnaround time - The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
 Waiting time - Waiting time is the sum of the periods spent waiting in the ready queue.
 Response time - Response time is the time from the submission of a request until the first response is produced.
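As a worked illustration (the process set and burst times are invented, and all arrivals are assumed to be at time 0), the sketch below computes waiting time and turnaround time for three jobs under First Come First Serve, the first algorithm listed on the next slide.

/* Sketch: waiting time and turnaround time for three jobs scheduled
 * First Come First Serve. Burst times are made up for illustration;
 * all jobs are assumed to arrive at time 0. */
#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};                 /* CPU bursts of P1, P2, P3 */
    int n = 3;
    int completion = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int waiting = completion;             /* time spent in the ready queue */
        completion += burst[i];               /* job finishes after its burst */
        int turnaround = completion;          /* arrival at 0, so TAT = completion time */

        total_wait += waiting;
        total_turnaround += turnaround;
        printf("P%d: waiting = %2d, turnaround = %2d\n", i + 1, waiting, turnaround);
    }

    printf("average waiting time    = %.2f\n", (double)total_wait / n);
    printf("average turnaround time = %.2f\n", (double)total_turnaround / n);
    return 0;
}

With this order the average waiting time is 17 and the average turnaround time is 27; running the two short jobs first, as Shortest-Job-First would, drops the average waiting time to 3, which is why the choice of scheduling algorithm matters for these metrics.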
TYPES OF CPU SCHEDULING ALGORITHM
 There are mainly eight types of process scheduling algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time (SRTF)
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue and Feedback Scheduling
7. Highest Response Ratio Next
8. Lottery Scheduling
THANK YOU