
20CSM6701

FUNDAMENTALS OF OPERATING SYSTEMS


UNIT - 2

CPU Scheduling: Basic Concepts, Scheduling Criteria, Scheduling Algorithms:


FCFS, SJF, Priority Scheduling, Round-Robin Scheduling.
Process Synchronization: Background, The Critical-Section Problem, Peterson’s
Solution, Semaphores-Semaphore Usage.

CPU Scheduling:
 In a system with a single CPU core, only one process can run at a time; all others must
wait until the core is free and can be rescheduled.
 The objective of multi-programming is to have some process running at all times, to
maximize CPU utilization.
 The idea is relatively simple. A process is executed until it must wait, typically for the
completion of some I/O request. In a simple computer system, the CPU then just sits
idle.
 On a multicore system, this concept of keeping the CPU busy is extended to all
processing cores on the system.
CPU Scheduler
 Whenever the CPU becomes idle, the operating system must select one of the processes
in the ready queue to be executed.
 The selection process is carried out by the CPU scheduler, which selects a process
from the processes in memory that are ready to execute and allocates the CPU to that
process.
 The ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see
when we consider the various scheduling algorithms, a ready queue can be implemented
as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.

Preemptive and Nonpreemptive Scheduling


CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as
the result of an I/O request or an invocation of wait() for the termination of a child
process)
2. When a process switches from the running state to the ready state (for example, when
an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
4. When a process terminates
When scheduling takes place only under circumstances 1 and 4, we say that the scheduling
scheme is nonpreemptive or cooperative. Otherwise, it is preemptive. Under nonpreemptive
scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it
releases it either by terminating or by switching to the waiting state. Virtually all modern
operating systems, including Windows, macOS, Linux, and UNIX, use preemptive scheduling
algorithms.

Dispatcher
The dispatcher is the module that gives control of the CPU’s core to the process selected
by the CPU scheduler. This function involves the following:
• Switching context from one process to another
• Switching to user mode
• Jumping to the proper location in the user program to resume that program
The dispatcher should be as fast as possible, since it is invoked during every context
switch. The time it takes for the dispatcher to stop one process and start another running is
known as the dispatch latency.

Scheduling Criteria:

 Different CPU Scheduling algorithms have different properties.


 The criteria include the following.
 CPU utilization: Keep the CPU as busy as possible. Conceptually, CPU utilization ranges
from 0 to 100 percent; in a real system it should range from about 40% (for a lightly loaded
system) to 90% (for a heavily loaded system).
 Throughput: The number of processes that complete their execution per time unit
 Turnaround time: The interval from the time of submission of a process to the time of
its completion. It is the sum of the periods spent waiting to get into memory, waiting in
the ready queue, executing on the CPU, and doing I/O.
 Waiting time: The amount of time that a process has been waiting in the ready queue.
It is the sum of the periods spent waiting in the ready queue.
 Response time: The amount of time from when a request is submitted until the first
response is produced, not the time it takes to output that response (important in
time-sharing environments).
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time,
waiting time and response time.
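One relation worth making explicit (it follows directly from the definitions above): for a
process with arrival time a, CPU burst time b, and completion time c, turnaround time = c - a,
and waiting time = turnaround time - b when the process does no I/O. The examples in the next
section use exactly this arithmetic.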

Scheduling Algorithms:
 CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU.
 There are many different CPU scheduling algorithms; four of them are described here:

1. FCFS (First-Come, First-Served)
2. SJF (Shortest-Job-First)
3. Priority Scheduling
4. Round-Robin (RR)

1. FCFS (First-Come, First-Served):

 It is the simplest CPU scheduling algorithm: the process that requests the CPU first is
allocated the CPU first.
 The implementation of FCFS is easily managed with a FIFO queue.
 When a process enters the ready queue, its PCB is linked onto the tail of the queue.
 When the CPU is free, it is allocated to the process at the head of the queue.
 FCFS is nonpreemptive. Once the CPU has been allocated to a process, that process
keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
Example: Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in milliseconds:
Process Burst Time
P1 24
P2 3
P3 3

 Suppose that the processes arrive in the order: P1 , P2 , P3


 The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
 Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 This is much better than the previous case.
Convoy Effect:
 The convoy effect occurs when short processes wait behind one long process, as in the
first ordering above.
 All other processes wait for the one big process to get off the CPU. This effect results
in lower CPU utilization and device utilization.
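The waiting-time arithmetic in this example is mechanical enough to code directly. The
following is a minimal C sketch (illustrative only; the array contents mirror the example
above) that computes the FCFS average waiting time when all processes arrive at time 0:

#include <stdio.h>

/* FCFS with all arrivals at time 0: each process waits for the
   total burst time of every process ahead of it in the queue. */
int main(void) {
    int burst[] = {24, 3, 3};   /* P1, P2, P3 in arrival order */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;     /* waiting time of process i */
        wait += burst[i];       /* the next process waits this much longer */
    }
    printf("Average waiting time = %.2f ms\n", (double)total_wait / n);
    /* prints 17.00 for the order P1, P2, P3 and 3.00 for P2, P3, P1 */
    return 0;
}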

2. SJF (Shortest-Job-First):
 Associate with each process the length of its next CPU burst. Use these lengths to
schedule the process with the shortest time
 Two schemes:
 Nonpreemptive – once the CPU is given to a process, it cannot be preempted until it
completes its CPU burst
 Preemptive – if a new process arrives with a CPU burst length less than the remaining
time of the currently executing process, preempt. This scheme is known as Shortest-
Remaining-Time-First (SRTF)
 SJF is optimal – gives minimum average waiting time for a given set of processes
Example of Non-Preemptive SJF

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

 SJF (non-preemptive)

P1 P3 P2 P4

0 7 8 12 16

 Average waiting time = (0 + 6 + 3 + 7)/4 = 4


Example of Preemptive SJF:

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

 Average waiting time = (9 + 1 + 0 + 2)/4 = 3
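A non-preemptive SJF schedule can be computed with the same kind of sketch (illustrative
only; the arrays mirror the non-preemptive example above, and the code assumes some process
has always arrived by each scheduling point):

#include <stdio.h>

/* Non-preemptive SJF: at every scheduling point, run the arrived
   process with the shortest CPU burst to completion. */
int main(void) {
    double arrive[] = {0.0, 2.0, 4.0, 5.0};   /* P1..P4 */
    int burst[] = {7, 4, 1, 4};
    int n = 4, done[4] = {0};
    double t = 0, total_wait = 0;

    for (int scheduled = 0; scheduled < n; scheduled++) {
        int pick = -1;
        for (int i = 0; i < n; i++)           /* shortest job already arrived */
            if (!done[i] && arrive[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        total_wait += t - arrive[pick];       /* time this process waited */
        t += burst[pick];                     /* run it to completion */
        done[pick] = 1;
    }
    printf("Average waiting time = %.2f ms\n", total_wait / n);  /* 4.00 */
    return 0;
}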

3. Priority Scheduling:

 The SJF algorithm is a special case of the general priority scheduling algorithm.


 A priority is associated with each process, and the CPU is allocated to the process with
the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the
(predicted) next CPU burst.
 The larger the CPU burst, the lower the priority, and vice versa.
 Priorities are generally indicated by some fixed range of numbers, such as 0 to 7, or 0
to 4095. Here we assume low numbers represent high priority.
 As an example, consider the following set of processes, assumed to have arrived at
time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given in
milliseconds.

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Using priority scheduling, we would schedule these processes according to the following
Gantt chart:

P2 P5 P1 P3 P4
0 1 6 16 18 19
The average waiting time is 8.2 milliseconds.

 Priorities can be defined either internally or externally. Internally defined priorities are
influenced by time limits, memory requirements, the number of open files, and the ratio
of I/O burst to average CPU burst.
 External priorities are set by criteria outside the operating system, such as the importance
of the process and the type and amount of funds being paid for computer use.
 Priority scheduling can be either preemptive or nonpreemptive.
 When a process arrives at the ready queue, its priority is compared with the priority of the
currently running process.
 A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly
arrived process is higher than that of the currently running process.
 A nonpreemptive priority scheduling algorithm will simply put the new process at the head of
the ready queue.

Drawback:
Starvation
 Starvation or indefinite blocking is the major problem in priority scheduling algorithms.
 A process that is ready to run but waiting for the CPU can be considered blocked.
 A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
 High-priority processes can prevent low-priority processes from ever getting the CPU.
Aging
 Aging is a solution to the problem of indefinite blocking of low-priority processes.
 Aging is a technique of gradually increasing the priority of a process that waits in the
system for a long time.
 For example, if priorities range from 127 (low) to 0 (high), we could increase the priority
of a waiting process by 1 every 15 minutes.
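A minimal sketch of aging in C (illustrative; the PCB structure, field names, and the
periodic tick are assumptions based on the example above, not part of any particular OS):

struct pcb {
    int priority;      /* 0 = highest priority, 127 = lowest */
    /* ... other PCB fields ... */
};

/* Called periodically (e.g., every 15 minutes in the example above):
   raise the priority of every process still waiting in the ready queue. */
void age_ready_queue(struct pcb *ready[], int n) {
    for (int i = 0; i < n; i++)
        if (ready[i]->priority > 0)    /* don't pass the highest level */
            ready[i]->priority--;      /* smaller number = higher priority */
}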

4. Round Robin Scheduling:

 This algorithm is designed especially for time-sharing systems.
 It is similar to FCFS scheduling, but preemption is added to switch between processes.
 A small unit of time, called a time quantum or time slice, is defined. A time quantum is
generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue.
 The CPU scheduler goes around the ready queue, allocating the CPU to each process for
a time interval of up to 1 time quantum.
 The ready queue acts as a FIFO queue of processes; new processes are added to the tail of
the ready queue.
 The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt
after 1 time quantum, and dispatches the process.
 In RR scheduling there are two possibilities. The first is that the process has a CPU
burst of less than 1 time quantum.
 In this case the process itself releases the CPU voluntarily. The scheduler then
proceeds to the next process in the ready queue.
 The second possibility is that the CPU burst of the currently running process is longer
than 1 time quantum; the timer will go off and cause an interrupt to the operating system.
 A context switch will be executed, and the process will be put at the tail of the ready
queue.
 The CPU scheduler will then select the next process in the ready queue.
 The average waiting time under the RR policy is often long.
 Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds.
Process Burst Time
P1 24
P2 3
P3 3

 If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the first
time quantum, and the CPU is given to the next process in the queue, process P2.
 Since P2 does not need 4 milliseconds, it quits before its time quantum expires. The
CPU is given to process P3.
 Once each process has received 1 time quantum, the CPU is returned to process P1 for
an additional time quantum.
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
The average waiting time is 17/3=5.66 ms
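A round-robin simulation of this example in C (a sketch; the circular scan over the array
stands in for the real ready queue, which gives the same order here because all processes
arrive at time 0):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              /* P1, P2, P3 */
    int remaining[] = {24, 3, 3};
    int n = 3, quantum = 4, left = n;
    int t = 0, finish[3];

    while (left > 0) {
        for (int i = 0; i < n; i++) {      /* one pass around the queue */
            if (remaining[i] == 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            t += run;                      /* process i runs for 'run' ms */
            remaining[i] -= run;
            if (remaining[i] == 0) { finish[i] = t; left--; }
        }
    }
    int total_wait = 0;                    /* waiting = completion - burst */
    for (int i = 0; i < n; i++) total_wait += finish[i] - burst[i];
    printf("Average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}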
The Critical-Section Problem:
 A cooperating process is one that can affect or be affected by other processes
executing in the system.
 Cooperating processes can either directly share a logical address space (that is, both
code and data) or be allowed to share data only through shared memory or message
passing. Concurrent access to shared data may result in data inconsistency.
 Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a
segment of code, called a critical section, in which the process may be accessing —
and updating — data that is shared with at least one other process.
 The important feature of the system is that, when one process is executing in its critical
section, no other process is allowed to execute in its critical section. That is, no two
processes are executing in their critical sections at the same time.
 The critical-section problem is to design a protocol that the processes can use to
synchronize their activity so as to cooperatively share data.
 Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section. The critical section may be followed by
an exit section. The remaining code is the remainder section.
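The general structure of a typical process can therefore be sketched as follows (the standard
layout, with the four sections named above left abstract):

do {
    /* entry section: request permission to enter */

        /* critical section */

    /* exit section: announce that the process has left */

        /* remainder section */
} while (true);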

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section next,
and this selection cannot be postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request
to enter its critical section and before that request is granted.

Two general approaches are used to handle critical sections in operating systems:
preemptive kernels and nonpreemptive kernels. A preemptive kernel allows a process to be
preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a process
running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel
mode, blocks, or voluntarily yields control of the CPU.

Peterson’s Solution:
 Peterson’s solution is a software-based solution to the critical-section problem.
 Peterson’s solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.
 The processes are numbered P0 and P1. For convenience, when presenting Pi, we use
Pj to denote the other process; that is, j equals 1 − i.
 Peterson’s solution requires the two processes to share two data items:
int turn;
boolean flag[2];

The variable turn indicates whose turn it is to enter its critical section. That is, if
turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to
indicate if a process is ready to enter its critical section. For example, if flag[i] is true, Pi is
ready to enter its critical section.
To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to
the value j, thereby asserting that if the other process wishes to enter the critical section, it can
do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly
the same time. Only one of these assignments will last; the other will occur but will be
overwritten immediately. The eventual value of turn determines which of the two processes is
allowed to enter its critical section first.
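Putting this together, the structure of process Pi in Peterson's solution is as follows
(shown as a C-style sketch using the shared variables declared above; the critical and
remainder sections are left abstract):

while (true) {
    flag[i] = true;        /* Pi is ready to enter its critical section */
    turn = j;              /* defer to Pj if it also wants to enter */
    while (flag[j] && turn == j)
        ;                  /* busy-wait while Pj has the turn */

        /* critical section */

    flag[i] = false;       /* Pi has left its critical section */

        /* remainder section */
}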

Semaphores:
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). Semaphores were introduced by
the Dutch computer scientist Edsger Dijkstra; as such, the wait() operation was originally
termed P (from the Dutch proberen, “to test”) and signal() was originally called V (from
verhogen, “to increment”). The classical busy-waiting definitions of wait() and signal() are
as follows:
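wait(S) {
    while (S <= 0)
        ;          /* busy wait */
    S--;
}

signal(S) {
    S++;
}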

All modifications to the integer value of the semaphore in the wait() and signal() operations
must be executed atomically. That is, when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value. In addition, in the case of
wait(S), the testing of the integer value of S (S ≤ 0), as well as its possible modification (S--),
must be executed without interruption.

Semaphore Usage
Operating systems often distinguish between counting and binary semaphores. The
value of a counting semaphore can range over an unrestricted domain. The value of a binary
semaphore can range only between 0 and 1. Thus, binary semaphores behave similarly to
mutex locks. In fact, on systems that do not provide mutex locks, binary semaphores can be
used instead for providing mutual exclusion.
Counting semaphores can be used to control access to a given resource consisting of a
finite number of instances. The semaphore is initialized to the number of resources available.
Each process that wishes to use a resource performs a wait() operation on the semaphore
(thereby decrementing the count). When a process releases a resource, it performs a signal()
operation (incrementing the count). When the count for the semaphore goes to 0, all resources
are being used. After that, processes that wish to use a resource will block until the count
becomes greater than 0.
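For example, access to a pool of N identical resource instances can be guarded as follows
(a sketch in the pseudocode style above; the names are illustrative):

semaphore S = N;   /* one permit per resource instance */

wait(S);           /* acquire an instance; blocks when all N are in use */
    /* ... use the resource ... */
signal(S);         /* release the instance */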
We can also use semaphores to solve various synchronization problems. For example,
consider two concurrently running processes: P1 with a statement S1 and P2 with a statement
S2. Suppose we require that S2 be executed only after S1 has completed. We can implement
this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0. In
process P1, we insert the statements
S1;
signal(synch);
In process P2, we insert the statements
wait(synch);
S2;
Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked
signal(synch).
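As a concrete illustration, the same ordering constraint can be written with POSIX semaphores
and threads (a sketch; the thread bodies and printed statements are placeholders standing in
for S1 and S2):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t synch;                      /* shared semaphore, initialized to 0 */

void *p1(void *arg) {
    printf("S1\n");               /* statement S1 */
    sem_post(&synch);             /* signal(synch) */
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&synch);             /* wait(synch): blocks until P1 signals */
    printf("S2\n");               /* statement S2 runs only after S1 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);       /* initial value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}

Regardless of which thread is scheduled first, S2 cannot run until S1 has completed and
posted the semaphore.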

Semaphore Implementation
 The definitions of the wait() and signal() semaphore operations given above lead to the
busy-waiting problem.
 To overcome this problem, we can modify the definition of the wait() and signal()
operations as follows: when a process executes the wait() operation and finds that the
semaphore value is not positive, it must wait.
 However, rather than engaging in busy waiting, the process can suspend itself. The
suspend operation places a process into a waiting queue associated with the semaphore,
and the state of the process is switched to the waiting state.
 Then control is transferred to the CPU scheduler, which selects another process to
execute. A process that is suspended, waiting on a semaphore S, should be restarted
when some other process executes a signal() operation.
 The process is restarted by a wakeup() operation, which changes the process from the
waiting state to the ready state. The process is then placed in the ready queue.
 To implement semaphores under this definition, we define a semaphore as follows:
Each semaphore has an integer value and a list of processes. When a process must wait
on a semaphore, it is added to the list of processes. A signal() operation removes one process
from the list of waiting processes and awakens that process.
Now, the semaphore and its wait() and signal() operations can be defined as shown below (a
sketch following this definition; the list-handling steps are left as comments, and block()
and wakeup() stand for the basic suspend and resume operations described above):
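typedef struct {
    int value;                 /* may go negative: -value = # of waiters */
    struct process *list;      /* queue of processes waiting on S */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();               /* suspend the invoking process */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);             /* move P from the waiting to the ready state */
    }
}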
