Unit - 2
CPU Scheduling:
In a system with a single CPU core, only one process can run at a time. Others must
wait until the CPU’s core is free and can be rescheduled.
The objective of multi-programming is to have some process running at all times, to
maximize CPU utilization.
The idea is relatively simple. A process is executed until it must wait, typically for the
completion of some I/O request. In a simple computer system, the CPU then just sits
idle.
On a multicore system, this concept of keeping the CPU busy is extended to all
processing cores on the system.
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes
in the ready queue to be executed.
The selection process is carried out by the CPU scheduler, which selects a process
from the processes in memory that are ready to execute and allocates the CPU to that
process.
The ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see
when we consider the various scheduling algorithms, a ready queue can be implemented
as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
Dispatcher
The dispatcher is the module that gives control of the CPU’s core to the process selected
by the CPU scheduler. This function involves the following:
• Switching context from one process to another
• Switching to user mode
• Jumping to the proper location in the user program to resume that program
The dispatcher should be as fast as possible, since it is invoked during every context
switch. The time it takes for the dispatcher to stop one process and start another running is
known as the dispatch latency.
Scheduling Criteria:
Many criteria have been suggested for comparing CPU-scheduling algorithms. The criteria include the following:
• CPU utilization – keep the CPU as busy as possible.
• Throughput – the number of processes that are completed per time unit.
• Turnaround time – the interval from the time of submission of a process to the time of completion.
• Waiting time – the sum of the periods a process spends waiting in the ready queue.
• Response time – the time from the submission of a request until the first response is produced.
Scheduling Algorithms:
CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU scheduling algorithms.
1. FCFS (First-Come, First-Served):
FCFS is the simplest CPU scheduling algorithm: the process that requests the CPU first is
allocated the CPU first.
The implementation of FCFS is easily managed with a FIFO queue.
When a process enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue.
FCFS is non-preemptive. Once the CPU has been allocated to a process, that process
keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
Example: Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in milliseconds:

Process    Burst Time
P1         24
P2         3
P3         3

Suppose the processes arrive in the order P2, P3, P1. The FCFS Gantt chart is:

| P2 | P3 | P1 |
0    3    6    30

The waiting times are 6 ms for P1, 0 ms for P2, and 3 ms for P3, giving an average waiting
time of (6 + 0 + 3)/3 = 3 ms.
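As a quick illustration (not part of the original notes), the following C sketch computes
FCFS waiting times for the three processes above, assuming they arrive at time 0 in the
order P2, P3, P1:

#include <stdio.h>

/* FCFS waiting-time calculation (illustrative sketch).
 * Processes are listed in arrival order; all arrive at time 0. */
int main(void) {
    const char *name[] = {"P2", "P3", "P1"};
    int burst[]        = {3, 3, 24};
    int n = 3, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        /* Under FCFS, a process waits until all earlier arrivals finish. */
        printf("%s waits %d ms\n", name[i], clock);
        total_wait += clock;
        clock += burst[i];
    }
    printf("Average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}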
2. SJF (Shortest-Job-First):
Associate with each process the length of its next CPU burst. Use these lengths to
schedule the process with the shortest next CPU burst.
Two schemes:
• Non-preemptive – once the CPU is given to the process, it cannot be preempted until it
completes its CPU burst.
• Preemptive – if a new process arrives with a CPU burst length less than the remaining
time of the currently executing process, preempt. This scheme is known as
Shortest-Remaining-Time-First (SRTF).
SJF is optimal – it gives the minimum average waiting time for a given set of processes.
Example of Non-Preemptive SJF

Process    Arrival Time    Burst Time
P1         0.0             7
P2         2.0             4
P3         4.0             1
P4         5.0             4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = (0 + 6 + 3 + 7)/4 = 4 ms.
Example of Preemptive SJF

Process    Arrival Time    Burst Time
P1         0.0             7
P2         2.0             4
P3         4.0             1
P4         5.0             4

SJF (preemptive, i.e., SRTF) Gantt chart:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3 ms.
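To make the selection rule concrete, here is a small C sketch (an illustration, not from the
notes) that simulates non-preemptive SJF for the four processes above:

#include <stdio.h>

/* Non-preemptive SJF simulation (illustrative sketch).
 * At each decision point, pick the ready process with the shortest
 * CPU burst; once started, a process runs to completion. */
int main(void) {
    int arrival[] = {0, 2, 4, 5};    /* P1..P4 arrival times */
    int burst[]   = {7, 4, 1, 4};    /* P1..P4 burst lengths */
    int done[4]   = {0};
    int clock = 0, finished = 0;
    int total_wait = 0;

    while (finished < 4) {
        int pick = -1;
        for (int i = 0; i < 4; i++)          /* find shortest ready job */
            if (!done[i] && arrival[i] <= clock &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { clock++; continue; } /* nothing ready: CPU idle */
        total_wait += clock - arrival[pick]; /* time spent in ready queue */
        printf("P%d runs from %d to %d\n", pick + 1, clock, clock + burst[pick]);
        clock += burst[pick];
        done[pick] = 1;
        finished++;
    }
    printf("Average waiting time = %.2f ms\n", total_wait / 4.0);
    return 0;
}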
3. Priority Scheduling:
A priority is associated with each process, and the CPU is allocated to the process with the
highest priority. Equal-priority processes are scheduled in FCFS order.
Example: Consider the following set of processes, assumed to have arrived at time 0 with the
length of the CPU burst given in milliseconds (here a smaller number means a higher priority):

Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             4
P4         1             5
P5         5             2

Using priority scheduling, we would schedule these processes according to the following
Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

The average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds.
Priorities can be defined either internally or externally. Internally defined priorities use
measurable quantities such as time limits, memory requirements, the number of open files,
and the ratio of average I/O burst to average CPU burst.
External priorities are set by criteria outside the operating system, such as the importance
of the process and the type and amount of funds being paid for computer use.
Priority scheduling can be either preemptive or non-preemptive.
When a process arrives at the ready queue, its priority is compared with the priority of the
currently running process.
A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process.
A non-preemptive priority scheduling algorithm will simply put the new process at the head
of the ready queue.
Drawback:
Starvation
Starvation, or indefinite blocking, is the major problem with priority scheduling algorithms.
A process that is ready to run but waiting for the CPU can be considered blocked.
A priority scheduling algorithm can leave some low-priority processes waiting indefinitely:
a steady stream of higher-priority processes can prevent a low-priority process from ever
getting the CPU.
Aging
Aging is a solution to the problem of indefinite blocking of low-priority processes.
Aging is a technique of gradually increasing the priority of a process that waits in the
system for a long time.
For example: if priorities range from 127 (low) to 0 (high), we could increase the priority
of a waiting process by 1 every 15 minutes. Even a process with an initial priority of 127
would reach the highest priority in just under 32 hours (127 × 15 minutes ≈ 31.75 hours).
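A minimal sketch of the aging idea in C (illustrative only; the structure, field names, and
tick interval are assumptions, not from the notes):

#define AGING_INTERVAL_TICKS 15   /* e.g., one tick per minute of waiting */

/* Hypothetical process record; smaller numbers mean higher priority (0..127). */
struct proc {
    int priority;        /* current priority, 0 (high) .. 127 (low) */
    int waiting_ticks;   /* time spent waiting in the ready queue   */
};

/* Called periodically: boost every waiting process one step toward priority 0. */
void age_ready_queue(struct proc *ready[], int n) {
    for (int i = 0; i < n; i++) {
        ready[i]->waiting_ticks++;
        if (ready[i]->waiting_ticks % AGING_INTERVAL_TICKS == 0 &&
            ready[i]->priority > 0)
            ready[i]->priority--;
    }
}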
The Critical-Section Problem:
Each process has a segment of code, called a critical section, in which it may be accessing
and updating data that is shared with other processes. A solution to the critical-section
problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section next,
and this selection cannot be postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request
to enter its critical section and before that request is granted.
Two general approaches are used to handle critical sections in operating systems:
preemptive kernels and nonpreemptive kernels. A preemptive kernel allows a process to be
preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a process
running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel
mode, blocks, or voluntarily yields control of the CPU.
Peterson’s Solution:
Peterson’s solution is a software-based solution to the critical-section problem.
Peterson’s solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.
The processes are numbered P0 and P1. For convenience, when presenting Pi, we use
Pj to denote the other process; that is, j equals 1 − i.
Peterson’s solution requires the two processes to share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter its critical section. That is, if
turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to
indicate if a process is ready to enter its critical section. For example, if flag[i] is true, Pi is
ready to enter its critical section.
To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to
the value j, thereby asserting that if the other process wishes to enter the critical section, it can
do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly
the same time. Only one of these assignments will last; the other will occur but will be
overwritten immediately. The eventual value of turn determines which of the two processes is
allowed to enter its critical section first.
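Putting this together, the structure of process Pi in Peterson's solution is as follows
(a sketch; the busy-wait loop forms the entry section):

int turn;
boolean flag[2];

/* Structure of process Pi; Pj denotes the other process (j = 1 - i). */
while (true) {
    flag[i] = true;            /* Pi announces it is ready to enter            */
    turn = j;                  /* and yields the turn to the other process     */
    while (flag[j] && turn == j)
        ;                      /* busy-wait while Pj is ready and has the turn */

    /* critical section */

    flag[i] = false;           /* exit section: Pi leaves its critical section */

    /* remainder section */
}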
Semaphores:
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). Semaphores were introduced by
the Dutch computer scientist Edsger Dijkstra, and as such, the wait() operation was originally
termed P (from the Dutch proberen, “to test”); signal() was originally called V (from verhogen,
“to increment”). The definitions of wait() and signal() are as follows:
All modifications to the integer value of the semaphore in the wait() and signal() operations
must be executed atomically. That is, when one process modifies the semaphore value, no other
process can simultaneously modify that same semaphore value. In addition, in the case of
wait(S), the testing of the integer value of S (S ≤ 0), as well as its possible modification (S--),
must be executed without interruption.
Semaphore Usage
Operating systems often distinguish between counting and binary semaphores. The
value of a counting semaphore can range over an unrestricted domain. The value of a binary
semaphore can range only between 0 and 1. Thus, binary semaphores behave similarly to
mutex locks. In fact, on systems that do not provide mutex locks, binary semaphores can be
used instead for providing mutual exclusion.
Counting semaphores can be used to control access to a given resource consisting of a
finite number of instances. The semaphore is initialized to the number of resources available.
Each process that wishes to use a resource performs a wait() operation on the semaphore
(thereby decrementing the count). When a process releases a resource, it performs a signal()
operation (incrementing the count). When the count for the semaphore goes to 0, all resources
are being used. After that, processes that wish to use a resource will block until the count
becomes greater than 0.
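As an illustration, a counting semaphore guarding a pool of three resources might be used as
follows with POSIX semaphores (a sketch; the resource pool itself is left abstract):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NUM_RESOURCES 3

sem_t pool;   /* counting semaphore, initialized to the number of resources */

void *worker(void *arg) {
    sem_wait(&pool);    /* acquire a resource (count--); blocks when count is 0 */
    printf("thread %ld is using a resource\n", (long)arg);
    /* ... use the resource ... */
    sem_post(&pool);    /* release the resource (count++) */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, NUM_RESOURCES);    /* 0 = shared among this process's threads */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}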
We can also use semaphores to solve various synchronization problems. For example,
consider two concurrently running processes: P1 with a statement S1 and P2 with a statement
S2. Suppose we require that S2 be executed only after S1 has completed. We can implement
this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0. In
process P1, we insert the statements
S1;
signal(synch);
In process P2, we insert the statements
wait(synch);
S2;
Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked
signal(synch).
Semaphore Implementation
The definitions of the wait() and signal() semaphore operations given above lead to the
busy-waiting problem: while one process is in its critical section, any other process that
tries to enter must loop continuously in the wait() code, wasting CPU cycles.
To overcome this problem, we can modify the definition of the wait() and signal()
operations as follows: when a process executes the wait() operation and finds that the
semaphore value is not positive, it must wait.
However, rather than engaging in busy waiting, the process can suspend itself. The
suspend operation places a process into a waiting queue associated with the semaphore,
and the state of the process is switched to the waiting state.
Then control is transferred to the CPU scheduler, which selects another process to
execute. A process that is suspended, waiting on a semaphore S, should be restarted
when some other process executes a signal() operation.
The process is restarted by a wakeup() operation, which changes the process from the
waiting state to the ready state. The process is then placed in the ready queue.
To implement semaphores under this definition, we define a semaphore as follows:
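typedef struct {
    int value;               /* the semaphore's integer value               */
    struct process *list;    /* list of processes waiting on this semaphore */
} semaphore;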
Each semaphore has an integer value and a list of processes. When a process must wait
on a semaphore, it is added to the list of processes. A signal() operation removes one process
from the list of waiting processes and awakens that process.
Now, the wait() and signal() semaphore operations can be defined as follows:
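wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        sleep();            /* suspend the invoking process */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);          /* move process P from the waiting to the ready state */
    }
}

Here sleep() suspends the invoking process and wakeup(P) resumes it, as described above;
both are provided by the operating system as basic system calls.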