Group 2 OS Work

The document discusses process scheduling in operating systems, detailing the definition of processes, their management, and the various states they can be in. It outlines different scheduling algorithms such as First-Come, First-Served, Shortest-Job-First, and Round-Robin, along with their advantages and disadvantages. Additionally, it explains the role of schedulers and the importance of optimizing CPU utilization and response times in multiprogramming environments.

Uploaded by

markkifunye159

OPERATING SYSTEM RESEARCH WORK

GROUP II
LECTURER: MADAM BARBARA ASINGIRWE KABWIGA

GROUP MEMBERS

KIFUNYE MARK BU/UP/2023/0955


MANGARA EMMANUEL FRED BU/UP/2023/0956
NAMAKALI JACOB BU/UP/2023/3641
KINYERA GENDA JOEL BU/UP/2023/3638
RUBANGAKENE DENIS BU/UP/2023/0973
PROCESS SCHEDULING

 A program in execution is called a process


 It is an instance of a program that actually runs
 Two essential process elements are program code and a set of data associated with that code.
 Process memory is divided into four sections:
 The text (program code) section stores the compiled program code, read in from non-volatile storage when the program is launched.
 The data section stores global and static variables, allocated and initialized before the main program executes.
 The heap section manages dynamic memory allocation inside our program. In other words, it is the portion of memory where dynamically allocated memory resides, i.e., memory allocated via new or malloc and deallocated via delete or free.
 The stack section stores local variables defined inside our program or its functions. Stack space is created for local variables when they are declared, and the space is freed when they go out of scope.
PROCESS…….

 Process Management: The operating system is responsible for the following activities in
connection with process management: the creation and deletion of both user and
system processes; the scheduling of processes; and the provision of mechanisms for
synchronization, communication, and deadlock handling for processes.
 A process, in turn, includes:
 the current value of the Program Counter (PC);
 the contents of the processor's registers;
 the values of its variables;
 the process's stack, which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables;
 a data section that contains global variables.
Process Control Block

 Each process is represented in the operating system by a process control block (PCB), also called a task control block.
States of a Process in OS

 As a process executes, it changes state. The state of a process is defined in


part by the current activity of that process. Each process may be in one of
the following states:
 New state: the process is being created.
 Running state: a process is running if it holds the CPU, that is, the process is actually using the CPU at that instant.
 Blocked (or waiting) state: a process is blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until that external event happens.
 Ready state: a process is ready if it is waiting to be assigned to a processor.
 Terminated state: the process has finished execution.
Process States
PROCESS SCHEDULING

CPU scheduling is the basis of multiprogrammed operating systems.


By switching the CPU among processes, the operating system can make the
computer more productive.
In a single-processor system, only one process can run at a time; the others
must wait until the CPU is free and can be rescheduled. The objective of
multiprogramming is to have some process running at all times, to maximize
CPU utilization. The idea is relatively simple: a process executes until
it must wait, typically for the completion of some I/O request. In a simple
computer system, the CPU then just sits idle. All this waiting time is wasted;
no useful work is accomplished.
CTD…..

Scheduling objectives:
Maximize throughput.
Maximize the number of users receiving acceptable response times.
Be predictable.
Balance resource use.
Avoid indefinite postponement.
Enforce priorities.
Give preference to processes holding key resources.
CTD…..

With multiprogramming, we try to use this waiting time productively: several
processes are kept in memory at one time. The CPU is always busy under
multiprogramming because it switches from one job to another, whereas in a
simple computer the CPU sits idle until an I/O request is granted.
Scheduling is an important OS function: all resources (CPU, memory, devices, ...)
are scheduled before use.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
SCHEDULING QUEUES:

Just as people occupy rooms, processes reside in queues. There are three types:
1. Job queue: when processes enter the system, they are put into the job queue, which
consists of all processes in the system. Processes in the job queue reside on mass storage and
await the allocation of main memory.
2. Ready queue: a process that is present in main memory and ready to be allocated the
CPU for execution is kept in the ready queue.
3. Device queue: a process that is in the waiting state, waiting for an I/O event to
complete, is said to be in a device queue. In other words, the processes waiting for a
particular I/O device form that device's queue.
Schedulers :

 There are 3 schedulers


 1. Long term scheduler.
 2. Medium term scheduler
 3. Short term scheduler.
 Scheduler duties:
  Maintain the queues.
  Select a process from the queues and assign it to the CPU.
 Types of schedulers
 1. Long term scheduler:
 Selects jobs from the job pool and loads them into main memory (the ready
queue). The long term scheduler is also called the job scheduler.
CTD….

 2. Short term scheduler:
 Selects a process from the ready queue and allocates the CPU to it.
 If a process requires an I/O device that is not currently available, the
process enters a device queue.
 The short term scheduler maintains the ready queue and device queues. It is
also called the CPU scheduler.
 3. Medium term scheduler: if a process requests an I/O device in the
middle of its execution, the process is removed from main memory
and placed in the waiting queue. When the I/O operation completes,
the job is moved from the waiting queue back to the ready queue.
These two operations are performed by the medium term scheduler.
Scheduling Criteria

 Different CPU scheduling algorithms have different properties and may favor one class of
processes over another. In choosing which algorithm to use in a particular situation, we must
consider the properties of the various algorithms.
 Many criteria have been suggested for comparing CPU scheduling algorithms.
 Criteria that are used include the following:
 CPU utilization: CPU must be kept as busy as possible. CPU utilization can range from 0 to 100 percent. In
a real system, it should range from 40 to 90 percent.
 Throughput: The number of processes that are completed per time unit.
 Turn-Around Time: It is the interval from the time of submission of a process to the time of completion.
Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU and doing I/O.
 Waiting time: It is the amount of time that a process spends waiting in the ready queue.
 Response time: It is the time from the submission of a request until the first response is produced.
Interactive systems use response time as their measure.
Scheduling Algorithms

Preemptive Scheduling and non Preemptive Scheduling


First-Come, First-Served Scheduling
Shortest-Job-First Scheduling
Priority Scheduling
Round-Robin Scheduling
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling

 A Gantt chart is a bar chart used to illustrate a particular schedule,
including the start and finish times of each of the participating processes.
Preemptive Scheduling

 CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example,
an I/O request, or an invocation of wait for the termination of a child process).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
completion of I/O).
4. When a process terminates.

 When scheduling takes place only under circumstances 1 and 4, the scheme is
non-preemptive; when it can also take place under circumstances 2 and 3, the
scheme is preemptive. Mac OS X, Windows 95, and all subsequent versions of
Windows use preemptive scheduling.
First-Come, First-Served Scheduling (FCFS)

 In FCFS, the process that requests the CPU first is allocated the CPU first.
 The FCFS scheduling algorithm is non-preemptive: once the CPU has been allocated to a process,
that process keeps the CPU until it releases it.
 FCFS can be implemented by using a FIFO queue. When a process enters the ready queue, its
PCB is linked onto the tail of the queue; when the CPU is free, it is allocated to the
process at the head of the queue, and the running process is then removed from the queue.
 Example: consider the following set of processes that arrive at time 0 in the order
P1, P2, P3, with the length of the CPU burst given in milliseconds:

Process  Burst Time
P1       24
P2       3
P3       3

Gantt chart for FCFS: | P1 (0-24) | P2 (24-27) | P3 (27-30) |
FCFS …ctd
 The average waiting time under the FCFS policy is often quite long.
 The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3.
 Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
 Convoy effect in FCFS
 The convoy effect means that while a big process is executing on the CPU, all the smaller processes must wait until the big
process completes. This hurts the performance of the system.
 Disadvantage of FCFS
 Because the FCFS scheduling algorithm is non-preemptive, it allows one process to keep the CPU for a long time. Hence it is
not suitable for time-sharing systems.

 Try this question: suppose we use the example above but with the processes arriving in the order P2, P3, P1. What is the
average waiting time?
 With the order P2, P3, P1 the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds, whereas with the order
P1, P2, P3 it is 17 milliseconds.
Shortest-Job-First Scheduling (SJF)

 The SJF algorithm is defined as follows: "when the CPU is available, it is assigned to the
process that has the smallest next CPU burst." If the next CPU bursts of two processes are
the same, FCFS scheduling is used to break the tie.
 SJF is also called the shortest-next-CPU-burst algorithm, because scheduling depends on
the length of the next CPU burst of a process, rather than its total length.
 Example: consider a set of processes with their CPU bursts in milliseconds (the table of
processes and the Gantt chart of the SJF schedule were given as figures).
SJF …ctd

Waiting time for the processes:
 From the table above, the average waiting time using the SJF algorithm is 7 ms.
 SJF gives the minimum average waiting time for a given set of processes; SJF is optimal.
 The average waiting time decreases because moving a short process in front of a long
one decreases the waiting time of the short process more than it increases the waiting
time of the long process.
Difficulty with SJF
 The difficulty with the SJF algorithm is knowing the length of the next CPU request.
With short-term scheduling, there is no way to know the length of the next CPU burst,
so pure SJF cannot be implemented in practice.

 There is a preemptive version of SJF called Shortest-Remaining-Time-First (SRTF)
scheduling.
SJF …Solution for the difficulty

 One approach to this problem is to try to approximate SJF scheduling.
 We may not know the length of the next CPU burst, but we may be able to predict its value: we expect the next CPU
burst to be similar in length to the previous ones.
 By computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted
CPU burst.
 The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
 The following formula defines the exponential average:

   τ(n+1) = α·t(n) + (1 − α)·τ(n)

 where:
 t(n) is the length of the nth CPU burst (i.e., it contains the most recent information);
 τ(n) stores the past history (the previous prediction);
 τ(n+1) is our predicted value for the next CPU burst;
 α controls the relative weight of recent and past history in our prediction (0 ≤ α ≤ 1).
 If α = 0, then τ(n+1) = τ(n): recent history has no effect.
 If α = 1, then τ(n+1) = t(n): only the most recent CPU burst matters.
 If α = 1/2, recent history and past history are equally weighted.
Round-Robin Scheduling (RR)

 Round-Robin (RR) scheduling algorithm is designed especially for Timesharing systems.


 RR is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes.
 A small unit of time called a time quantum or time slice is defined. A time quantum is generally from 10 to 100
milliseconds in length.
 The ready queue is treated as a circular queue. New processes are added to the tail of the ready queue.
 The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to
1 time quantum and dispatching the process.
 If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back at the tail of the ready queue.
 In RR scheduling one of two things will then happen.
1. The process may have a CPU burst of less than 1 time quantum. The process itself will release the CPU
voluntarily. The scheduler will then proceed to the next process in the ready queue.
2. If the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will
cause an interrupt to the operating system. A context switch will be executed and the process will be put at the
tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.
 Consider the following set of processes that arrive at time 0 in the order P1,
P2, P3 (the same bursts as in the FCFS example: 24, 3, and 3 ms), with time quantum = 4.
Round-Robin …ctd

 The average waiting time under the RR policy is often long.
 P1 waits 6 ms (10 − 4), P2 waits 4 ms, and P3 waits 7 ms. Thus, the average
waiting time is 17/3 = 5.66 ms.
 The performance of the RR algorithm depends on the size of the time quantum.
 If the time quantum is extremely large, the RR policy is the same as the FCFS policy.
 If the time quantum is extremely small (e.g., 1 millisecond), the RR approach can result
in a large number of context switches.
 The context-switch time should be a small fraction of the time quantum for RR
to perform well.
 Note: a rule of thumb is that 80 percent of the CPU bursts should be shorter
than the time quantum.
Round-Robin Scheduling

 Gantt chart of Round Robin Scheduling

If we use a time quantum of 4 milliseconds, then


process P1 gets the first 4 milliseconds.
Since it requires another 20 milliseconds, it is
preempted after the first time quantum and the
CPU is given to the next process in the queue,
process P2.
The CPU burst of process P2 is 3 ms, so it does not need a full 4
milliseconds; it quits before its time quantum expires. The CPU is
then given to the next process, P3.
Once each process has received 1 time quantum,
the CPU is returned to process P1 for an additional
time quantum.
Process Synchronization:

 Process synchronization refers to the idea that multiple processes are to join up or
handshake at a certain point, in order to reach an agreement or commit to a certain
sequence of action. Coordination of simultaneous processes to complete a task is known as
process synchronization.
 The critical section problem
 Consider a system consisting of n processes, each having a segment of code.
This segment of code is said to be its critical section.
 E.g., a railway reservation system: two persons at different stations want to reserve
tickets; the train number and destination are common, and the two persons try to get
the reservation at the same time. Unfortunately, only one berth is available, and both
are trying for that berth.
 This is the critical section problem. The solution: when one process is executing in its
critical section, no other process is allowed to execute in its critical section.
Ctd……

 The critical section problem is to design a protocol that the processes


can use to cooperate. Each process must request permission to enter
its critical section.
 The section of code implementing this request is the entry section.
The critical section may be followed by an exit section. The remaining
code is the remainder section.
Ctd…..

 A solution to the critical section problem must satisfy the following 3


requirements:
 1. Mutual exclusion:
 Only one process can execute in its critical section at any time.
 2. Progress:
 When no process is executing in a critical section for some data, one of the processes wishing to
enter its critical section for that data will be granted entry.
 3. Bounded waiting:
 No process should wait for a resource for an infinite amount of time.
 Critical section:
 The portion of any program that accesses a shared resource is called the critical section (or
critical region).
Race conditions

 When several processes access and manipulate the same data
concurrently and the outcome of the execution depends on the
particular order in which the accesses take place, this is called a race
condition.
Synchronization hardware

 In a uniprocessor multiprogrammed system, mutual exclusion can be obtained by disabling


the interrupts before the process enters its critical section and enabling them after it has
exited the critical section.
 Disable interrupts
 Critical section
 Enable interrupts
 Once a process is in its critical section it cannot be interrupted. This solution cannot be used in a
multiprocessor environment, since processes run independently on different processors.
 In multiprocessor systems, a TestAndSet instruction is provided; it completes execution atomically,
without interruption. Each process entering its critical section must set a lock, to prevent other
processes from entering their critical sections simultaneously, and must release the lock when
exiting its critical section.
CTD….

 do {
 acquire lock
 critical section
 release lock
 remainder section
 } while (TRUE);
CTD…

 A process wants to enter its critical section: if the value of lock is false, TestAndSet
returns false and the value of lock becomes true. Thus, for other processes wanting
to enter their critical sections, TestAndSet returns true, and those processes busy-wait
until the owning process exits its critical section and sets the value of lock back to false.
 Definition:

 boolean TestAndSet(boolean &lock) {
  boolean temp = lock;
  lock = true;
  return temp;
 }
CTD…

 Algorithm using TestAndSet:

 do {
  while (TestAndSet(&lock))
   ; // do nothing (busy wait)
  // critical section
  lock = false;
  // remainder section
 } while (TRUE);
 The Swap instruction can also be used for mutual exclusion.
 Definition:

 void Swap(boolean &a, boolean &b) {
  boolean temp = a;
  a = b;
  b = temp;
 }

 Algorithm:

 do {
  key = true;
  while (key == true)
   Swap(lock, key);
  // critical section
  lock = false;
  // remainder section
 } while (1);

 lock is a global variable initialized to false; each process has a local variable key.
When a process wants to enter its critical section, lock is false and key is true:
 lock = false
 key = true
 After the Swap instruction:
 lock = true
 key = false
 Since key is now false, the process exits the while loop and enters its critical section.
 While that process is in its critical section (lock = true), other processes wanting to enter
will have
 lock = true, key = true
 and hence will busy-wait in the while loop until the process exits its critical section and sets
the value of lock back to false.
Peterson’s solution:

 Peterson solution is one of the solutions to critical section problem involving two processes. This solution states that when
one process is executing its critical section then the other process executes the rest of the code and vice versa.
 Peterson solution requires two shared data items:
 1) turn: indicates whose turn it is to enter into the critical section. If turn == i ,then process i is allowed into their critical
section.
 2) flag: indicates when a process wants to enter into critical section. When process i wants to enter their critical section,
it sets flag[i] to true.

 do {
  flag[i] = TRUE;
  turn = j;
  while (flag[j] && turn == j)
   ; // busy wait
  // critical section
  flag[i] = FALSE;
  // remainder section
 } while (TRUE);
Semaphores

 A semaphore is an integer variable that is accessed only through two standard operations:
 1) wait: the wait operation decrements the count by 1. If the resulting value is negative,
the process executing the wait operation is blocked.
 2) signal: the signal operation increments the count by 1. If the value is not positive,
one of the processes blocked in a wait operation is unblocked.
 Busy-waiting definitions:

 wait(S) {
  while (S <= 0)
   ; // no-op
  S--;
 }

 signal(S) {
  S++;
 }
Ctd…

 do {
 wait (mutex);
 // Critical Section
 signal (mutex);
 // remainder section
 } while (TRUE);
 The first process that executes the wait operation is immediately granted entry,
decrementing sem.count to 0.
 If some other process wants the critical section and executes wait(), it is
blocked, since the value becomes −1. When the first process exits the critical
section it executes signal(); sem.count is incremented by 1, and a blocked process
is removed from the semaphore queue and added to the ready queue.
Ctd…

 Problems:
 1) Deadlock
 Deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one
of the other blocked processes.
 2) Starvation
 One or more processes get blocked forever and never get a chance to take their turn in the critical section.
 3) Priority inversion
 If a low priority process is running, medium priority processes are waiting for the low priority process, and high
priority processes are waiting for the medium priority processes, this is called priority inversion.
 The two most common kinds of semaphores are counting semaphores and binary semaphores. Counting
semaphores represent multiple resources, while binary semaphores, as the name implies, represent two
possible states (generally 0 or 1; locked or unlocked).
Ctd…

 Classic problems of synchronization


 1) Bounded-buffer problem

 Two processes share a common, fixed-size buffer.
 The producer puts information into the buffer; the consumer takes it out.
 The problem arises when the producer wants to put a new item in the
buffer but it is already full. The solution is for the producer to wait until
the consumer has consumed at least one item. Similarly, if the consumer
wants to remove an item from the buffer and sees that the buffer is empty, it
goes to sleep until the producer puts something in the buffer and wakes it
up.
Ctd….

 The structure of the producer process:

 do {
  // produce an item in nextp
  wait(empty);
  wait(mutex);
  // add the item to the buffer
  signal(mutex);
  signal(full);
 } while (TRUE);

 The structure of the consumer process:

 do {
  wait(full);
  wait(mutex);
  // remove an item from buffer to nextc
  signal(mutex);
  signal(empty);
  // consume the item in nextc
 } while (TRUE);
Ctd….

 2) The readers-writers problem

 A database is to be shared among several concurrent processes. Some
processes may want only to read the database; some may want to
update it. If two readers access the shared data simultaneously, there
is no problem; but if a writer and some other process access the
database simultaneously, problems arise. Writers must have exclusive
access to the shared database while writing to it. This problem is
known as the readers-writers problem.
Ctd….

 First readers-writers problem:
 No reader is kept waiting unless a writer has already obtained permission to use the
shared resource.
 Second readers-writers problem:
 Once a writer is ready, that writer performs its write as soon as possible.
 A process wishing to modify the shared data must request the lock in write mode.
Multiple processes are permitted to concurrently acquire a reader-writer lock in read
mode, but only one process may acquire the lock for writing, as exclusive access is
required for writers.
 Shared data for the solution below:
 Semaphore mutex initialized to 1
 Semaphore wrt initialized to 1
 Integer readcount initialized to 0
Ctd…..

 The structure of a writer process:

 do {
  wait(wrt);
  // writing is performed
  signal(wrt);
 } while (TRUE);

 The structure of a reader process:

 do {
  wait(mutex);
  readcount++;
  if (readcount == 1)
   wait(wrt);
  signal(mutex);
  // reading is performed
  wait(mutex);
  readcount--;
  if (readcount == 0)
   signal(wrt);
  signal(mutex);
 } while (TRUE);
3) Dining Philosophers problem

 Five philosophers are seated on 5 chairs around a table. Each
philosopher has a plate full of noodles, and each needs a pair of
forks to eat. There are only 5 forks available altogether, with exactly
one fork between any two plates of noodles.
 In order to eat, a philosopher lifts two forks, one to his left and the
other to his right. If he is successful in obtaining the two forks, he starts
eating; after some time, he stops eating and puts both forks down.
Dining philosopher problem ctd…

 What if all 5 philosophers decide to eat at the same time?
 All 5 philosophers would attempt to pick up two forks at the same
time, so none of them would succeed.
 One simple solution is to represent each fork with a semaphore. A
philosopher tries to grab a fork by executing a wait() operation on that
semaphore and releases his forks by executing the signal()
operation. This solution guarantees that no two neighbours are eating
simultaneously.
 However, suppose all 5 philosophers become hungry simultaneously and each
grabs his left fork: each will then be delayed forever (deadlock).
Ctd….

 The structure of philosopher i:

 do {
  wait(chopstick[i]);
  wait(chopstick[(i + 1) % 5]);
  // eat
  signal(chopstick[i]);
  signal(chopstick[(i + 1) % 5]);
  // think
 } while (TRUE);
Ctd…

 Several remedies:
 1) Allow at most 4 philosophers to be sitting simultaneously at the
table.
 2) Allow a philosopher to pick up his forks only if both forks are available.
 3) An odd philosopher picks up first his left fork and then his right fork;
an even philosopher picks up his right fork first and then his left fork.

END
