Group 2 OS Work
GROUP II
LECTURER: MADAM BARBARA ASINGIRWE KABWIGA
GROUP MEMBERS
Process Management: The operating system is responsible for the following activities in
connection with process management: the creation and deletion of both user and
system processes; the scheduling of processes; and the provision of mechanisms for
synchronization, communication, and deadlock handling for processes.
A process, on the other hand, includes:
The current value of the Program Counter (PC)
The contents of the processor's registers
The values of its variables
The process stack (pointed to by the stack pointer, SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables
A data section that contains global variables
The Process Control Block (PCB), a sketch of which is given below
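The items above are what the operating system records about each process, typically inside a Process Control Block. As a rough illustration only (the field names and types here are assumptions, not a real kernel's layout), a PCB might be declared in C as:

/* A minimal, hypothetical Process Control Block (PCB). */
struct pcb {
    int            pid;             /* process identifier                 */
    int            state;           /* new, ready, running, waiting, ...  */
    unsigned long  program_counter; /* address of the next instruction    */
    unsigned long  registers[16];   /* saved contents of the CPU registers*/
    unsigned long  stack_pointer;   /* top of the process stack (SP)      */
    int            priority;        /* scheduling information             */
    struct pcb    *next;            /* link used when the PCB is queued   */
};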
Scheduling Objectives
Maximize throughput.
Maximize number of users receiving acceptable response times.
Be predictable.
Balance resource use.
Avoid indefinite postponement.
Enforce Priorities.
Give preference to processes holding key resources
Process Queues
Just as people occupy rooms, processes reside in structures known as queues. There are three types:
1. Job queue: when processes enter the system, they are put into a job queue, which consists of all processes in the system. Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue: a process that is present in main memory and is ready to be allocated the CPU for execution is kept in the ready queue.
3. Device queue: a process that is in the waiting state, waiting for an I/O event to complete, is said to be in a device queue. In other words, the set of processes waiting for a particular I/O device forms that device's queue.
A minimal sketch of how such a queue can be represented is given below.
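As an illustration only (the structure and function names are assumptions, not taken from the slides), a ready or device queue can be kept as a linked list of PCBs:

#include <stddef.h>

struct pcb { int pid; struct pcb *next; };   /* simplified PCB for this sketch */

struct queue { struct pcb *head, *tail; };   /* e.g. the ready queue           */

/* Append a process to the tail of the queue. */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Remove and return the process at the head of the queue (NULL if empty). */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}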
Scheduling Criteria
Different CPU scheduling algorithms have different properties and may favor one class of
processes over another. In choosing which algorithm to use in a particular situation, we must
consider the properties of the various algorithms.
Many criteria have been suggested for comparing CPU scheduling algorithms.
Criteria that are used include the following:
CPU utilization: the CPU must be kept as busy as possible. CPU utilization can range from 0 to 100 percent; in a real system, it should range from about 40 to 90 percent.
Throughput: the number of processes that are completed per time unit.
Turnaround time: the interval from the time of submission of a process to the time of its completion. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time: the amount of time that a process spends waiting in the ready queue.
Response time: the time from the submission of a request until the first response is produced. Interactive systems use response time as their measure.
Scheduling Algorithms
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, an I/O request, or an invocation of wait for the termination of one of its child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
completion of I/O).
4. When a process terminates.
Try this Qn: Suppose we use the example above (the FCFS example with CPU burst times of 24, 3, and 3 milliseconds for P1, P2, and P3), but with the processes arriving in the order P2, P3, P1. What is the average waiting time?
With the processes arriving in the order P2, P3, P1, the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds, whereas when the processes arrive in the order P1, P2, P3, the average waiting time is 17 milliseconds.
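As a quick check of this arithmetic, here is a minimal sketch in C (assuming the burst times 24, 3, and 3 ms implied by the numbers above) that computes the FCFS average waiting time for a given service order:

#include <stdio.h>

/* Average waiting time under FCFS when bursts are served in the given order. */
double fcfs_avg_wait(const int burst[], int n) {
    double total_wait = 0, start = 0;
    for (int i = 0; i < n; i++) {
        total_wait += start;      /* process i waits for all earlier bursts */
        start += burst[i];
    }
    return total_wait / n;
}

int main(void) {
    int order1[] = {24, 3, 3};    /* served as P1, P2, P3 */
    int order2[] = {3, 3, 24};    /* served as P2, P3, P1 */
    printf("P1,P2,P3: %.0f ms\n", fcfs_avg_wait(order1, 3));  /* 17 ms */
    printf("P2,P3,P1: %.0f ms\n", fcfs_avg_wait(order2, 3));  /*  3 ms */
    return 0;
}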
Shortest-Job-First Scheduling (SJF)
SJF needs an estimate of the length of the next CPU burst, which is usually computed as an exponential average of the measured lengths of previous bursts. Let
tn be the length of the nth CPU burst (i.e. it contains the most recent information),
τn store the past history, and
τn+1 be our predicted value for the next CPU burst.
The prediction is then
τn+1 = α tn + (1 − α) τn
where α controls the relative weight of recent and past history in the prediction (0 ≤ α ≤ 1):
If α = 0, then τn+1 = τn: recent history has no effect.
If α = 1, then τn+1 = tn: only the most recent CPU burst matters.
If α = 1/2, recent history and past history are equally weighted.
A small sketch of this prediction in code follows.
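A minimal sketch (the function name and the sample values are assumptions, purely for illustration) of how the next burst could be predicted:

/* Exponential average: tau_next = alpha * t_n + (1 - alpha) * tau_n */
double predict_next_burst(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

/* Example: with alpha = 1/2, a previous prediction tau_n = 10 and a
   measured burst t_n = 6, the next prediction is 0.5*6 + 0.5*10 = 8. */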
Process Synchronization
Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions. Coordination of simultaneous processes to complete a task is known as process synchronization.
The critical section problem
Consider a system consisting of n processes, each of which has a segment of code called its critical section.
E.g.: a railway reservation system. Two persons at different stations want to reserve tickets for the same train and destination, and they try to get the reservation at the same time. Unfortunately, only one berth is available, and both are trying for that berth.
This is the critical section problem. The solution requires that when one process is executing in its critical section, no other process is allowed to execute in its critical section. A general structure of such a solution, using a lock, is:
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
The TestAndSet instruction can be used to implement such a lock. When a process wants to enter its critical section and the value of lock is false, TestAndSet returns false and the value of lock becomes true; thus, for other processes wanting to enter their critical sections, TestAndSet returns true, and those processes busy-wait until the first process exits its critical section and sets the value of lock back to false.
• Definition:
boolean TestAndSet(boolean &lock) {
    boolean temp = lock;   /* remember the old value             */
    lock = true;           /* set the lock (done atomically)     */
    return temp;           /* false means the lock was free      */
}
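How a process would use this to protect its critical section (a sketch, not taken from the slides): in standard C the same idea is available through <stdatomic.h>, whose atomic_flag_test_and_set plays the role of TestAndSet above.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* initially clear (unlocked) */

void worker(void) {
    for (;;) {
        while (atomic_flag_test_and_set(&lock))
            ;                          /* busy wait while the lock was already set */
        /* ---- critical section ---- */
        atomic_flag_clear(&lock);      /* release: set the lock back to false */
        /* ---- remainder section ---- */
    }
}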
Peterson's solution is one of the solutions to the critical section problem involving two processes. This solution states that when one process is executing its critical section, the other process executes the rest of the code, and vice versa.
Peterson's solution requires two shared data items:
1) turn: indicates whose turn it is to enter the critical section. If turn == i, then process i is allowed into its critical section.
2) flag: indicates when a process wants to enter its critical section. When process i wants to enter its critical section, it sets flag[i] to true.
A sketch of the algorithm is given below.
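A minimal sketch in C of Peterson's algorithm for processes 0 and 1 (the function names are illustrative; note that on modern hardware the shared variables would additionally need atomic/memory-barrier treatment for this to be correct):

#include <stdbool.h>

bool flag[2] = { false, false };  /* flag[i]: process i wants to enter    */
int  turn = 0;                    /* whose turn it is to enter            */

/* Entry section for process i; the other process is j = 1 - i. */
void enter_region(int i) {
    int j = 1 - i;
    flag[i] = true;               /* announce interest                    */
    turn = j;                     /* give the other process the turn      */
    while (flag[j] && turn == j)
        ;                         /* busy wait while the other may enter  */
}

/* Exit section for process i. */
void leave_region(int i) {
    flag[i] = false;              /* no longer interested                 */
}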
Semaphores
A semaphore is an integer variable that is accessed only through two standard operations:
1) wait: the wait operation decrements the count by 1. If the resulting value is negative, the process executing the wait operation is blocked.
2) signal: the signal operation increments the count by 1. If the resulting value is not positive, then one of the processes blocked in a wait operation is unblocked.
The classical busy-waiting definitions are:
wait(S) {
    while (S <= 0)
        ;      /* no-op: busy wait */
    S--;
}
signal(S) {
    S++;
}
Mutual exclusion using a binary semaphore mutex (initialised to 1):
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
The first process that executes the wait operation is granted entry immediately, and sem.count drops to 0. If some other process then wants the critical section and executes wait(), it is blocked, since the value becomes -1. When the first process exits the critical section it executes signal(); sem.count is incremented by 1, and a blocked process is removed from the semaphore's queue and added to the ready queue. A user-level sketch of such a blocking semaphore is given below.
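The slides describe a kernel-level implementation that blocks waiters on a queue instead of busy waiting. As an illustration only, a user-level approximation using POSIX threads (the name csem is invented here, and the count stays non-negative rather than going to -1 as in the description above) might look like:

#include <pthread.h>

typedef struct {
    int              count;    /* number of available "permits"          */
    pthread_mutex_t  lock;     /* protects count                         */
    pthread_cond_t   nonzero;  /* where waiters sleep (no busy waiting)  */
} csem;

void csem_init(csem *s, int initial) {
    s->count = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void csem_wait(csem *s) {               /* the wait() operation   */
    pthread_mutex_lock(&s->lock);
    while (s->count <= 0)
        pthread_cond_wait(&s->nonzero, &s->lock);  /* block the caller */
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem *s) {             /* the signal() operation */
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);   /* wake one blocked process */
    pthread_mutex_unlock(&s->lock);
}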
Problems:
1) Deadlock
Deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other blocked processes.
2) Starvation
One or more processes are blocked forever and never get a chance to take their turn in the critical section.
3) Priority inversion
A low-priority process is running, medium-priority processes are waiting for the low-priority process, and high-priority processes are waiting for the medium-priority processes; this is called priority inversion.
The two most common kinds of semaphores are counting semaphores and binary semaphores. Counting semaphores represent multiple resources, while binary semaphores, as the name implies, represent two possible states (generally 0 or 1; locked or unlocked). A short usage example follows.
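As an illustration only (the resource count of 3 and the function names are assumptions), a counting semaphore in POSIX C guarding a pool of identical resources:

#include <semaphore.h>

sem_t pool;                 /* counting semaphore for a pool of resources  */

void setup(void) {
    sem_init(&pool, 0, 3);  /* 3 identical resources available; a value    */
}                           /* of 1 here would make it a binary semaphore  */

void use_resource(void) {
    sem_wait(&pool);        /* decrement; blocks when all 3 are in use     */
    /* ... use one of the resources ... */
    sem_post(&pool);        /* increment; wakes a blocked waiter, if any   */
}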
Several remedies to the deadlock possibility in the dining-philosophers problem:
1) Allow at most 4 philosophers to be sitting simultaneously at the table.
2) Allow a philosopher to pick up his forks only if both forks are available.
3) An odd-numbered philosopher picks up his left fork first and then his right fork, while an even-numbered philosopher picks up his right fork first and then his left fork (a sketch of this remedy is given below).
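A minimal sketch of remedy 3 using POSIX semaphores (the fork numbering, with philosopher i holding forks i and (i+1) mod 5, is an assumption for illustration; the asymmetric pick-up order prevents a cycle in which everyone holds exactly one fork):

#include <semaphore.h>

#define N 5
sem_t fork_sem[N];          /* one binary semaphore per fork, each          */
                            /* initialised with sem_init(&fork_sem[k], 0, 1) */

void pick_up_forks(int i) {
    int left = i, right = (i + 1) % N;
    if (i % 2 == 1) {                 /* odd philosopher: left, then right  */
        sem_wait(&fork_sem[left]);
        sem_wait(&fork_sem[right]);
    } else {                          /* even philosopher: right, then left */
        sem_wait(&fork_sem[right]);
        sem_wait(&fork_sem[left]);
    }
}

void put_down_forks(int i) {
    sem_post(&fork_sem[i]);
    sem_post(&fork_sem[(i + 1) % N]);
}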
END