
Unit: 2

CPU Scheduling

By: Parveen Kaur


CPU Scheduling
• Types of Scheduling
• Scheduling Algorithms, Scheduling Criteria
• CPU scheduler – preemptive and non-preemptive, Dispatcher, First Come First Served, Shortest Job First, Round Robin, Priority
• Multilevel feedback queue
• Multiprocessor scheduling
• Real-time scheduling
• Thread scheduling
Process Scheduler

• Long-term scheduler:
  • Brings processes to the Ready state
• Short-term scheduler (dispatcher / CPU scheduler):
  • Brings a process to the Running state
• Medium-term scheduler:
  • Swaps processes out of main memory to secondary memory, and swaps them back in
What is CPU Scheduling?

• CPU scheduling is the process of determining which process will own the CPU for execution while other processes are on hold.
• The main task of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue for execution.
• The selection is carried out by the CPU scheduler, which picks one of the processes in memory that are ready to execute.
CPU Scheduling
• CPU scheduling decisions may take place when a process:
  1. Switches from the running to the waiting state
  2. Switches from the running to the ready state
  3. Switches from the waiting to the ready state
  4. Terminates
• Scheduling under 1 and 4 is non-preemptive
• All other scheduling is preemptive – with implications for data sharing between threads/processes
• Types of CPU scheduling:
  • Preemptive
  • Non-preemptive / cooperative
Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the scheduler; this involves:
  • switching context
  • switching to user mode
  • jumping to the proper location in the user program to restart that program
• Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
The choice among CPU-scheduling algorithms depends on the following properties:
• CPU utilization – keep the CPU as busy as possible (in practice, still not 100%)
• Throughput – number of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process (Ready to Terminated state)
  • Includes time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O
• Waiting time – amount of time a process has been waiting in the ready queue (sum of all periods spent in the ready queue)
• Response time – amount of time from when a request was submitted until the first response is produced, not the output (for time-sharing environments)
Scheduling Algorithm
• Optimization criteria:
  • Maximum CPU utilization
  • Maximum throughput
  • Minimum turnaround time
  • Minimum waiting time
  • Minimum response time

Turnaround time = completion time − arrival time
                = burst time + waiting time
Scheduling Algorithm Types
• First-Come, First-Served Scheduling (FCFS)
• Shortest-Job-First Scheduling (SJF)
• Priority Scheduling
• Round-Robin Scheduling
• Multilevel Queue Scheduling
• Multilevel Feedback Queue Scheduling
First Come First Served Scheduling
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
• The CPU is allocated to the process at the head of the queue.

Process   Burst Time
P1        24 ms
P2        3 ms
P3        3 ms

• Suppose that the processes arrive in the order: P1, P2, P3
• The Gantt chart for the schedule is:

| P1 (0–24) | P2 (24–27) | P3 (27–30) |

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17 ms
First Come First Served Scheduling contd..
• Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is:

| P2 (0–3) | P3 (3–6) | P1 (6–30) |

• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3 ms
• Much better than the previous case
• The average waiting time under FCFS scheduling is often quite long
• FCFS suffers from the convoy effect (short processes stuck behind a long one) and is non-preemptive; see the sketch below
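
A minimal sketch of the FCFS waiting-time calculation used in the example above (the array layout and process names are illustrative, not part of any standard API):

    #include <stdio.h>

    /* FCFS: each process waits for the total burst time of all
     * processes ahead of it; all are assumed to arrive at time 0. */
    int main(void) {
        int burst[] = {24, 3, 3};            /* P1, P2, P3 */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %d ms\n", i + 1, wait);
            total += wait;
            wait += burst[i];                /* queue moves forward */
        }
        printf("Average waiting time: %.2f ms\n", (double)total / n);
        return 0;
    }

Running it prints waits of 0, 24, and 27 ms and an average of 17.00 ms, matching the first arrival order above.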
Shortest-Job-First Scheduling
• Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
• Also known as shortest-next-CPU-burst scheduling: it considers only the next CPU burst of a process rather than its total length.
• If two processes have the same CPU burst, FCFS scheduling is used to break the tie.
• SJF is optimal – it gives the minimum average waiting time for a given set of processes.
• The difficulty is knowing the length of the next CPU request.
Shortest-Job-First Scheduling contd..

Process   Arrival Time   Burst Time
P1        0.0            6 ms
P2        2.0            8 ms
P3        4.0            7 ms
P4        5.0            3 ms

• SJF scheduling chart (the waiting times below are computed treating all four processes as available when scheduling begins):

| P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |

• Waiting time for P1 = 3; P2 = 16; P3 = 9; P4 = 0
• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms
• Average waiting time under FCFS = (0 + 6 + 14 + 21) / 4 = 10.25 ms
Determining Length of Next CPU Burst
• We can only estimate the length of the next burst
• This can be done using the lengths of previous CPU bursts, via exponential averaging
Determining Length of Next CPU Burst
1. tₙ = actual length of the n-th CPU burst
2. τₙ₊₁ = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τₙ₊₁ = α·tₙ + (1 − α)·τₙ

• If α = 0:
  τₙ₊₁ = τₙ
  Recent history does not count.
• If α = 1:
  τₙ₊₁ = tₙ
  Only the actual last CPU burst counts.

• If we expand the formula, we get:
  τₙ₊₁ = α·tₙ + (1 − α)·α·tₙ₋₁ + … + (1 − α)ʲ·α·tₙ₋ⱼ + … + (1 − α)ⁿ⁺¹·τ₀
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
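
To make the recurrence concrete, here is a small sketch that iterates τₙ₊₁ = α·tₙ + (1 − α)·τₙ; the burst history, the initial guess τ₀ = 10, and α = 0.5 are illustrative values only:

    #include <stdio.h>

    /* Exponential averaging of CPU burst lengths:
     * tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n) */
    int main(void) {
        double alpha = 0.5;
        double tau = 10.0;                       /* initial guess tau(0) */
        double bursts[] = {6, 4, 6, 4, 13, 13, 13};
        int n = sizeof bursts / sizeof bursts[0];

        for (int i = 0; i < n; i++) {
            printf("predicted %5.2f   actual %2.0f\n", tau, bursts[i]);
            tau = alpha * bursts[i] + (1 - alpha) * tau;
        }
        printf("next prediction: %.2f\n", tau);
        return 0;
    }

Note how the prediction tracks the actual bursts with a lag: old history decays geometrically while the most recent burst always gets weight α.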
Shortest-Job-First Scheduling contd..
• SJF scheduling can be either preemptive or non-preemptive.
• The choice arises when a new process arrives at the ready queue while a previous process is still executing.
• If the new process's next CPU burst is less than the remaining time of the executing process, a preemptive SJF algorithm will preempt the currently executing process.
• Also called shortest-remaining-time-first scheduling.
• A non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst.
Preemptive and Nonpreemptive SJF

Process   Burst Time   Arrival Time
P1        8            0
P2        4            1
P3        9            2
P4        5            3

• Preemptive SJF scheduling chart:

| P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |

• Preemptive SJF average waiting time:
  [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)] / 4 = 26/4 = 6.5 ms
• Non-preemptive average waiting time:
  [0 + (8 − 1) + (17 − 2) + (12 − 3)] / 4 = 31/4 = 7.75 ms
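
The preemptive (shortest-remaining-time-first) numbers above can be checked with a unit-time simulation; this is a hedged sketch for this example, not a production scheduler:

    #include <stdio.h>

    /* SRTF simulated one time unit at a time, using the table above. */
    int main(void) {
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {8, 4, 9, 5};
        int remain[]  = {8, 4, 9, 5};
        int n = 4, done = 0, total_wait = 0;

        for (int t = 0; done < n; t++) {
            int pick = -1;                /* arrived, unfinished, shortest remaining */
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remain[i] > 0 &&
                    (pick < 0 || remain[i] < remain[pick]))
                    pick = i;
            if (pick < 0) continue;       /* CPU idle this tick */
            if (--remain[pick] == 0) {
                done++;                   /* waiting = completion - arrival - burst */
                total_wait += (t + 1) - arrival[pick] - burst[pick];
            }
        }
        printf("Average waiting time: %.2f ms\n", (double)total_wait / n);
        return 0;
    }

It prints 6.50 ms, agreeing with the hand calculation above.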
Priority Scheduling
• A priority number (integer) is associated with each process.
• The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
  • Preemptive
  • Non-preemptive
• Equal-priority processes are scheduled in FCFS order.
• SJF is a priority-scheduling algorithm in which the priority is the predicted next CPU burst time
• Problem: starvation – low-priority processes may never execute
• Solution: aging – as time progresses, increase the priority of the process
Priority Scheduling contd..

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

• Gantt chart:

| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |

• Average waiting time: (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms
• Priorities can be defined either internally or externally.
• Priority scheduling can be preemptive or non-preemptive.
Priority Scheduling contd..
• Internally defined priority: computed from measurable quantities such as time limits, memory requirements, number of open files, or the ratio of average I/O burst to average CPU burst.
• Externally defined priority: based on criteria set outside the operating system, such as the importance of the process or the type and amount of funds being paid for computer use.
• Preemptive priority scheduling: preempts the CPU if a newly arrived process in the ready queue has a higher priority.
• Non-preemptive priority scheduling: simply puts the new process at the head of the ready queue. (A sketch of the aging remedy follows below.)
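
The aging remedy for starvation mentioned earlier can be sketched as a periodic pass over the ready queue; the interval, the step size, and the struct layout are arbitrary illustrative choices:

    /* Aging sketch: periodically raise the priority of every waiting
     * process so low-priority work eventually runs. Smaller number =
     * higher priority, matching the convention above. */
    #define AGING_INTERVAL_MS 100        /* illustrative period between passes */

    struct proc { int pid; int priority; int waiting_ms; };

    void age_ready_queue(struct proc *q, int n) {
        for (int i = 0; i < n; i++) {
            q[i].waiting_ms += AGING_INTERVAL_MS;
            if (q[i].priority > 0)       /* 0 is the highest priority */
                q[i].priority--;         /* move one step up per pass */
        }
    }

Called on every timer tick (or every few ticks), this guarantees that a process waiting long enough eventually reaches the highest priority and runs.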
Round Robin Scheduling
• Especially designed for time-sharing systems.
• Similar to FCFS scheduling, but with time-based preemption added.
• Each process gets a small unit of CPU time (time quantum or time slice), usually 10–100 milliseconds.
• If a process has a CPU burst of less than 1 time quantum, the process itself releases the CPU voluntarily. The scheduler then proceeds to the next process in the ready queue.
• If the CPU burst is more than 1 time quantum, a timer interrupt preempts the process. A context switch is executed and the process is placed at the tail of the ready queue (circular FIFO).
• We can bound the waiting time: if there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units. For example, with n = 5 and q = 20 ms, no process waits longer than 80 ms for its next slice.
Round Robin Scheduling contd..
• Time quantum is 4 milliseconds

Process   Burst Time
P1        24
P2        3
P3        3

• The Gantt chart is:

| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |

• Average waiting time: [(10 − 4) + 4 + 7]/3 = 17/3 = 5.66 ms
• Typically, higher average turnaround time than SJF, but better response time
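
A sketch that reproduces the round-robin average above; because all three processes arrive at time 0 and none arrive later, a simple circular scan over the array behaves exactly like the FIFO ready queue:

    #include <stdio.h>

    /* Round robin with a 4 ms quantum over bursts of 24, 3, 3 ms. */
    int main(void) {
        int burst[]  = {24, 3, 3};
        int remain[] = {24, 3, 3};
        int n = 3, t = 0, done = 0, total_wait = 0, q = 4;

        while (done < n) {
            for (int i = 0; i < n; i++) {          /* circular scan = FIFO here */
                if (remain[i] == 0) continue;
                int slice = remain[i] < q ? remain[i] : q;
                t += slice;
                remain[i] -= slice;
                if (remain[i] == 0) {
                    done++;
                    total_wait += t - burst[i];    /* all arrive at time 0 */
                }
            }
        }
        printf("Average waiting time: %.2f ms\n", (double)total_wait / n);
        return 0;
    }

It prints 17/3 ≈ 5.67 ms, matching the calculation above.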
Round Robin Scheduling contd..
• Performance depends heavily on the size of the time slice.
• If the time slice is extremely large, RR behaves the same as FCFS scheduling (q large → FIFO).
• If the time slice is extremely small (e.g., q = 1 ms), there will be a large number of context switches.
• Thus, the time quantum should be large with respect to the context-switch time: the switch should cost only a small fraction of the quantum.
Round Robin Scheduling contd..
• Turnaround time also depends on the size of the time quantum (though not on its size alone).
• Turnaround time can improve if most processes finish their next CPU burst within a single time quantum.
Multilevel Queue Scheduling
• The ready queue is partitioned into separate queues on the basis of the type, priority, or memory size of the process:
  • foreground (interactive) processes
  • background (batch) processes
• Each queue has different response-time requirements, so different scheduling is required.
• Each queue has its own scheduling algorithm:
  • foreground – RR
  • background – FCFS
• Scheduling must also be done between the queues:
  • Fixed-priority preemptive scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  • Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
Multilevel Queue Scheduling contd..
• Each queue has absolute priority over lower-priority queues.
Multilevel Queue Scheduling contd..

Process   AT      BT     Queue Priority
P1        0 ms    4 ms   1
P2        0 ms    3 ms   1
P3        0 ms    8 ms   2
P4        10 ms   5 ms   1

Queue priority 1 follows round robin with a time quantum of 2 ms.
Queue priority 2 follows FCFS.
Gantt chart:

| P1 (0–2) | P2 (2–4) | P1 (4–6) | P2 (6–7) | P3 (7–10) | P4 (10–12) | P4 (12–14) | P4 (14–15) | P3 (15–20) |
Multilevel Feedback Queue Scheduling
• A process can move between the various queues on the basis of its CPU burst behaviour:
  • If a process uses too much CPU time, it is moved to a lower-priority queue.
  • This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
  • A process waiting too long in a lower-priority queue may be moved to a higher-priority queue. Aging can be implemented in this way.
• Example: three queues:
  • Q0 – time quantum 8 milliseconds
  • Q1 – time quantum 16 milliseconds
  • Q2 – FCFS
Multilevel Feedback Queue Scheduling
• Scheduling when a job enters the ready queue:
  • A new job enters queue Q0. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to the tail of queue Q1.
  • At Q1 the job is again served and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
  • At Q2 jobs are served on an FCFS basis.
Multilevel Feedback Queue Scheduling
• A multilevel-feedback-queue scheduler is defined by the following parameters:
  • number of queues
  • scheduling algorithm for each queue
  • method used to determine when to upgrade a process
  • method used to determine when to demote a process
  • method used to determine which queue a process will enter when that process needs service
• It is the most general CPU-scheduling algorithm; unfortunately, it is also the most complex, since it requires some means of selecting values for all the parameters, as the configuration sketch below illustrates.
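
One way to picture "defined by the following parameters" is as a configuration record; the field names and values below are an illustrative sketch mirroring the three-queue example, not an OS interface:

    /* Parameters that define a multilevel feedback queue scheduler. */
    struct mlfq_level {
        int quantum_ms;                  /* time slice; 0 means FCFS */
    };

    struct mlfq_config {
        int nlevels;
        struct mlfq_level level[3];
        int demote_on_full_quantum;      /* used its whole slice -> lower queue */
        int promote_after_ms;            /* waited this long -> higher queue (aging) */
    };

    static const struct mlfq_config example = {
        .nlevels = 3,
        .level   = { {8}, {16}, {0} },   /* Q0 = 8 ms, Q1 = 16 ms, Q2 = FCFS */
        .demote_on_full_quantum = 1,
        .promote_after_ms = 1000,        /* illustrative aging threshold */
    };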
Thread
• A thread is also known as a lightweight process.
• The idea is to achieve parallelism by dividing a process into multiple threads.
• For example, in a browser, different tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, and so on.
• The operating system creates and manages threads, and they share the same memory and resources as the program that created them.
• Process vs Thread:
  • A process is a program in execution, whereas a thread is a segment of a process.
  • The primary difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces.
Process vs Thread:
• Threads are not independent of one another the way processes are; as a result, threads share with other threads their code section, data section, and OS resources (such as open files and signals). But, like a process, a thread has its own program counter (PC), register set, and stack space.
Types of threads:
• User-level threads: small and fast, implemented by a user-level library; the kernel is not aware of them
• Kernel-level threads: slower, implemented by the kernel (operating system)
• Multithreading models:
  • One-to-one model
  • Many-to-one model
  • Many-to-many model
  • Hybrid model
Thread Scheduling
• Scheduling of user-level threads (ULT) onto kernel-level threads (KLT) via lightweight processes (LWP), controlled by the application developer.
• Scheduling of kernel-level threads by the system scheduler to perform different OS functions.
• Thread scheduling involves two controls:
  • Contention scope
  • Allocation domain
Contention Scope
• The word contention here refers to the competition among user-level threads for access to the kernel resources (i.e., the CPU):
• Process Contention Scope (PCS) – the contention takes place among threads within the same process.
  • The thread library schedules the highest-priority PCS thread onto an available LWP (with priorities as specified by the application developer during thread creation).
  • In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on LWPs.
• System Contention Scope (SCS) – the contention takes place among all threads in the system.
  • In this case, every SCS thread is associated with an LWP by the thread library and is scheduled by the system scheduler for access to the kernel resources (one-to-one model).
Pthread Scheduling
• Pthreads: the API for thread creation and synchronization defined by the POSIX standard
• The API allows specifying either PCS or SCS during thread creation:
  • PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
  • PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
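
A minimal Pthreads example requesting a contention scope at thread creation; note that PTHREAD_SCOPE_PROCESS is not supported on all systems (Linux, for instance, supports only system scope), so the error check matters:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) { return arg; }

    int main(void) {
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);
        /* Request SCS: the thread competes with all threads in the system. */
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
            fprintf(stderr, "setscope not supported\n");

        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }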
Allocation Domain
• An allocation domain is a set of one or more resources for which a thread competes.
• In a multicore system, there may be one or more allocation domains, each consisting of one or more cores.
• One ULT can be part of one or more allocation domains.
• Due to the high complexity of the hardware and software architectural interfaces involved, this control is not specified; by default, the multicore system will have an interface that affects the allocation domain of a thread.
• Consider a scenario: 3 processes, P1, P2, P3, and 10 user-level threads (T1 to T10) with a single allocation domain. The amount of CPU resources allocated to each process and thread depends on the contention scope, scheduling policy, and priority of each thread as defined by the application developer using the thread library, and also on the system scheduler. These user-level threads have different contention scopes.
Allocation Domain contd..
[Figure: mapping of threads T1–T10 in processes P1–P3 onto LWPs and kernel-level threads within a single allocation domain]
• Process P1: all PCS threads T1, T2, T3 of process P1 compete among themselves. PCS threads of the same process can share one or more LWPs: here T1 and T2 share an LWP, while T3 is allocated a separate LWP.
• Between T1 and T2, allocation of kernel resources via their shared LWP is based on preemptive priority scheduling by the thread library; a high-priority thread will preempt low-priority threads.
• However, thread T1 of process P1 cannot preempt thread T3 of process P1 even if the priority of T1 is greater than that of T3. If the priorities are equal, the allocation of ULTs to available LWPs follows the scheduling policy applied by the system scheduler (not by the thread library, in this case).
Allocation Domain contd..
• Process P2: both SCS threads T4 and T5 of process P2 compete with process P1 as a whole and with the SCS threads T8, T9, T10 of process P3. The system scheduler schedules the kernel resources among P1, T4, T5, T8, T9, T10, and the PCS threads (T6, T7) of process P3, considering each as a separate process. Here, the thread library has no control over scheduling the ULTs onto the kernel resources.
• Process P3: a combination of PCS and SCS threads. If the system scheduler allocates 50% of the CPU resources to process P3, then 25% of the resources go to the process-scoped threads and the remaining 25% to the system-scoped threads. The PCS threads T6 and T7 share their 25% of the resources based on priority, as arbitrated by the thread library. The SCS threads T8, T9, T10 divide their 25% among themselves, each accessing the kernel resources via a separate LWP and KLT; their scheduling is done by the system scheduler.
Real Time Scheduling
• A real-time operating system (RTOS) serves real-time applications that process data without buffering delay.
• It is a time-bound system (time-constrained): processing must be done within the specified constraints, otherwise the system fails.
• Soft real-time scheduling does not guarantee when a critical real-time process will be scheduled.
• Hard real-time scheduling requires that the process be scheduled before its deadline.
Points to Remember in RTS Algorithms
• Processes are considered periodic, i.e., a process repeats itself after a fixed period of time. The period of a process is denoted by p.
• The processing time t is the time for which the process requires the CPU within each period (its burst time).
• The deadline d is the time before which the process must be serviced.
• These satisfy 0 ≤ t ≤ d ≤ p.
Real Time Scheduling Algorithms
• Rate-monotonic scheduling: priorities are assigned to processes on the basis of their periods.
  • Soft real-time scheduling: a preemptive, priority-based scheduler is used.
  • Hard real-time scheduling: an admission-control algorithm is used (a process is accepted or rejected on the basis of its deadline).
• Earliest-deadline-first scheduling: dynamically assigns priorities according to deadlines.
Rate Monotonic Scheduling
• A static priority policy with preemption is used.
• Whenever a higher-priority process arrives, it preempts a lower-priority process.
• Every process gets its priority according to its period: the lower the period, the higher the priority.
• Also, the processing time is assumed to remain the same in each period.
Rate Monotonic Scheduling contd..
• Suppose there are two processes, P1 and P2. The periods for P1 and P2 are 50s and 100s. The processing times are t1 = 20s for P1 and t2 = 35s for P2. The deadline for each process requires that it complete its CPU burst by the start of its next period.

Process   Period/Deadline   CPU Burst Time
P1        50s               20s
P2        100s              35s

• Since the deadline for each process requires that it complete its CPU burst by the start of its next period, the deadline is the same as the period of the process.
Rate Monotonic Scheduling contd..
• The priority of P1 is higher than that of P2 because P1 has the smaller period. At time 0, P1 begins.
• P1 finishes at 20s and then P2 begins its execution. At time 50 P1's second period starts; at that point P2 has completed only 30s of its burst.
• P1 preempts P2 because P1 has the higher priority, and runs from 50–70s. P2 then completes its remaining 5s from 70–75s and still meets its deadline of 100.
• The system remains idle until 100, when the third period of P1 and the second period of P2 start, and P1 begins execution. The rest of the execution follows the same order as from 0–100.
Rate Monotonic Scheduling contd..
• Failure of rate-monotonic scheduling:
• Assume that process P1 has a period of p1 = 50s and a CPU burst of t1 = 25s. For P2, the corresponding values are p2 = 80s and t2 = 35s.

Process   Period   CPU Burst Time
P1        50s      25s
P2        80s      35s

• P1 executes until 25s, then P2 begins execution and runs until 50s, when P1 begins its second period. By time 50s P2 has finished 25s of its burst and has 10s left.
• The second period of P1 runs from 50–75s. P2 then needs 10s more to finish, but only 5s remain before its deadline (which is 80s). Hence, P2 is not able to finish its execution within the deadline.
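
The slides do not state it, but the standard Liu & Layland test explains this failure: n periodic processes are guaranteed schedulable under rate-monotonic scheduling if total CPU utilization Σ tᵢ/pᵢ ≤ n(2^(1/n) − 1). A quick check (compile with -lm):

    #include <stdio.h>
    #include <math.h>

    /* Rate-monotonic schedulability bound (sufficient, not necessary). */
    int main(void) {
        double t[] = {25, 35};            /* bursts of the failing example */
        double p[] = {50, 80};            /* periods */
        int n = 2;

        double u = 0;
        for (int i = 0; i < n; i++) u += t[i] / p[i];
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);

        printf("utilization %.4f vs bound %.4f -> %s\n", u, bound,
               u <= bound ? "guaranteed schedulable" : "no guarantee");
        return 0;
    }

Here the utilization is 0.9375, above the two-process bound of about 0.8284, so rate-monotonic scheduling offers no guarantee; as traced above, P2 indeed misses its deadline. The first example (20/50 + 35/100 = 0.75) falls under the bound and is guaranteed schedulable.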
Earliest Deadline First Scheduling
• This algorithm assigns priority to a process based on its deadline: the earlier the deadline, the higher the priority. Thus, the priorities keep changing under this scheduling.
• Assume that process P1 has a period of p1 = 50s and a CPU burst of t1 = 25s. For P2, the corresponding values are p2 = 80s and t2 = 35s.
• The deadline of P1 (50s) is earlier than that of P2 (80s), so the initial priority of P1 is higher than that of P2. P1 executes from 0–25s.
• P2 starts at 25s. At 50s P1 arrives for its second period. At this point the deadline of P2 (80s) is earlier than the deadline of P1 (100s, for its second period), so P2 continues until 60s.
Earliest Deadline First Scheduling
• P1 starts at 60s. P2 arrives for its second period at 80s. At this point P1 has the higher priority because the deadline for P1 is 100s whereas for P2 it is 160s.
• P1 continues until 85s, after which P2 starts. At 100s P2 is preempted because the priority of P1 is higher: the deadline for its third period is 150s, which is earlier than the deadline of P2 (160s). So P1 runs from 100–125s and then P2 completes its second period from 125–145s. The system remains idle until 150s, and then P1 resumes its next period.
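
The core of EDF is simply "run the ready job with the earliest absolute deadline"; here is a hedged sketch of that selection step (the struct and helper are illustrative, not a standard interface):

    /* EDF selection: among ready jobs, pick the earliest absolute
     * deadline. The chosen job may preempt the running one. */
    struct job { int pid; long abs_deadline; };

    struct job *edf_pick(struct job *ready, int n) {
        struct job *best = NULL;
        for (int i = 0; i < n; i++)
            if (!best || ready[i].abs_deadline < best->abs_deadline)
                best = &ready[i];
        return best;
    }

Because the deadlines are absolute and move with each new period, the relative priorities change over time, which is exactly the dynamic behaviour traced in the example above.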
Multiprocessor Scheduling
• In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes possible.
• However, multiple-processor scheduling is more complex than single-processor scheduling.
• When the processors are identical, i.e., homogeneous in terms of their functionality, we can use any available processor to run any process in the queue.
• Approaches to multiple-processor scheduling:
  • Asymmetric multiprocessing
  • Symmetric multiprocessing
Multiprocessor Scheduling contd..
• Asymmetric multiprocessing: all scheduling decisions and I/O processing are handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing.
• Symmetric multiprocessing (SMP): each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute. Several issues concern SMP: processor affinity, load balancing, multicore processors, and memory stalls.
Processor Affinity
• Processor affinity means a process has an affinity for the processor on which it is currently running.
• When a process runs on a specific processor, the cache memory is affected: the data accessed by the process populates that processor's cache.
• If the process migrates to another processor, the contents of the first cache must be invalidated and the second cache repopulated.
• Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes from one processor to another and instead try to keep a process running on the same processor. This is known as processor affinity.

Processor Affinity contd..
• Soft affinity: when an operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee it will do so, this is called soft affinity.
• Hard affinity: hard affinity allows a process to specify a subset of processors on which it may run. Some systems, such as Linux, implement soft affinity but also provide system calls such as sched_setaffinity() that support hard affinity.
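
A minimal Linux example of hard affinity using the sched_setaffinity() call the slide mentions; it pins the calling process to CPU 0:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                     /* allow CPU 0 only */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof set, &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }

After this call the kernel will schedule the process only on CPU 0, regardless of the load elsewhere; that is hard affinity, in contrast to the best-effort soft affinity described above.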
Load Balancing
• Load balancing keeps the workload evenly distributed across all processors in an SMP system.
• Load balancing is necessary only on systems where each processor has its own private queue of processes eligible to execute.
• It is unnecessary on systems with a common queue, because once a processor becomes idle it immediately extracts a runnable process from the common run queue.
• On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor; otherwise one or more processors may sit idle while other processors have high workloads, with lists of processes awaiting the CPU.
Load Balancing contd..
• Push migration: a specific task routinely checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving processes from overloaded processors to idle or less-busy ones.
• Pull migration: occurs when an idle processor pulls a waiting task from a busy processor for its own execution.
Multicore Processors
• In multicore processors, multiple processor cores are placed on the same physical chip.
• Each core has a register set to maintain its architectural state, and thus appears to the operating system as a separate physical processor.
• SMP systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip.
• However, multicore processors may complicate scheduling. When a processor accesses memory, it can spend a significant amount of time waiting for the data to become available. This situation is called a memory stall.
Multicore Processors contd..
• A memory stall occurs for various reasons, such as a cache miss: accessing data that is not in the cache memory.
• In such cases the processor can spend up to 50% of its time waiting for data to become available from memory.
• To solve this problem, recent hardware designs have implemented multithreaded processor cores, in which two or more hardware threads are assigned to each core. Then if one thread stalls while waiting for memory, the core can switch to another thread.
Assignment
• List the advantages and disadvantages of each type of scheduling.
