Unit 2 RTOS
❖ Thread Scheduling
TASK: A task is a unit of execution within a job; one or more tasks make up a job.
In some operating systems, the terms task and process are used interchangeably.
• Release Time
• The release time of a job is the instant at which the job becomes available for execution. The job can be
scheduled and executed at any time at or after its release time, provided its data and control dependency
conditions are met.
• Aperiodic and periodic: The period of a process is the time between successive executions.
Periodic processes have regular arrival times and hard deadlines; for example, a periodic task with a
period of 10 ms is released at t = 0, 10, 20, ... ms. Aperiodic processes have irregular arrival times.
• Timing Constraints
Constraints imposed on the timing behavior of a job are called timing constraints, the most common being
deadlines.
Hard and Soft Timing Constraints
• It is common to divide timing constraints into two types: hard and soft.
• The classification is based on the functional criticality of the jobs, the usefulness of a late
result, and the deterministic or probabilistic nature of the constraints.
• Hard real time means strict adherence to each task deadline: when an event occurs, it must be
serviced within a predictable time, at all times, in a hard real-time system.
Hard Real-Time System: A hard real-time system guarantees that real-time
tasks are completed within their required deadlines. Failure to
meet a single deadline may lead to a critical system failure.
Examples: air traffic control, vehicle subsystem control, medical
systems.
Hard Real-Time System vs Soft Real-Time System
• Definition: A hard real-time system is one in which failure to meet even a single deadline may lead to
complete or catastrophic system failure. A soft real-time system is one in which one or more missed
deadlines are not considered a complete system failure, but performance is considered degraded.
• Restrictive Nature: A hard real-time system is very restrictive. A soft real-time system is not very restrictive.
• Deadline: A hard real-time system should not miss a deadline; missing a deadline causes complete or
catastrophic system failure. A soft real-time system can miss a deadline occasionally; a missed deadline is
not a complete system failure but degrades performance.
• Utility: A hard real-time system has more utility. A soft real-time system has less utility.
• Examples: hard real-time: air traffic control, vehicle subsystem control, medical systems; soft real-time:
multimedia streaming, online transaction systems.
Levels of Scheduling
[Figure: Process state transition diagram annotated with the three levels of scheduling: long-term scheduling admits New processes into the Ready or Ready/Suspend state, medium-term scheduling moves processes between the Ready/Blocked states and their Suspend counterparts, and short-term scheduling dispatches Ready processes to Running, from which they become Blocked or Exit.]
Nonpreemptive vs Preemptive Scheduling
◼ Nonpreemptive: once a process is in the running state, it continues until it terminates or blocks
itself for I/O. Waiting and response times are longer.
◼ Preemptive: the currently running process may be interrupted and moved to the ready state by the
OS. Preemption may occur when a new process arrives, on an interrupt, or periodically. Waiting and
response times are shorter.
• Scheduling Algorithms:
• Most Operating Systems today use very similar CPU time scheduling
algorithms, all based on the same basic ideas, but with Operating System-
specific adaptations and extensions.
• CPU scheduling is the task of selecting a waiting process from the ready
queue and allocating the CPU to it. The CPU is allocated to the selected
process by the dispatcher.
• A CPU scheduling algorithm should try to maximize the following:
1.CPU utilization
2.Throughput
• A CPU scheduling algorithm should try to minimize the following:
1.Turnaround time
2.Waiting time
3.Response time
Turnaround Time:
It is the time interval from the submission of a process to its completion. The difference between
Completion Time and Arrival Time is the Turnaround Time.
Response Time:
The difference between the arrival time and the time at which the process first
gets the CPU is the Response Time.
Wait Time:
The total time that a process spends waiting in the ready queue before
reaching the CPU.
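To make these three metrics concrete, here is a minimal sketch in Python with a hypothetical three-process workload (the names, arrival times, and burst times are illustrative assumptions, not taken from the text), computing the metrics for a simple non-preemptive FCFS schedule:

```python
# Minimal sketch (hypothetical workload): computing turnaround, waiting, and
# response time for a non-preemptive FCFS schedule on a single CPU.

processes = [
    # (name, arrival_time, burst_time)
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
]

time = 0
for name, arrival, burst in processes:       # FCFS: serve in arrival order
    start = max(time, arrival)               # CPU may sit idle until arrival
    completion = start + burst               # non-preemptive: runs to completion
    turnaround = completion - arrival        # completion time minus arrival time
    waiting = turnaround - burst             # time spent in the ready queue
    response = start - arrival               # first CPU allocation minus arrival
    print(f"{name}: turnaround={turnaround}, waiting={waiting}, response={response}")
    time = completion
```

Note that under non-preemptive FCFS the waiting time and response time of a process coincide, because once a process first gets the CPU it runs to completion.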
• There are 4 types of Scheduling Algorithms.
• Multiprocessing systems contain more than one processor and share other resources.
• These systems offer the advantage of increased throughput due to multiple processors, are economical, and
have increased reliability due to fault tolerance.
• Each processor has its own I/O devices and file system, so there is very little interdependence among the processors.
A process started on a processor runs to completion on that processor only.
• Since the resources are distributed among the individual OSs, there is minimal contention over OS resources. Another
benefit of this organization is its fault tolerance: if a single processor fails, there is no system failure, as another
processor may take over its work. Fault tolerance is easy to achieve because there is little coupling between the processors.
The disadvantage of this organization is that parallel execution of a single task is not possible.
MULTI-PROCESSOR SCHEDULING:
• 1. Time-sharing Scheduling: the simplest approach keeps a single system-wide data structure, a global run
queue ordered by priority; each processor that becomes idle picks the highest-priority ready process from
this shared queue.
[Figure: a global run queue with priority levels 4 down to 0, holding P3 and P5 at priority 4, P0 and P5 at priority 3, P2 and P4 at priority 2, P1 at priority 1, and P6 at priority 0.]
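As a minimal sketch of such a shared, priority-ordered run queue (the process names and priorities are hypothetical, loosely following the figure above):

```python
# Minimal sketch (hypothetical data): a single global run queue shared by all
# processors, ordered by priority; each idle CPU pops the highest-priority
# ready process.

import heapq

run_queue = []                                # shared by all processors

def add_process(priority, name):
    # heapq is a min-heap, so negate the priority to pop the highest first.
    heapq.heappush(run_queue, (-priority, name))

def pick_next(cpu):
    if run_queue:
        neg_priority, name = heapq.heappop(run_queue)
        print(f"{cpu} runs {name} (priority {-neg_priority})")

for prio, name in [(4, "P3"), (4, "P5"), (3, "P0"), (2, "P2"),
                   (2, "P4"), (1, "P1"), (0, "P6")]:
    add_process(prio, name)

for cpu in ["CPU0", "CPU1", "CPU2"]:
    pick_next(cpu)
```

In a real kernel this single shared queue must be protected by a lock, which is the classic source of contention in the time-sharing approach.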
• 2. Space-sharing Scheduling:
• When a group of related processes or threads is created, the scheduler looks for available processors. If the
number of available processors is equal to or greater than the number of processes, the processes are scheduled
on different processors. Otherwise, no process is started until enough processors are available (see the sketch
after this list).
• According to this scheduling, at an instant when related processes need to be scheduled on the
processors, a static partition can be made such that a group of related processes is assigned to one partition of
the processors. The number and size of the partitions, however, may change with time as processes arrive and
finish executing.
• Advantage: no context-switch time
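A minimal sketch of the admission rule described above, assuming a hypothetical machine with 8 processors and illustrative group sizes:

```python
# Minimal sketch (hypothetical data): space-sharing admission. A group of
# related processes is started only if enough idle processors exist to give
# each process its own processor; otherwise the whole group waits.

TOTAL_PROCESSORS = 8
free_processors = TOTAL_PROCESSORS
waiting_groups = []                           # groups that could not be admitted yet

def try_start(group_name, num_processes):
    global free_processors
    if num_processes <= free_processors:
        free_processors -= num_processes      # dedicate one processor per process
        print(f"{group_name}: started on {num_processes} processors "
              f"({free_processors} left)")
    else:
        waiting_groups.append((group_name, num_processes))
        print(f"{group_name}: not enough processors, waiting")

try_start("G1", 4)    # admitted, 4 processors left
try_start("G2", 3)    # admitted, 1 processor left
try_start("G3", 2)    # must wait until a partition of 2 processors frees up
```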
• 3. Gang Scheduling: This scheduling takes the benefits of both time sharing and space
sharing. It is especially helpful in scheduling the processes of the same program or the multiple threads of a
process.
• With time sharing or space sharing alone, these threads may be unable to synchronize their
communication: when one thread sends a message, the other thread may not even have started or may not have
received the message. To overcome this problem, gang scheduling is used. In this algorithm, a group or gang
of threads from the same process is scheduled as a unit, that is, all the threads in the gang are scheduled at the
same time and start to run on the processors.
• The scheduler schedules a gang of threads on a window of processors in round-robin fashion. All the
processors in the window are scheduled synchronously, and all threads of the gang start executing in parallel
on different processors in the window during one time quantum. After the quantum expires, another
gang from the global run queue is scheduled in the window.
• For example, the hypothetical time-sliced schedule sketched below shows gangs of threads running together across a window of processors.
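The sketch below is an illustrative assumption (the gang contents, window size, and round-robin order are made up for the example); it simply shows all threads of one gang being dispatched together, one per processor, in each time quantum:

```python
# Minimal sketch (hypothetical gangs and window size): gang scheduling as
# round-robin over gangs, where every thread of the selected gang runs
# simultaneously, one thread per processor in the window.

from itertools import cycle

PROCESSORS_IN_WINDOW = 4                      # assumed window of 4 CPUs

# Each gang is the set of threads of one process (illustrative names).
gangs = {
    "A": ["A0", "A1", "A2", "A3"],
    "B": ["B0", "B1"],
    "C": ["C0", "C1", "C2"],
}

def schedule(num_quanta):
    order = cycle(gangs)                      # global run queue, round-robin
    for quantum in range(num_quanta):
        gang = next(order)
        threads = gangs[gang][:PROCESSORS_IN_WINDOW]
        # All threads of the gang start together; any leftover CPU stays idle.
        assignment = {f"CPU{i}": t for i, t in enumerate(threads)}
        print(f"quantum {quantum}: gang {gang} -> {assignment}")

schedule(6)
```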
Real Time Scheduling concepts:
• Real time systems: A real-time system is one whose logical correctness depends on both the
correctness of the outputs and their timeliness.
• The tasks in the system must meet their defined deadlines over a specified time interval; otherwise the
purpose of the system is lost or system performance is degraded.
REAL-TIME SCHEDULING: The following are the scheduling criteria in a real-time system:
• The timing constraints of the system must be met.
• Simultaneous access to shared resources and devices by multiple processes must be prevented.
• The cost of context switches when pre-empting must be kept low.
• Scheduling in real-time systems may be performed pre-emptively, non-pre-emptively, statically, or
dynamically. The RTOS should allocate and schedule tasks on processors in
order to ensure that deadlines are met. Some basic terms related to scheduling are periodic, aperiodic,
release time, relative deadline, and dynamic scheduling.
• The rate monotonic (RM) scheduling algorithm is one of the most widely used real-time scheduling algorithms
for a uniprocessor environment. It is a static priority-based pre-emptive scheduling algorithm: the shorter a
task's period, the higher its priority, so the task with the shortest period will always pre-empt the executing task.
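As a minimal sketch (the task set is hypothetical), the following assigns RM priorities by period and applies the classic Liu and Layland utilization bound, a sufficient schedulability test not stated in the text above: a set of n periodic tasks is schedulable under RM if the total CPU utilization U satisfies U <= n(2^(1/n) - 1).

```python
# Minimal sketch (hypothetical task set): rate monotonic priority assignment
# and the Liu & Layland sufficient schedulability test.

tasks = [
    # (name, period, worst-case execution time); implicit deadline = period
    ("T1", 50, 10),
    ("T2", 40, 8),
    ("T3", 30, 9),
]

# RM priority assignment: the shorter the period, the higher the priority
# (priority 1 is the highest here).
for prio, (name, period, wcet) in enumerate(sorted(tasks, key=lambda t: t[1]), start=1):
    print(f"priority {prio}: {name} (period={period}, wcet={wcet})")

# Liu & Layland sufficient test: U <= n * (2^(1/n) - 1).
n = len(tasks)
utilization = sum(wcet / period for _, period, wcet in tasks)
bound = n * (2 ** (1 / n) - 1)                # about 0.780 for n = 3
print(f"U = {utilization:.3f}, bound = {bound:.3f}")
print("schedulable under RM" if utilization <= bound
      else "bound exceeded: not guaranteed by this test")
```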
• Earliest deadline first (EDF), also known as least time to go, is a uniprocessor priority-driven scheduling
algorithm in which the tasks need not be periodic. The scheduling criterion is the absolute deadline of the
tasks: the task whose absolute deadline is earliest is scheduled first, that is, the earlier the deadline, the
higher the priority of the task. It is a dynamic-priority scheduling algorithm, as the deadlines are computed
at run time. Task priorities are not fixed and may change, depending on the closeness of their absolute
deadlines.
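A minimal sketch of the EDF selection rule follows (the job set is hypothetical, and for brevity the sketch dispatches jobs non-preemptively, whereas EDF is normally used preemptively):

```python
# Minimal sketch (hypothetical jobs): earliest deadline first dispatching.
# At each scheduling point the ready job with the earliest absolute deadline
# gets the CPU; priorities are recomputed dynamically as jobs are released.

import heapq

jobs = [
    # (release_time, absolute_deadline, execution_time, name)
    (0, 10, 3, "J1"),
    (1, 6, 2, "J2"),
    (2, 14, 4, "J3"),
]

time = 0
ready = []                                    # min-heap keyed by absolute deadline
pending = sorted(jobs)                        # ordered by release time
while pending or ready:
    # Move all jobs released by the current time into the ready queue.
    while pending and pending[0][0] <= time:
        release, deadline, exec_time, name = pending.pop(0)
        heapq.heappush(ready, (deadline, name, exec_time))
    if not ready:                             # idle until the next release
        time = pending[0][0]
        continue
    deadline, name, exec_time = heapq.heappop(ready)
    print(f"t={time}: run {name} for {exec_time} (deadline {deadline})")
    time += exec_time                         # non-preemptive for simplicity
```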
Comparison of RMS and EDF