Short Notes: Operating Systems
byjusexamprep.com
• System Call- It is a request made by a user program to the operating system to perform any
task.
• Program- It is a set of instructions whose step-by-step execution performs some task. It is a passive entity and resides on the disk.
• Process- Instance of program or a program under execution. It is an active entity which resides
in the main memory.
Operations on Process: -
o Creating process
o Running, scheduling, Suspending, Resuming, Blocking
o Terminating a process
• Process Control Block- A Process Control Block is a data structure that contains all the information related to a process. It is also called the task control block or an entry of the process table.
• Attributes of PCB: -
o Process ID
o Program Counter status
o Process state
o Memory Management data
o CPU scheduling data
o CPU registers
o Address space for process
o Accounting data
o I/O information
o User’s Identification
o List of open files
o List of open devices
o Priority
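The PCB attributes above can be sketched as a simple data structure. This is a minimal illustrative Python sketch, not the layout of any real kernel; the field names are assumptions.

```python
from dataclasses import dataclass, field

# A minimal, illustrative PCB sketch; real kernels store far more state.
@dataclass
class PCB:
    pid: int                                         # Process ID
    state: str = "new"                               # Process state
    program_counter: int = 0                         # Next instruction address
    registers: dict = field(default_factory=dict)    # Saved CPU registers
    priority: int = 0                                # Scheduling priority
    open_files: list = field(default_factory=list)   # List of open files

pcb = PCB(pid=7)
pcb.state = "ready"          # state transition recorded in the PCB
print(pcb.pid, pcb.state)    # 7 ready
```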
• Threads- A thread may be defined as a flow of execution through some process code.
o A thread is also known as a lightweight process.
o Threads improve application’s performance by providing parallelism.
o Threads improve the performance of operating systems by reducing the process overhead.
o Thread is equivalent to a classical process and it is a software approach for software
improvement.
o Every thread belongs to only one process and not a single thread can exist outside a process.
• Types of Threads-
o There are two types of threads, which are:
- User Level Threads
- Kernel Level Threads
- User level threads are implemented by users; kernel level threads are implemented by the OS.
- The OS doesn't recognize user level threads; kernel level threads are recognized by the OS.
- User level threads are designed as dependent threads; kernel level threads are designed as independent threads.
Process vs Thread:
- Process switching needs interaction with the OS; thread switching doesn't require any interaction with the operating system.
- If one process gets blocked, another process cannot run until the first process is unblocked; whereas if one thread gets blocked and waiting, a second thread of the same task can execute.
- Multiple processes without threads need more resources; multi-threaded processes use fewer resources.
- In multiple processes, each process operates independently of the others; a thread can read, write or alter another thread's data.
• Schedulers in OS- Schedulers in OS are special system software. They help in scheduling the
processes in various ways.
Types of Schedulers:
o Long-term Scheduler
- The long-term scheduler maintains a good degree of multiprogramming.
- The long-term scheduler is also known as the Job Scheduler.
o Short-term Scheduler
- The short-term scheduler selects the next process to run, to increase system performance.
- The short-term scheduler is also known as the CPU Scheduler.
o Medium-term Scheduler
- The medium-term scheduler performs swapping.
- The medium-term scheduler reduces the degree of multiprogramming.
• Dispatcher- The dispatcher is the module that gives control of the CPU to the process selected
by the scheduler. This function involves:
o Switching context.
o Switching to user mode.
o Jumping to the proper location in the newly loaded program.
The dispatcher needs to be as fast as possible, as it is run on every context switch. The time
consumed by the dispatcher is known as dispatch latency.
• Process Queues- The operating system maintains a queue for each of the process states, and the PCB of a process is stored in the queue of its current state. When a process moves from one state to another, its PCB is unlinked from the queue of the old state and added to the queue of the new state.
o Queues Maintained by Operating System:
- Job queue
- Ready queue
- Waiting queue
• Scheduling Criteria-
o CPU Utilization: CPU should be as busy as possible.
o Throughput: Number of processes completed per unit time.
Throughput = N / L,
where N = number of processes,
and L = Maximum Completion Time − Minimum Arrival Time
o Arrival Time: Time at which the process arrives in the ready queue.
o Completion Time: Time at which process completes its execution.
o Burst Time: Time required by a process for CPU execution.
o Turn Around Time: Completion Time – Arrival Time
o Waiting Time (W.T): Turnaround Time – Burst Time
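The criteria above can be computed for a concrete schedule. The sketch below assumes non-preemptive FCFS order; the arrival and burst times are hypothetical.

```python
# Sketch: completion, turnaround and waiting times under non-preemptive FCFS
# (arrival and burst times are hypothetical).
def fcfs(processes):
    """processes: list of (arrival_time, burst_time) tuples."""
    time, results = 0, []
    for arrival, burst in sorted(processes):   # serve in arrival order
        time = max(time, arrival) + burst      # completion time
        turnaround = time - arrival            # Turnaround = Completion - Arrival
        waiting = turnaround - burst           # Waiting = Turnaround - Burst
        results.append((time, turnaround, waiting))
    return results

# Three processes arriving at t = 0, 1, 2 with bursts 4, 3, 1:
print(fcfs([(0, 4), (1, 3), (2, 1)]))  # [(4, 4, 0), (7, 6, 3), (8, 6, 5)]
```

Throughput here is N/L = 3/(8 − 0) = 0.375 processes per unit time.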
o Shortest Job First (SJF)-
- Non-preemptive; it executes the job with the smallest burst time first.
- Poor response time.
- Unimplementable, because it is impossible to predict the burst time of processes in advance.
- Used as a benchmark to measure the performance of other scheduling algorithms.
- High throughput.
o Shortest Remaining Time First (SRTF)-
- It executes the job with the smallest remaining burst time first.
- Pre-emptive in nature.
- Jobs with large burst times may starve for the CPU.
- Poor response time, but better than SJF.
- High throughput.
- Unimplementable, because it is impossible to predict the burst time of processes in advance.
• Round Robin
o Round robin scheduling is similar to FCFS scheduling, except that CPU bursts are assigned with
limits called time quantum.
o Always Pre-emptive in nature.
o Best response time.
o Ready queue is treated as a circular queue.
o Mostly used for a time-sharing system.
o With large time quantum, it acts as FIFO
o No priority.
o No starvation.
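A minimal Round Robin simulation, assuming for simplicity that all processes are in the ready queue at t = 0 (the burst times and quantum are hypothetical):

```python
from collections import deque

# Sketch of Round Robin: each process runs for at most `quantum` units per turn;
# the ready queue is treated as a circular queue.
def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))        # (pid, remaining burst time)
    time, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(remaining, quantum)
        time += run
        if remaining > run:
            queue.append((pid, remaining - run))  # re-queue unfinished process
        else:
            completion[pid] = time                # record completion time
    return completion

print(round_robin([5, 3, 1], quantum=2))  # {2: 5, 1: 8, 0: 9}
```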
• Priority Scheduling
o Every process is assigned a number, and this number is the priority of that process.
o Processes with lower priority may starve for the CPU.
o Response time is unpredictable.
o Mostly used in real-time operating systems.
o Can be both preemptive and non-preemptive in nature.
• Highest Response Ratio Next (HRRN)
o A response ratio is calculated for each of the available jobs, and the job with the highest response ratio is given priority over the others.
o Response Ratio = (W + S) / S, where W → Waiting Time, S → Service Time or Burst Time
o Most optimal scheduling algorithm.
o Non-preemptive in nature.
o No starvation.
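Picking the next job by response ratio can be sketched as follows; the job names, arrival times and burst times are hypothetical.

```python
# Sketch: select the next job under Highest Response Ratio Next (HRRN).
# Response Ratio = (W + S) / S, where W = waiting time so far, S = burst time.
def hrrn_pick(current_time, jobs):
    """jobs: list of (name, arrival_time, burst_time) that have already arrived."""
    def ratio(job):
        _, arrival, burst = job
        waiting = current_time - arrival
        return (waiting + burst) / burst
    return max(jobs, key=ratio)[0]

# At t = 10: A has waited 10 with burst 5 (ratio 3.0),
#            B has waited 4 with burst 1 (ratio 5.0) -> B is picked.
print(hrrn_pick(10, [("A", 0, 5), ("B", 6, 1)]))  # B
```

Note how the ratio grows with waiting time, which is why long-waiting jobs cannot starve.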
• Deadlock- A deadlock is a situation where each of a set of processes waits for a resource that is assigned to another process in the set. None of the processes gets executed, since the resource each one needs is held by another process that is itself waiting for some other resource to be released.
A minimum of two processes is required for a deadlock to occur.
• Deadlock Prevention: Design the system so that at least one of the necessary conditions for deadlock can never hold.
o General approaches:
- No sharing
- No waiting
- Preempt resources
- Allocate all resources at the beginning.
- Every process uses the same order for accessing resources.
o To violate any of the four necessary conditions:
- Mutual exclusion: Cannot be violated, as this condition is hardware dependent.
- Hold & Wait: Conservative approach; do not hold; wait timeout.
- No Preemption: Violate it by forcefully preempting resources.
- Circular wait: Give a unique number to each resource and request resources in increasing (or decreasing) order of number. If a process needs a lower-numbered resource, it must release all higher-numbered resources it holds and then request all the required resources again.
• Deadlock Avoidance: Avoid occurrence of deadlock in the system by various methods. Checks
whether the system is in a safe state or not.
o A system is said to be safe if and only if the needs of all the processes could be satisfied with
the available resources in some order. The order is known as a safe sequence.
o There may exist multiple safe sequences.
o Single instance of resources using resource allocation graph.
o Banker’s Algorithm or Safety Algorithm is used to allocate multiple instances of resources to
the processes.
o Need = Max − Allocation
o A system is deadlock free if r ≥ p(n − 1) + 1, where r = number of available resources, n = maximum number of resources needed by each process, p = number of processes.
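The safety check of the Banker's Algorithm can be sketched as below: repeatedly find a process whose Need (= Max − Allocation) fits within the currently available Work; when it finishes, it releases its allocation. The Available/Max/Allocation matrices are a hypothetical example state.

```python
# Sketch of the Banker's safety algorithm (matrices are hypothetical).
def is_safe(available, max_demand, allocation):
    n = len(max_demand)
    need = [[m - a for m, a in zip(max_demand[i], allocation[i])] for i in range(n)]
    work, finish, order = list(available), [False] * n, []
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]  # release resources
                finish[i], changed = True, True
                order.append(i)                                      # safe sequence
    return all(finish), order

safe, seq = is_safe([3, 3, 2],
                    [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
                    [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

As the notes say, multiple safe sequences may exist; this sketch returns the first one found by its scan order.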
• Deadlock detection and Recovery: Do not check for any safety and whenever any process
requests some resources then allocate those resources immediately. If a deadlock occurs then
identify it and recover it by suitable methods.
o Deadlock detection involves:
- Scan the RAG (Resource Allocation Graph).
- Detect a cycle in the graph.
- Recover from the deadlock.
- The detection algorithm is typically activated when CPU utilization decreases or drops to zero, since most of the processes are blocked.
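Scanning the RAG for a cycle can be sketched as a depth-first search. In the hypothetical graph below every resource has a single instance, so a cycle implies deadlock.

```python
# Sketch: deadlock detection as DFS cycle detection on a resource-allocation
# graph (adjacency lists combine request and assignment edges; names are hypothetical).
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            state = color.get(v, WHITE)
            if state == GRAY:              # back edge -> cycle found
                return True
            if state == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)

# P1 -> R1 -> P2 -> R2 -> P1 forms a cycle (deadlock with single-instance resources):
print(has_cycle({"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}))  # True
```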
o Deadlock recovery involves:
- Abort all deadlocked processes.
- Abort one process at a time and check, repeat this process until deadlock is removed.
- Rollback to safe state and restart from that state.
- Preempt some resources from processes and give these resources to other processes
until the deadlock cycle is broken.
• Deadlock Ignorance: This involves completely ignoring the concept of deadlock and pretending that deadlocks never occur in the system. It uses the Ostrich approach.
• Inter-process Communication-
o Independent processes- Two or more processes will execute autonomously without affecting
the output or order of execution of other processes.
- No states will be shared with other processes
- Order of scheduling doesn’t matter.
- Output of a process is independent of order of execution of other processes.
o Co-operating processes- Two or more processes are said to be co-operative if and only if they
get affected by or affect the execution of other processes.
- These processes require an IPC mechanism that will allow them to exchange information
and data like messages, shared memory, shared variables, etc.
- Shared resources, speed-up and modularity are the advantages of co-operating
processes.
• Concurrency
o Able to run multiple applications at the same time.
o Allows better resource utilization
o Better performance
o Better average response time
• Critical Section- The critical section is a piece of code in a program where a process accesses shared resources during its execution; at most one process should execute its critical section at a time.
• Race Condition- A race condition exists when the final output depends entirely on the order in which the instructions of the processes are executed.
• Operations on Semaphore
o Wait operation
- Also known as down, or P operation.
- It decreases semaphore’s value by 1.
- It is used in the entry part of the critical section.
- If a resource is not free, then block the process.
o Signal operation
- Also known as UP, or V operation.
- It increases semaphore’s value by 1.
- It is used in the exit part of the critical section to release the acquired lock.
• Types of semaphore-
o Binary semaphore- Its value is limited to 0 and 1.
- It may involve busy waiting, known as the spin lock issue.
- It works on a single instance of a resource.
- Its wait operation should succeed for only a single process at a time.
o Counting Semaphore- It can use any integer.
- Its negative value indicates the number of requests in a waiting queue.
- Its positive value indicates the number of instances of resource R currently available for
allocation, and there is no request in the waiting queue.
- While using multiple semaphores, following proper ordering is highly important.
o Strong semaphore- It is that semaphore whose definition includes policy of FIFO queue.
o Weak semaphore- A semaphore that doesn’t specify the order in which processes are removed
from the queue.
• Classical Synchronization Problems:
o Producer-Consumer problem
o Dining Philosophers problem
o Readers-Writers problem
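The Producer-Consumer problem can be sketched with the wait/signal operations above: a counting semaphore `empty` tracks free buffer slots, `full` tracks filled slots, and a mutex guards the buffer. The buffer size and item count are arbitrary.

```python
import threading
from collections import deque

# Sketch of the Producer-Consumer problem with counting semaphores.
BUF_SIZE = 3
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)   # counts free slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # guards the shared buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty): block if buffer is full
        with mutex:
            buffer.append(item)
        full.release()                  # signal(full): one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()                  # wait(full): block if buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With a single producer and a single consumer sharing a FIFO buffer, items are consumed in production order.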
• Synchronization Mechanism
o Lock variable
- A software mechanism implemented in user mode.
- It is a busy-waiting solution.
- It is a completely failed mechanism because it cannot guarantee the basic criterion of synchronization, i.e., mutual exclusion.
o Test and Set Lock
- Uses test & Set instruction to maintain synchronization between processes executing
concurrently.
- Ensures mutual exclusion.
- Deadlock free.
- Does not guarantee bounded waiting.
- Some processes may suffer from starvation.
- Suffers from spin lock.
- It is architecture dependent because it needs hardware support for the test-and-set instruction.
- It is a busy-waiting solution.
o Turn variable
- It uses a variable called turn to provide the synchronization among processes.
- Ensures mutual exclusion.
- Follows a strict alternation approach, so progress is not guaranteed.
- Ensures bounded waiting because processes execute one by one and there is a
guarantee that each process will get a chance to execute.
- Starvation doesn’t exist.
- It is architecture neutral because it doesn't need any special hardware instruction.
- Deadlock free.
- It is a busy-waiting solution.
o Interested Variable Approach
- Each process sets an interest flag when it wants to enter the critical section, so processes do not have to enter in a fixed alternate order.
- Ensures mutual exclusion.
- Doesn't follow the strict alternation approach.
- It is architecture neutral.
- It is a busy-waiting solution.
- Suffers from deadlock (both processes can set their flags and wait for each other), so progress is not guaranteed.
- No guarantee of bounded waiting.
o Peterson's Solution
- It is a synchronization mechanism implemented in user mode; it uses two variables: turn and interest (flag).
- Satisfies mutual exclusion.
- Satisfies progress.
- Bounded waiting is also guaranteed.
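Peterson's solution for two threads can be sketched as below. CPython's GIL provides the atomic loads and stores the algorithm assumes; the iteration counts are arbitrary.

```python
import sys
import threading

# Sketch of Peterson's solution for two threads (ids 0 and 1), using a
# per-thread interest flag and a shared turn variable.
sys.setswitchinterval(0.0005)    # switch threads often so spin waits stay short

interest = [False, False]
turn = 0
count = 0                        # shared variable protected by the critical section

def worker(me, iterations):
    global turn, count
    other = 1 - me
    for _ in range(iterations):
        interest[me] = True      # entry section: declare interest
        turn = other             # give way to the other thread
        while interest[other] and turn == other:
            pass                 # busy wait (spin lock)
        count += 1               # critical section
        interest[me] = False     # exit section

t0 = threading.Thread(target=worker, args=(0, 500))
t1 = threading.Thread(target=worker, args=(1, 500))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)  # 1000 -- no increment is lost
```

Under any interleaving of the individual reads and writes, at most one thread is inside the critical section at a time, so the unlocked `count += 1` never loses an update.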
• Spin Lock and Busy Waiting- A spinlock is a lock that causes a thread trying to acquire it to wait in a loop, repeatedly checking whether the lock is available. Since the thread remains active but performs no useful work, using such a lock is a form of busy waiting.
• Disk - In computers the secondary storage space is typically formed of stacked up magnetic disks
one on the top of another. One such single unit of disk is known as a platter.
Data -> Sector -> Track -> Surface -> Platter -> Disk
o Transfer rate: Transfer rate is defined as the rate at which data is transferred from disk to
the computer.
Transfer rate = Number of heads × Capacity of one track × Number of rotations per second
o Random access time: Random access time is the sum of the seek time and rotational latency.
o Seek Time: The time taken by the arm to locate the desired track.
o Rotational Latency: The time taken by the disk to rotate so that the required sector comes under the read/write head.
o Transfer time: Time taken to transfer data.
o Queue time: If the buffer is full of requests and more requests keep coming, the incoming requests are placed in a queue; the time a request waits in this queue is known as queue time.
o Disk Access Time = Seek time + Rotational Latency + Transfer time + Controller time +
Queue time
• Disk Scheduling- Disk scheduling is a mechanism used by the operating system to schedule
multiple disk access requests.
• SCAN algorithm- It scans all the cylinders of the disk back and forth.
o Also known as the elevator algorithm.
o No starvation.
o Better than FCFS.
o Complex to implement
o Easy to understand
o Gives low variance in waiting time and response time.
o The head has to move till the end of the disk even if there are no disk requests to be serviced.
This leads to unnecessary head movement and results in increased total seek time.
• LOOK algorithm- LOOK Algorithm is another improved version of the SCAN Disk Scheduling
Algorithm. This algorithm is based on the concept that the head pointer starts from the first disk
request at one end and moves towards the last request at the other end of the disk servicing all
the requests in between.
o No starvation
o Better performance than SCAN algorithm.
o Gives low variance in waiting time and response time.
o Cylinders just visited by the head must wait for a long time.
• C-SCAN (Circular-SCAN) algorithm- Circular-SCAN is an improved version of the SCAN Disk Scheduling Algorithm. The head starts from one end and moves towards the other end of the disk, servicing all the requests in between. Once the head reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip, and then services requests in the same direction again.
o Better response time
o Uniform waiting time
o More seek movements as compared to the SCAN algorithm
• C-LOOK (Circular-LOOK) algorithm- C-LOOK is based on the concept that the head starts from the first request at one end and moves towards the last request at the other end, servicing all the requests in between. After servicing the last request, the head jumps back to the first request at the starting end without servicing any requests in between, and the same process continues until all requests are serviced.
o Reduces waiting time.
o Have better performance than LOOK Algorithm.
o No starvation exists.
o It provides low variance in waiting time and response time.
o There is an extra overhead of determining the end requests.
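Total head movement under LOOK can be computed as below; the initial head position and the request queue (cylinder numbers) are hypothetical, with the head assumed to move toward larger cylinders first.

```python
# Sketch: total head movement for the LOOK algorithm (numbers are hypothetical).
def look(head, requests):
    right = sorted(r for r in requests if r >= head)             # ahead of the head
    left = sorted((r for r in requests if r < head), reverse=True)  # behind, nearest first
    movement, pos = 0, head
    for r in right + left:          # sweep right, then reverse and sweep left
        movement += abs(r - pos)
        pos = r
    return movement

print(look(50, [82, 170, 43, 140, 24, 16, 190]))  # 314
```

The same skeleton adapts to C-LOOK by servicing `right`, jumping to the smallest pending cylinder, and then servicing the left-side requests in increasing order.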
• Memory Management
o Dynamic Loading- It refers to loading a library into memory during load or run time. All routines are kept in relocatable format, and a routine is not loaded until it is called.
o Dynamic Linking- It refers to the linking that is done during load and runtime and not when
the .exe file is created. Dynamic linking needs the operating system’s support to perform its
operation. Library which links dynamically is known as Dynamic Link Library (DLL).
o Static Linking- All modules of the program are stored in memory before they are called.
• Partitions- The memory of a computer system is divided into small sections (partitions) so that programs can use them.
o Functions can be used for partitions:
- Allocation
- Deallocation
- Protection (by using limit registers)
- Free space management
• Goals of Partition
o Minimum memory wastage (fragmentation).
o Executing larger programs in smaller memory area
• Contiguous Memory Allocation- Two techniques support the above goals of partitioning: overlays and virtual memory. These increase the degree of multiprogramming, which eventually increases CPU utilization and throughput.
• Overlays-
o It is a very old memory management technique that keeps only the currently needed portion of a single process in RAM at a time.
o It is best suited for sequential processing, but not for multitasking and multiprogramming
environments.
o Overlaying is possible when the programs are divided into modules and each module is
independent.
• Buddy System-
o The ‘buddy system’ allocates memory from a fixed size segment consisting of physically
contiguous pages.
o Memory is allocated from this segment using a power-of-2-allocator, which satisfies the
requests in units sized as a power of 2.
o An advantage of the buddy system is how quickly adjacent buddies can be combined into larger segments, using a technique called coalescing.
• Allocation Policies
o First Fit- Allocate first available free space.
o Best Fit- Allocate best available space.
o Worst Fit- Allocate largest available space.
o Next Fit- Similar to first fit, but the search starts from the location of the last allocation.
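First, best and worst fit can be sketched over a list of free hole sizes; the hole sizes and the request below are hypothetical.

```python
# Sketch of first-fit, best-fit and worst-fit hole selection
# (hole sizes and the request size are hypothetical).
def allocate(holes, request, policy):
    candidates = [(i, h) for i, h in enumerate(holes) if h >= request]
    if not candidates:
        return None                                    # no hole is large enough
    if policy == "first":
        return candidates[0][0]                        # first hole that fits
    if policy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest hole that fits
    if policy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest available hole
    raise ValueError(policy)

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1 (the 500 hole)
print(allocate(holes, 212, "best"))   # 3 (the 300 hole)
print(allocate(holes, 212, "worst"))  # 4 (the 600 hole)
```

Next fit is omitted here because it needs extra state (the index of the last allocation) carried between calls.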
• Internal Fragmentation- In fixed-size partitioning, if the process size is smaller than the partition size, unused free space is left within the partition. These leftover pieces are known as holes/memory fragments/unused free space/wasted memory, and their sum is known as internal fragmentation.
• External Fragmentation- In variable-size partitioning, processes are initially allocated contiguously, but as processes leave, the freed partitions may not fit new processes. These unused partitions are known as external fragmentation.
o Solutions of external fragmentation: -
- Compaction
- Non-contiguous memory allocation
• Compaction: Compaction is the mechanism of collecting free memory chunks together to form a larger chunk of memory that can be allocated to processes. In memory management, swapping creates multiple fragments in memory because of processes moving in and out; compaction combines all the empty spaces into one large free space usable by processes.
o It increases CPU overhead.
o Programs need dynamic relocation.
• Non-Contiguous Allocation- The size of the logical address space (LAS) generated by the CPU can be greater than or equal to the size of the physical address space (PAS) of main memory. This makes multiprogramming more efficient and allows larger programs to run in smaller memory areas.
• Paging: It is a memory management technique that removes the need for contiguous allocation of physical memory.
o The page offset depends on the page size.
o The page number depends on the number of pages.
• Page table-
o Page Table may be defined as a data structure used to store the mapping between logical
addresses and physical addresses in the virtual memory system.
o Logical addresses are generated by the CPU for the pages of a process; processes generally work with logical addresses.
o Physical addresses are the actual frame addresses present in main memory; the hardware, more specifically the RAM subsystem, works with physical addresses.
o Main memory stores the Page table.
o Number of pages of a process is equal to the number of Page table entries.
o Page Table Base Register (PTBR) stores the page table's base address.
o There is an independent page table for each process.
o A page table entry may contain the following bits:
- Present/Absent (valid/invalid) bit specifies whether the page is present in primary memory or not.
- Protection (read/write) bit specifies the permission to perform read/write operations on the page.
- Reference bit specifies whether the page has been referenced in the latest clock cycle or not.
- Caching bit is used to enable or disable caching of the page.
- Dirty bit specifies whether the page has been modified or not.
• Drawbacks of Paging-
o For a large process, the page table itself becomes very large and wastes main memory.
o Reading a single word from main memory takes more time, because the page table must be accessed first.
o How to decrease the page table size- The page table size can be reduced by increasing the
page size, but this will cause internal fragmentation and page wastage too.
o Another method is to use multilevel paging, but this will increase the effective access time, so
this is not a practical method.
o How to decrease the effective access time- The page table could be stored in registers so that it can be accessed quickly, but registers are costly and far too small compared to the size of a page table, so this is also not practical.
o To overcome these drawbacks of paging, we need a memory that is cheaper than registers and faster than main memory, dedicated to caching page table entries so that the CPU does not access the page table in main memory again and again: the Translation Lookaside Buffer (TLB).
• Performance of Paging-
o Without TLB
- Main memory access time = m
- Effective memory access time (EMAT) = 2m
o With TLB
- TLB access time = c (where c ≪ m)
- A TLB hit occurs when the desired entry is found in the TLB; the CPU can then directly access the actual address in main memory. TLB hit ratio = Number of hits / Total references.
o Effective memory access time (EMAT) = x(c + m) + (1 − x)(c + 2m), where
- x → TLB hit ratio,
- c → time taken to access the TLB,
- m → time taken to access main memory.
- In general, for k-level paging, EMAT = x(c + m) + (1 − x)(c + (k + 1)m); with single-level paging, k = 1 gives the formula above.
o TLB miss- If the entry is not in TLB then it will be said to be a TLB miss, in this situation then
the CPU has to access the page table from the main memory and then it will access the actual
frame from the main memory.
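The single-level EMAT formula can be evaluated directly; the TLB time, memory time and hit ratio below are hypothetical.

```python
# Sketch: effective memory access time with a TLB, single-level paging.
# EMAT = x(c + m) + (1 - x)(c + 2m)
def emat(hit_ratio, tlb_time, mem_time):
    hit_cost = tlb_time + mem_time            # TLB hit: one memory access
    miss_cost = tlb_time + 2 * mem_time       # TLB miss: page table + actual word
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

# c = 10 ns TLB, m = 100 ns memory, 80% hit ratio -> about 130 ns
# (versus 2m = 200 ns with no TLB at all).
print(emat(0.8, 10, 100))
```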
• Virtual memory- Virtual Memory is a memory space where we can store large programs in the
form of pages while their execution and only important pages or portions of processes are loaded
into the main memory. It is a very useful technique as large virtual memory is provided for user
programs even if the user's system has a very small physical memory.
o Advantages:
- The degree of multiprogramming is increased.
- We can run large programs in the system, as the virtual space is huge compared to physical memory.
- No need to purchase more RAM.
- More physical memory is available, since programs occupy very little space in actual physical memory.
- Less I/O is required, which leads to faster and easier swapping of processes.
o Disadvantages:
- The system becomes slower, as page swapping takes time.
- Switching between different applications requires more time.
- Usable hard disk space shrinks, since part of it backs the virtual memory.
o Virtual memory is implemented by demand paging.
• Demand Paging- Keep all pages of a process in secondary memory until they are required; in other words, do not load any page into main memory unless and until it is needed. When a page is referenced for the first time, a page fault occurs and the page is brought in from secondary memory.
Paging vs Segmentation:
- Paging divides a program into fixed-size pages; segmentation divides a program into variable-size segments.
- In paging, the logical address is divided into a page number and a page offset; in segmentation, into a segment number and a segment offset.
- A page table maintains the page information; a segment table maintains the segment information.
- A page table entry holds the frame number and some flag bits describing the page; a segment table entry holds the base address of the segment and some protection bits for the segment.
• Page Replacement- Page replacement is the process of swapping an existing page out of a main memory frame and replacing it with the desired page. It is needed when all the page frames of main memory are already occupied, so a page must be replaced to create space for the desired page. Page replacement algorithms select which page should be swapped out of main memory.
o FIFO Page Replacement Algorithm
- This algorithm swaps out the oldest page and is implemented by keeping track of all the pages in a queue.
o LRU Page Replacement Algorithm
- This algorithm works on the concept of "Least Recently Used".
- This algorithm swaps out the page that has not been referred by the CPU for the longest
time.
o Optimal Page Replacement Algorithm
- Optimal Page Replacement algorithm swaps out the page that will not be referred by
the CPU in future for the longest time span.
- This algorithm cannot be implemented practically, because it is impossible to predict
which pages will be used in the future
- However, it is the best-known algorithm and a benchmark for other page replacement
algorithms and this algorithm gives the least number of page misses.
o Random Page Replacement Algorithm
- This algorithm randomly swaps out any page.
- So, this algorithm can behave like any other page replacement algorithm like FIFO,
LIFO, LRU, Optimal etc.
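FIFO and LRU page-fault counts can be simulated as below; the reference string and frame count are illustrative.

```python
from collections import OrderedDict, deque

# Sketch: counting page faults for FIFO and LRU on a reference string.
def fifo_faults(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())   # evict the oldest page
            mem.add(p); queue.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                 # p is now most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict the least recently used page
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 10 9
```

On this string LRU faults slightly less often than FIFO; Optimal (not implementable online) would fault fewer times still.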
• Thrashing- If page faults and swapping of pages happen very frequently, the OS spends more time swapping pages than doing useful work. This state is known as thrashing, and it reduces CPU utilization.
o Thrashing occurs when memory is overcommitted and pages are tossed out while still needed; CPU utilization is very low when the page fault rate is high.
• Inverted Paging- Instead of a separate page table per process, inverted paging uses a single global page table. It decreases the memory consumed by page tables in main memory, but it increases search time, which is why this concept is rarely used.
• File System
o File system structure- A file system is used to allow the data to be stored, located, and retrieved
easily. The file system itself is generally composed of many different levels, which are:
- Application Programs
- Logical File system
- File organization module
- Basic File System
- I/O Control
- Devices
o Ways to access a file-
- Sequential access- Bytes are accessed in order.
- Random access- Bytes are accessed in any order.
o A file system contains following major components -
- Disk management organizes disk blocks into files.
- Naming provides file names and directories to users, instead of track and sector
numbers.
- Protection keeps information secure from other users.
- Reliability protects against information loss due to system failure.
• Directory Structure
o Single-level directory
o Two-level directory structure
o Tree structure directories
o Acyclic graph directories
o General graph directory
• File- A file is a collection of logically related records. From a programmer's perspective, a file is a data structure that has a representation, definition, structure, operations, and attributes.
o Operations on File: Create, Open, read, write, modify, seek, copy, move, close, rename,
delete, etc.
o Attributes of Files: name of file, size, owner, path, location, r/w information, last modification
date, file creation date, etc.
• Directory- A directory is also a file, one that contains information about other files. It holds metadata about files.
• File Control Block (FCB)- It contains information about the file, including ownership, permission,
location, etc.
• File systems used by operating systems:
o UNIX: UNIX File System (inode, or indexed-node, block)
o Windows: FAT, FAT32, NTFS
o Linux: Extended File System (ext)
• Free-Space Management
o Bit map (Bit vector)- Each disk block is represented by one bit.
- Relatively simple and efficient for finding the first free block or consecutive free blocks.
- To determine the first free block, find the first non-zero word and its first 1 bit:
Block number = (Number of bits per word × Number of zero-value words) + offset of first 1 bit
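The block-number computation can be sketched as a bit-map scan, assuming (as is conventional) that a 1 bit marks a free block and that bits are numbered from the most significant end of each word; word values below are hypothetical.

```python
# Sketch: first free block from a bit map, 1 bit = free, MSB-first numbering.
def first_free_block(words, bits_per_word=8):
    for i, word in enumerate(words):
        if word != 0:                          # this word contains a free (1) bit
            for offset in range(bits_per_word):
                if word & (1 << (bits_per_word - 1 - offset)):
                    # block number = bits_per_word * zero_words + offset of first 1 bit
                    return i * bits_per_word + offset
    return None                                # no free block at all

# Word 0 is all zeros; word 1 is 0b00010000, so the first 1 bit is at offset 3:
print(first_free_block([0b00000000, 0b00010000]))  # 8 * 1 + 3 = 11
```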
o Linked List- With linked allocation, the existing links form a free list: all free disk blocks are linked together. Space is not wasted, but it can be difficult to allocate contiguous blocks, and list traversal is inefficient.
o Grouping- The first free block stores the addresses of n free blocks. The first (n − 1) of these blocks are actually free; the last (nth) block contains the addresses of another n free blocks, and so on. The addresses of a large number of free blocks can thus be found quickly.
o Counting- The Free-Space list can contain pairs (block number, count). Keep the address of
the first free block number and the number n of free contiguous blocks that follow. Several
contiguous blocks may be allocated or freed.