OS in One Video
2. Multithreading:
Multithreading involves executing multiple threads within a single process. A
thread is a lightweight unit of execution that can run concurrently with other
threads within the same process. Threads share the same memory space
and resources, such as file handles and network connections. Multithreading
allows for parallel execution within a process, enabling better utilization of
system resources and potentially improving performance by dividing tasks
into smaller units of work that can be executed concurrently.
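As a minimal sketch of this idea (in Python; the worker function, thread count, and names are made up for illustration), two threads executing concurrently inside one process:

```python
import threading

def worker(name):
    # Each thread executes this function concurrently with the others,
    # inside the same process and sharing its memory space.
    print(f"{name} running")

# Spawn two threads within one process (names are illustrative).
threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both threads to finish before the process exits
print("all threads done")
```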
3. Multiprogramming:
Multiprogramming is a technique where multiple programs are loaded into
main memory at the same time. When the currently running program has to
wait, for example for an I/O operation, the CPU switches to another resident
program, which keeps the CPU busy and improves overall utilization.
4. Multitasking:
Multitasking is a technique that allows multiple tasks or processes to run
concurrently on a single CPU. The CPU time is divided among the tasks,
giving the illusion of parallel execution. The operating system switches
between tasks rapidly, giving each task a time slice or quantum to execute.
Multitasking is commonly used in modern operating systems to provide
responsiveness and the ability to run multiple applications simultaneously.
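A toy simulation of the time slicing described above (task names, burst times, and the quantum are invented for illustration): each task runs for at most one quantum before the scheduler switches to the next.

```python
from collections import deque

# Toy round-robin multitasking: each task runs for at most one quantum
# before the "CPU" switches to the next. Names and times are invented.
quantum = 2
ready = deque([("A", 5), ("B", 3), ("C", 4)])  # (task, remaining time units)

while ready:
    task, remaining = ready.popleft()
    slice_ = min(quantum, remaining)
    print(f"run {task} for {slice_} unit(s)")
    if remaining > slice_:
        ready.append((task, remaining - slice_))  # unfinished: back of the queue
    else:
        print(f"{task} finished")
```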
Process:
A process is an instance of a program in execution. When a program is
loaded into memory and executed, it becomes a process. A process is an
independent entity with its own memory space, resources, and execution
context. It has its own program counter, stack, and variables. Processes are
managed by the operating system, and each process runs in its own
protected memory space. Processes can be concurrent and communicate
with each other through inter-process communication mechanisms.
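A minimal sketch of process creation (using Python's multiprocessing module; the child function is illustrative), where the child runs in its own address space with its own process ID:

```python
from multiprocessing import Process
import os

def child():
    # Runs in a separate process, with its own address space and PID.
    print(f"child pid={os.getpid()}, parent pid={os.getppid()}")

if __name__ == "__main__":
    p = Process(target=child)
    p.start()   # the OS creates a new process to run child()
    p.join()    # wait for the child process to terminate
    print(f"parent pid={os.getpid()} done")
```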
Thread:
A thread is a unit of execution within a process. It represents a sequence of
instructions that can be scheduled and executed independently. Threads
share the same memory space and resources within a process. Multiple
threads within a process can run concurrently, allowing for parallel execution
of tasks. Threads within the same process can communicate and share data
more easily compared to inter-process communication. However, each
thread has its own program counter and stack.
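To see the contrast with processes, here is a small sketch (the shared list and thread names are illustrative) in which two threads write to the same object in their process's memory, with no inter-process communication required:

```python
import threading

shared = []  # one object in the process's memory, visible to every thread

def append_items(tag):
    for i in range(3):
        shared.append((tag, i))  # both threads write to the same list

t1 = threading.Thread(target=append_items, args=("t1",))
t2 = threading.Thread(target=append_items, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # entries from both threads: no IPC was needed to share data
```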
4. Priority Scheduling:
Priority scheduling assigns a priority value to each process, and the CPU is
allocated to the process with the highest priority. It can be either preemptive
or non-preemptive. In preemptive priority scheduling, if a higher-priority
process arrives, the currently running process may be preempted. In non-
preemptive priority scheduling, the process continues executing until it
completes or voluntarily gives up the CPU. Priority scheduling can suffer from
starvation if a lower-priority process never gets a chance to execute.
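A toy non-preemptive priority scheduler (process names, priorities, and burst times are made up; lower number means higher priority here): the ready processes are simply run in priority order, each to completion.

```python
# Toy non-preemptive priority scheduling: run the highest-priority ready
# process to completion before picking the next. Lower number = higher
# priority in this sketch; all values are invented.
processes = [("P1", 2, 4), ("P2", 1, 3), ("P3", 3, 2)]  # (name, priority, burst)

t = 0
for name, priority, burst in sorted(processes, key=lambda p: p[1]):
    print(f"t={t}: run {name} (priority {priority}) for {burst} unit(s)")
    t += burst
print(f"all processes done at t={t}")
```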
7. Process Synchronization
Process synchronization is like a traffic signal that helps regulate the flow of
vehicles at an intersection. In the context of computing, it refers to techniques
and mechanisms used to coordinate the execution of processes or threads so
that they can work together harmoniously.
Imagine multiple processes or threads working on different tasks simultaneously.
Process synchronization ensures that they cooperate and communicate
effectively to avoid conflicts and ensure proper order of execution. It helps
prevent issues like race conditions, data inconsistencies, or deadlocks that can
arise when multiple processes or threads access shared resources
simultaneously.
Here are the key requirements of synchronization mechanisms:
Mutual Exclusion
Only one process or thread can be inside its critical section at a time (see the
sketch after this list).
Progress
If no process is in the critical section, a process that wants to enter must not
be kept waiting indefinitely by processes that are not competing for it.
Bounded Waiting
There is a limit on how long a process can wait to enter its critical section
once it has requested entry.
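As a minimal sketch of mutual exclusion (the counter and iteration counts are illustrative), a lock turns a racy read-modify-write into a safe one:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # mutual exclusion: one thread in here at a time
            counter += 1  # the read-modify-write can no longer interleave

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 every run; without the lock, updates can be lost
```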
9. Deadlock
A deadlock is a situation where each of a set of processes waits for a
resource that is assigned to another process in the set. None of the
processes can proceed, because the resource each one needs is held by
some other process that is itself waiting for a resource to be released.
A deadlock can arise only if the following conditions hold simultaneously:
Mutual Exclusion
A resource can only be shared in a mutually exclusive manner. It implies that
only one process can use the resource at a time; any other process that
requests it must wait until it is released.
Hold and Wait
A process holds at least one resource while waiting to acquire additional
resources that are currently held by other processes.
No Preemption
A resource cannot be forcibly taken away from the process holding it; it is
released only voluntarily, once the process is finished with it.
Circular Wait
The processes wait for resources in a cyclic manner, so that the last process
is waiting for a resource held by the first process (a minimal demonstration
follows this list).
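A minimal demonstration of circular wait (lock names and sleep duration are arbitrary): two threads acquire two locks in opposite order, so each ends up waiting for the other. The daemon flag and join timeout just let the script terminate and report the hang.

```python
import threading, time

a, b = threading.Lock(), threading.Lock()

def t1():
    with a:
        time.sleep(0.1)  # give the other thread time to grab lock b
        with b:          # blocks: b is held by t2
            print("t1 done")

def t2():
    with b:
        time.sleep(0.1)
        with a:          # blocks: a is held by t1 -> circular wait
            print("t2 done")

x = threading.Thread(target=t1, daemon=True)
y = threading.Thread(target=t2, daemon=True)
x.start(); y.start()
x.join(timeout=1); y.join(timeout=1)
print("deadlocked:", x.is_alive() and y.is_alive())  # True: neither finished
# Acquiring the locks in one global order (always a before b) would break
# the circular-wait condition and prevent this deadlock.
```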
1. Fixed Partitioning:
In fixed partitioning, memory is divided into fixed-sized partitions or blocks,
and each partition is assigned to a specific process or task. The system
allocates a predetermined amount of memory to each partition, which
remains fixed throughout the execution.
2. Dynamic Partitioning:
Dynamic partitioning, also known as variable partitioning, addresses the
limitation of fixed partitioning by allowing memory to be allocated and
deallocated dynamically based on the size requirements of processes.
Dynamic partitioning provides better memory utilization compared to fixed
partitioning, as memory can be allocated based on actual requirements.
However, managing fragmentation and efficiently allocating and deallocating
memory can be more complex.
1. First Fit: The first-fit algorithm scans memory from the beginning and
allocates the first free block that is large enough for the process. It is fast,
but it tends to leave small fragments near the start of memory. (All three
strategies are compared in the sketch after this list.)
2. Best Fit: The best-fit algorithm searches for the smallest available memory
block that is large enough to accommodate the process. It aims to minimize
leftover fragments by choosing the tightest-fitting block. This can lead to
better overall memory utilization, but it may involve more time-consuming
searches.
3. Worst Fit: The worst-fit algorithm allocates the largest available memory
block to the process. This approach intentionally keeps larger fragments to
accommodate potential future larger processes. While it may seem
counterintuitive, it can help reduce fragmentation caused by small processes
and improve overall memory utilization.
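A small sketch comparing the three placement strategies (the free-block sizes and request size are invented): each strategy picks a different block for the same request.

```python
def allocate(blocks, size, strategy):
    """Pick the index of a free block for a request of `size`.
    `blocks` is a toy free list of block sizes; returns None if nothing fits."""
    candidates = [i for i, b in enumerate(blocks) if b >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0]                             # first block that fits
    if strategy == "best":
        return min(candidates, key=lambda i: blocks[i])  # smallest that fits
    return max(candidates, key=lambda i: blocks[i])      # worst: largest block

free = [100, 500, 200, 300, 600]  # invented free-block sizes
for s in ("first", "best", "worst"):
    print(s, "fit chooses block", allocate(free, 212, s))
```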
13. Paging
In operating systems, paging is a storage mechanism used to retrieve
processes from secondary storage into main memory in the form of pages.
The main idea behind paging is to divide each process into pages; main
memory is likewise divided into equal-sized frames.
Each page is stored in one frame of memory. Because any page can be
placed in any free frame, the pages of a process do not need to occupy
contiguous locations in memory.
Pages of a process are brought into main memory only when they are
required; otherwise they reside in secondary storage.
Instead of loading one big process into main memory as a whole, the
operating system can load parts of several processes at once.
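A toy model of the address translation paging performs (page size and page-table contents are made up): the logical address is split into a page number and an offset, and the page table supplies the frame.

```python
# Toy paged address translation: split a logical address into (page, offset)
# and map the page to a frame. Page size and table contents are invented.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 3}  # page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]  # a real OS would raise a page fault if unmapped
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 9 -> 0x9234
```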
In the LRU (Least Recently Used) page-replacement algorithm, the page that
has been least recently used is the one replaced.
LRU works on the principle that pages that have been recently accessed are
more likely to be accessed again in the near future. By replacing the least
recently used pages, it aims to retain the frequently accessed pages in
memory, reducing the number of page faults and improving overall system
performance.
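A small simulation of LRU replacement (the reference string and frame count are illustrative), using insertion order to track recency:

```python
from collections import OrderedDict

def lru_faults(pages, frames):
    """Count page faults for a reference string with `frames` physical frames,
    evicting the least recently used page when memory is full."""
    memory = OrderedDict()  # order of keys tracks recency of use
    faults = 0
    for p in pages:
        if p in memory:
            memory.move_to_end(p)  # hit: now the most recently used
        else:
            faults += 1            # miss: page fault
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[p] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], frames=3))  # 6 faults
```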
16. Thrashing
Thrashing refers to a situation in computer systems where the system spends a
significant amount of time and resources continuously swapping pages between
physical memory (RAM) and secondary storage (such as a hard disk) due to
excessive paging activity. In thrashing, the system is busy moving pages in and
out of memory rather than executing useful work, leading to severe degradation
in performance.
Seek Time
Seek time is the time taken to position the disk arm over the track where the
read/write request will be serviced.
Rotational Latency
Rotational latency is the time taken for the desired sector to rotate under the
read/write head so that it can be accessed.
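A back-of-envelope calculation (the RPM and average seek time are assumed figures): average rotational latency is half of one full rotation.

```python
# Back-of-envelope disk access time. The RPM and average seek time are
# assumed figures; average rotational latency is half a full rotation.
rpm = 7200
full_rotation_ms = 60_000 / rpm        # ~8.33 ms per rotation
avg_rotational_ms = full_rotation_ms / 2
avg_seek_ms = 9.0                      # assumed average seek time
print(f"avg access ~ {avg_seek_ms + avg_rotational_ms:.2f} ms")
```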
1. FCFS (First-Come, First-Served): Requests are serviced in the order in
which they arrive. It is simple and inherently fair, but it makes no attempt to
minimize head movement, so the average seek time can be high.
2. SSTF (Shortest Seek Time First): This algorithm selects the request with
the shortest seek time from the current position of the disk head. It minimizes
the average seek time and reduces the overall disk access time. However, it
may lead to starvation of requests located farther away from the current
position. (SSTF is compared with SCAN in the sketch after this list.)
3. SCAN: Also known as the elevator algorithm, SCAN moves the disk head in
one direction (e.g., from the outermost track to the innermost or vice versa)
and services requests along the way. Once it reaches the end, it changes
direction and continues the same process. This algorithm provides a fair
distribution of service and prevents starvation, but it may result in longer
response times for requests at the far ends of the disk.
4. C-SCAN (Circular SCAN): Similar to SCAN, C-SCAN moves the disk head
in one direction, but instead of reversing direction, it jumps to the other end of
the disk and starts again. This ensures a more consistent response time for
all requests, but it may cause delays for requests that arrive after the head
has passed their location.
5. LOOK: LOOK is a variant of SCAN that only goes as far as the last request
in its current direction. Once there are no more requests in that direction, it
reverses direction. This reduces unnecessary traversal of the entire disk and
improves response times for requests.
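As a rough comparison of two of these policies (using the classic textbook reference string of cylinder numbers; the head start position and disk size are part of that example), here is a sketch that totals head movement for SSTF and SCAN:

```python
def sstf(requests, head):
    """Shortest Seek Time First: repeatedly service the pending request
    closest to the current head position; returns total head movement."""
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

def scan(requests, head, max_cyl):
    """SCAN (elevator), head moving toward higher cylinders: sweep to the
    last cylinder, then reverse for any requests below the start position."""
    below = [r for r in requests if r < head]
    total = max_cyl - head            # outward sweep to the edge of the disk
    if below:
        total += max_cyl - min(below) # sweep back to the lowest request
    return total

# Classic textbook reference string (cylinder numbers are illustrative).
reqs, start = [98, 183, 37, 122, 14, 124, 65, 67], 53
print("SSTF total movement:", sstf(reqs, start))       # 236
print("SCAN total movement:", scan(reqs, start, 199))  # 331
```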