
OPERATING SYSTEM

An Operating System performs all the basic tasks like managing files, processes, and memory. It thus acts as the manager of all the system's resources, i.e., a resource manager, and serves as an interface between the user and the machine.

Types of Operating Systems: Some widely used operating systems are as follows- 


1. Batch Operating System –
This type of operating system does not interact with the computer directly. An operator takes jobs with similar requirements and groups them into batches; it is the operator's responsibility to sort jobs with similar needs.
Examples: payroll and bank statement processing.

ADVANTAGES
 Multiple users can share batch systems
 The idle time for a batch system is very low
 Large, repetitive workloads are easy to manage in batch systems

DISADVANTAGES
 Computer operators must be familiar with batch systems
 Batch systems are hard to debug
 They are sometimes costly
 If a job fails, the other jobs must wait for an unknown amount of time
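The operator's grouping step described above can be sketched in Python. This is a minimal illustration only; the job names and "requirement" labels are hypothetical:

```python
from itertools import groupby

# Hypothetical job list: (name, requirement) pairs. The operator's task
# is to group jobs with the same requirement into one batch.
jobs = [
    ("payroll_jan", "payroll"),
    ("payroll_feb", "payroll"),
    ("stmt_acct_1", "bank_statement"),
    ("stmt_acct_2", "bank_statement"),
]

def make_batches(jobs):
    """Sort jobs by requirement, then group identical requirements into batches."""
    ordered = sorted(jobs, key=lambda j: j[1])
    return {req: [name for name, _ in grp]
            for req, grp in groupby(ordered, key=lambda j: j[1])}

batches = make_batches(jobs)
# Each batch is then submitted to the machine as a single unit.
```

Each resulting batch is run end to end without user interaction, which is exactly what makes debugging a failed job inside a batch difficult.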

2. Time-Sharing Operating Systems –

Each task is given some time to execute so that all tasks run smoothly. Each user gets a share of CPU time, since they all use a single system. These systems are also known as multitasking systems. The tasks can come from a single user or from different users. The time each task gets to execute is called a quantum; after this time interval is over, the OS switches to the next task. Examples: Multics, Unix.
Advantages of Time-Sharing OS:  
 Each task gets an equal opportunity
 Fewer chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problems
 One must take care of the security and integrity of user programs and data
 Data communication problems

3. Distributed Operating System –

This type of operating system is a recent advancement in computing and is being widely adopted at a great pace. Various autonomous, interconnected computers communicate with each other over a shared network. Each independent system has its own memory unit and CPU; such systems are referred to as loosely coupled or distributed systems. Their processors can differ in size and function. The major benefit of this type of operating system is that a user can access files or software that are not actually present on his own system but on some other system in the network, i.e., remote access is enabled among the devices connected in that network.
RAM-:
The full form of RAM is Random Access Memory. The information stored in this type of memory is lost when the power supply to the PC or laptop is switched off. The amount of installed RAM can be checked with the help of the BIOS. It is generally known as the main memory, temporary memory, or volatile memory of the computer system.
TYPES OF RAM

 DRAM – Dynamic RAM must be continuously refreshed, or otherwise all its contents are lost.
 SRAM – Static RAM is faster and needs less power but is more expensive. Unlike DRAM, it does not need to be refreshed.
 SDRAM – Synchronous Dynamic RAM is synchronized to the system clock and can run at very high clock speeds.
 DDR SDRAM – Double Data Rate SDRAM transfers data on both the rising and falling edges of the clock.

ROM:
The full form of ROM is Read-Only Memory. It is a permanent type of memory: its contents are not lost when the power supply is switched off. The computer manufacturer decides the contents of the ROM, and they are permanently stored at the time of manufacturing, so they cannot be overwritten by the user.
TYPES OF ROM-:

 EPROM: The full form of EPROM is Erasable Programmable Read-Only Memory. It stores instructions that can be erased only by exposing the memory chip to ultraviolet light.
 PROM: The full form of PROM is Programmable Read-Only Memory. This type of ROM is written, or programmed, once using a particular device.
 EEPROM: The full form of EEPROM is Electrically Erasable Programmable Read-Only Memory. It stores and deletes instructions electrically, in circuit.
 MROM (Mask ROM): a type of read-only memory (ROM) whose contents can be programmed only by the integrated circuit manufacturer.

RAM vs ROM, parameter by parameter:

Usage:
 RAM allows the computer to read data quickly in order to run applications.
 ROM stores the instructions needed to boot the computer initially; it only allows reading.

Volatility:
 RAM is volatile, so its contents are lost when the device is powered off.
 ROM is non-volatile; its contents are retained even if the device is powered off.

Accessibility:
 Information stored in RAM is easily accessed by the processor.
 The processor cannot directly access information stored in ROM; to access it, the information is first transferred into RAM and then executed by the processor.

Read/Write:
 Both read (R) and write (W) operations can be performed on information stored in RAM.
 ROM allows the user to read the information but not to alter it.

Storage:
 RAM is used to store temporary information.
 ROM is used to store permanent information, which is non-erasable.

Speed:
 The access speed of RAM is faster.
 ROM is slower than RAM; it cannot boost processor speed.

Cost:
 The price of RAM is quite high.
 The price of ROM is comparatively low.

Chip size:
 The physical size of a RAM chip is bigger than that of a ROM chip of the same storage capacity.
 The physical size of a ROM chip is smaller than that of a RAM chip of the same capacity.

Preservation of data:
 Electricity is needed for RAM to preserve its information.
 Electricity is not required for ROM to preserve its information.

Structure:
 The RAM chip is rectangular and is inserted into the motherboard of the computer.
 ROM is a type of storage medium that permanently stores data on personal computers (PCs) and other electronic devices.

SRAM VS DRAM-:
What is SRAM?
SRAM is a type of semiconductor memory that uses bistable latching circuitry to store each bit. In this type of RAM, data is stored using a six-transistor memory cell. Static RAM is mostly used as cache memory for the processor (CPU). SRAM is relatively faster than other RAM types, such as DRAM, and it also consumes less power. The full form of SRAM is Static Random Access Memory.

What is DRAM?

DRAM is a type of RAM that stores each bit of data in a separate capacitor within an integrated circuit. It is the standard main memory of any modern desktop computer. The full form of DRAM is Dynamic Random Access Memory. DRAM is constructed using capacitors and a few transistors. In this type of RAM, the capacitor is used to store the data: a bit value of 1 signifies that the capacitor is charged, and a bit value of 0 means that the capacitor is discharged.

SRAM vs DRAM, point by point:

Speed:
 SRAM has a lower access time and is faster.
 DRAM has a higher access time and is slower than SRAM.

Cost:
 SRAM is costlier than DRAM.
 DRAM costs less than SRAM.

Power:
 SRAM needs a constant power supply but consumes less power.
 DRAM consumes more power, as the information is stored in capacitors that must be refreshed.

Packaging density:
 SRAM offers low packaging density.
 DRAM offers high packaging density.

Construction:
 SRAM uses transistors and latches.
 DRAM uses capacitors and very few transistors.

Typical use:
 L2 and L3 CPU cache units are common applications of SRAM.
 DRAM is mostly found as the main memory in computers.

Storage capacity:
 The storage capacity of SRAM is typically 1 MB to 16 MB.
 The storage capacity of DRAM is typically 1 GB to 16 GB.

Placement:
 SRAM is on-chip memory, widely used on the processor or lodged between the main memory and the processor.
 DRAM is off-chip memory, placed on the motherboard.

Size:
 SRAM is smaller in size.
 DRAM is available in larger storage capacities.

Working principle:
 SRAM works on the principle of changing the direction of current through switches.
 DRAM works by holding charges.

PROM vs EPROM vs EEPROM

The main difference between PROM, EPROM, and EEPROM is that PROM is programmable only once, EPROM is reprogrammable using ultraviolet light, and EEPROM is reprogrammable using an electric charge.
VIRTUALIZATION VS CONTAINERIZATION
Virtualization is technology that simulates your physical hardware (such as CPU cores, memory, and disk) and presents it as a separate machine with its own guest OS, kernel, processes, drivers, etc. It is therefore hardware-level virtualization. The most common technologies are VMware and VirtualBox.
Containerization is OS-level virtualization. It does not simulate the entire physical machine; it virtualizes only the operating system, so multiple applications can share the same OS kernel. Containers play a role similar to virtual machines, but without hardware virtualization. The most common container technology is Docker.
Virtualization vs Containerization, key by key:

1. Basic: Virtualization simulates physical hardware (such as CPU cores, memory, and disk) and presents it as a separate machine. Containerization is OS-level virtualization and does not simulate the entire physical machine.

2. Detaching layer: Virtualization uses a hypervisor to abstract the physical machine. Containerization uses a container engine, e.g., the Docker engine in the case of Docker.

3. Isolation level: Virtualization provides hardware-level isolation, so it is fully secured. Containerization provides process-level isolation.

4. Weight: Virtual machines are heavyweight. Containers are very lightweight.

5. Portability: Virtual machines are not very portable. Containers are very portable: we can build, ship, and run them anywhere.

UEFI and BIOS

Both UEFI and BIOS are low-level software that starts when you boot your PC before
booting your operating system, but UEFI is a more modern solution, supporting larger hard
drives, faster boot times, more security features, and—conveniently—graphics and mouse
cursors.

BIOS is short for Basic Input-Output System. It's low-level software that resides in a chip on your computer's motherboard. The BIOS loads when your computer starts up and is responsible for waking up your computer's hardware components, ensuring they're functioning properly, and then running the bootloader that boots Windows or whatever other operating system you have installed. The BIOS goes through a POST, or Power-On Self-Test, before booting your operating system. It checks to ensure your hardware configuration is valid and working properly. If something is wrong, you'll see an error message or hear a cryptic series of beep codes; you'll have to look up what the different sequences of beeps mean in the computer's manual.

When your computer boots, and after the POST finishes, the BIOS looks for a Master Boot Record, or MBR, stored on the boot device and uses it to launch the bootloader. The BIOS must run in 16-bit processor mode and has only 1 MB of space to execute in. It has trouble initializing multiple hardware devices at once, which leads to a slower boot process when initializing all the hardware interfaces and devices on a modern PC.

The BIOS has needed replacement for a long time


How UEFI Replaces and Improves on the BIOS

UEFI replaces the traditional BIOS on PCs. There’s no way to switch from BIOS to UEFI on an
existing PC. You need to buy new hardware that supports and includes UEFI, as most new
computers do. Most UEFI implementations provide BIOS emulation so you can choose to
install and boot old operating systems that expect a BIOS instead of UEFI, so they’re
backwards compatible.

UEFI can run in 32-bit or 64-bit mode and has more addressable memory than BIOS, which means your boot process is faster. It also means that UEFI setup screens can be slicker than BIOS settings screens, including graphics and mouse cursor support. However, this isn't mandatory: many PCs still ship with text-mode UEFI settings interfaces that look and work like an old BIOS setup screen.
Difference between Multiprogramming, Multitasking,
Multithreading and Multiprocessing
1. Multiprogramming – A computer running more than one program at a time (like running Excel and Firefox
simultaneously).
2. Multiprocessing – A computer using more than one CPU at a time.
3. Multitasking – Tasks sharing a common resource (like 1 CPU).
4. Multithreading is an extension of multitasking.
1. Multiprogramming –
In a modern computing system, there are usually several concurrent application processes that want to execute, and it is the responsibility of the operating system to manage all of them effectively and efficiently. One of the most important aspects of an operating system is multiprogramming.
In a computer system, multiple processes are waiting to be executed, i.e., waiting for the CPU to be allocated to them so that they can begin execution. These processes are also known as jobs. The main memory is too small to accommodate all of these jobs at once, so they are initially kept in an area called the job pool, which consists of all processes awaiting allocation of main memory and CPU.
The OS selects one job from the pool, brings it into main memory, and starts executing it. The processor executes that job until it is interrupted by some external factor or the job goes for an I/O task.
Non-multiprogrammed system's working –
 In a non-multiprogrammed system, as soon as one job leaves the CPU for some other task (say I/O), the CPU becomes idle. The CPU waits until this job comes back and resumes its execution, so the CPU remains free the whole time.
 The drawback is that the CPU remains idle for very long periods. Other jobs waiting to execute might not get a chance, because the CPU is still allocated to the earlier job. This is a serious problem: even though other jobs are ready to execute, the CPU is allocated to a job that is not even utilizing it (since it is busy with I/O).
 It should not happen that one job uses the CPU for, say, 1 hour while others wait in the queue for 5 hours. To avoid situations like this and to utilize the CPU efficiently, the concept of multiprogramming came up.
The main idea of multi programming is to maximize the CPU time.
Multiprogrammed system's working –
 In a multiprogrammed system, as soon as one job goes for an I/O task, the operating system interrupts that job, chooses another job from the job pool (waiting queue), gives the CPU to this new job, and starts its execution. The previous job keeps doing its I/O operation while the new job does CPU-bound tasks. If the second job also goes for I/O, the CPU chooses a third job and starts executing it. As soon as a job completes its I/O operation and comes back for CPU tasks, the CPU is allocated to it.
 In this way, no CPU time is wasted waiting for I/O tasks to complete.
Therefore, the ultimate goal of multiprogramming is to keep the CPU busy as long as there are processes ready to execute. Multiple programs can be executed on a single processor by executing part of one program, then part of another, and so on; hence the CPU never remains idle.
For example, program A runs for some time and then goes to the waiting state; in the meantime, program B begins its execution, so the CPU does not waste its resources and gives program B an opportunity to run.
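The switching behaviour described above can be sketched as a toy simulation. The job names, burst counts, and trace format are invented for illustration:

```python
from collections import deque

# Toy multiprogramming trace: each job alternates CPU bursts and I/O waits.
# When the running job starts I/O, the OS immediately dispatches another
# ready job instead of letting the CPU sit idle.
def run(jobs):
    ready = deque(jobs)          # job pool: (name, remaining CPU bursts)
    trace = []
    while ready:
        name, bursts = ready.popleft()
        trace.append(f"{name}:cpu")          # job uses the CPU for one burst
        bursts -= 1
        if bursts > 0:
            trace.append(f"{name}:io")       # job leaves for I/O ...
            ready.append((name, bursts))     # ... and rejoins the pool later
    return trace

trace = run([("A", 2), ("B", 1)])
# A runs, goes to I/O, B runs while A waits, then A finishes its last burst.
```

The key point the trace shows is that B's CPU burst overlaps A's I/O wait, so the CPU is never idle while work is available.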
2. Multiprocessing –
In a uniprocessor system, only one process executes at a time.
Multiprocessing is the use of two or more CPUs (processors) within a single computer system. The term also refers to the ability of a system to support more than one processor. Since multiple processors are available, multiple processes can be executed at a time. These processors share the computer bus and sometimes the clock, memory, and peripheral devices.
Multiprocessing system's working –
 With the help of multiprocessing, many processes can be executed simultaneously. Say processes P1, P2, P3, and P4 are waiting for execution. In a single-processor system, the processes execute one after another.
 With multiprocessing, each process can be assigned to a different processor. A dual-core processor (2 processors) can execute two processes simultaneously, giving up to a twofold speedup; similarly, a quad-core processor can be up to four times as fast as a single processor.
Why use multiprocessing –
 The main advantage of a multiprocessor system is to get more work done in a shorter period of time. These systems are used when very high speed is required to process a large volume of data. Multiprocessing systems can save money compared to multiple single-processor systems, because the processors can share peripherals and power supplies.
 It also provides increased reliability: if one processor fails, the work does not halt; it only slows down. For example, if we have 10 processors and 1 fails, the remaining 9 processors share the work of the 10th, so the whole system runs only about 10 percent slower rather than failing altogether.

Multiprocessing refers to the hardware (i.e., the CPU units) rather than the software (i.e., running
processes). If the underlying hardware provides more than one processor then that is multiprocessing.
It is the ability of the system to leverage multiple processors’ computing power.
Difference between multiprogramming and multiprocessing –
 A system can be both multiprogrammed, by having multiple programs running at the same time, and multiprocessing, by having more than one physical processor. The difference is that multiprocessing executes multiple processes at the same time on multiple processors, whereas multiprogramming keeps several programs in main memory and executes them concurrently using a single CPU.
 Multiprocessing occurs by means of parallel processing, whereas multiprogramming occurs by switching from one process to another (a phenomenon called context switching).
3. Multitasking –
As the name suggests, multitasking refers to the execution of multiple tasks (processes, programs, threads, etc.) at a time. In modern operating systems, we can play MP3 music, edit documents in Microsoft Word, and browse the web in Google Chrome all simultaneously; this is accomplished by means of multitasking.
Multitasking is a logical extension of multiprogramming. The major way multitasking differs from multiprogramming is that multiprogramming works solely on the concept of context switching, whereas multitasking is based on time sharing alongside context switching.
Multitasking system's working –
 In a time-sharing system, each process is assigned a specific quantum of time for which it is meant to execute. Say there are 4 processes P1, P2, P3, P4 ready to execute. Each of them is assigned a time quantum, e.g., 5 nanoseconds (5 ns). As one process begins execution (say P2), it executes for that quantum of time (5 ns); after 5 ns the CPU starts executing the next process (say P3) for its quantum.
 Thus the CPU makes the processes share time slices between them and execute accordingly. As soon as the time quantum of one process expires, another process begins its execution.
 A context switch is occurring here too, but it happens so fast that the user can interact with each program separately while it runs. The user is given the illusion that multiple processes/tasks are executing simultaneously, but only one process/task is actually executing at any particular instant. In multitasking, time sharing is best manifested, because each running process gets only a fair quantum of CPU time.
In a more general sense, multitasking refers to having multiple programs, processes, tasks, or threads running at the same time. The term is used in modern operating systems when multiple tasks share a common processing resource (e.g., CPU and memory).
 At any given time, the CPU is executing only one task while the other tasks wait for their turn. The illusion of parallelism is achieved when the CPU is reassigned to another task; tasks A, B, and C all appear to run simultaneously because of time sharing.
 So for multitasking to take place, there must first be multiprogramming, i.e., multiple programs ready for execution, and second, the concept of time sharing.
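The time-slice rotation described above is essentially round-robin scheduling, which can be sketched as follows. The process names and burst times are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate time sharing: each process runs for at most `quantum`
    units, then the CPU is switched to the next ready process."""
    ready = deque(processes)   # (name, remaining time) pairs
    order = []
    while ready:
        name, remaining = ready.popleft()
        slice_ = min(quantum, remaining)
        order.append((name, slice_))         # process runs for this slice
        remaining -= slice_
        if remaining > 0:
            ready.append((name, remaining))  # quantum expired: go to the back

    return order

# Two processes sharing the CPU in slices of 5 time units.
schedule = round_robin([("P2", 8), ("P3", 5)], quantum=5)
```

Here P2 runs for a full quantum, is pre-empted, P3 runs to completion, and P2 then finishes its remaining work, which is exactly the fair-share behaviour the text describes.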
4. Multithreading –
A thread is a basic unit of CPU utilization. Multithreading is an execution model that allows a single process to have multiple code segments (i.e., threads) running concurrently within the "context" of that process.
For example, in VLC media player, one thread runs the player itself, another thread plays a particular song, and another thread adds new songs to the playlist.
Multithreading is the ability of a process to serve more than one user at a time, and to manage multiple requests from the same user, without needing multiple copies of the program.
Multithreading system's working –
Example 1 –
 Say there is a web server that processes client requests. If it executes as a single-threaded process, it cannot process multiple requests at a time: one client must make its request and finish before the server can process another client's request. This is costly and time-consuming. To avoid this, multithreading can be used.
 Whenever a new client request comes in, the web server simply creates a new thread to process it and resumes listening for more client requests. So the web server's task is to listen for new client requests and create a thread for each individual request; each newly created thread processes one client request, reducing the burden on the web server.
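The thread-per-request idea can be sketched with Python's standard threading module. No real sockets are used here; the requests and their handling are simulated:

```python
import threading

# Thread-per-request sketch: the "server" loop hands each incoming
# request to a new thread and immediately goes back to listening.
results = []
lock = threading.Lock()

def handle_request(request_id):
    response = f"handled {request_id}"
    with lock:                    # protect the shared results list
        results.append(response)

threads = []
for request_id in range(3):       # three client requests arrive
    t = threading.Thread(target=handle_request, args=(request_id,))
    t.start()                     # server resumes "listening" right away
    threads.append(t)

for t in threads:
    t.join()                      # wait for all handlers to finish
```

Because the threads share the process's memory, the handlers can all append to one shared list, but that sharing is also why the lock is needed.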
Example 2 –
 We can think of threads as child processes that share the parent process's resources but execute independently. Now take the case of a GUI. Say we are performing a calculation in the GUI that takes a very long time to finish; we cannot interact with the rest of the GUI until this command finishes. To remain responsive, the calculation should be assigned to a separate thread. Then two threads are executing: one for the calculation and one for the rest of the GUI. Here, within a single process, we used multiple threads for multiple pieces of functionality.
Advantages of multithreading –
 Increased responsiveness. Since there are multiple threads in a program, if one thread takes too long to execute or gets blocked, the rest of the threads keep executing without any problem, so the whole program remains responsive to the user through the remaining threads.
 Lower cost. Creating brand-new processes and allocating resources is time-consuming, but since threads share the resources of the parent process, creating threads and switching between them is comparatively cheap. Hence multithreading is a necessity for modern operating systems.
MONOLITHIC KERNEL VS MICROKERNEL

A monolithic kernel is a single large process running entirely in a single address space. It is a single static binary file. All kernel services exist and execute in the kernel address space, and the kernel can invoke functions directly. Examples of monolithic kernel based OSs: Unix, Linux.

In microkernels, the kernel is broken down into separate processes, known as servers. Some of the servers run in
kernel space and some run in user-space. All servers are kept separate and run in different address spaces. Servers
invoke "services" from each other by sending messages via IPC (Interprocess Communication). This separation has the
advantage that if one server fails, other servers can still work efficiently. Examples of microkernel based OSs: Mac OS X
and Windows NT.
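The message-passing style of microkernel servers can be loosely modelled with threads and queues. This is a toy sketch, not a real kernel: the "file server", the message format, and the reply text are all invented for illustration:

```python
import queue
import threading

# Toy model of microkernel-style IPC: two components run separately and
# communicate only by sending messages through queues, never by calling
# each other's functions directly.
requests = queue.Queue()
replies = queue.Queue()

def file_server():
    # Serves one request, then exits (a real server would loop forever).
    msg = requests.get()
    replies.put(f"contents of {msg['path']}")

server = threading.Thread(target=file_server)
server.start()

# A client asks the file server for data by sending a message.
requests.put({"op": "read", "path": "/etc/hosts"})
reply = replies.get()
server.join()
```

The point of the model is the isolation: the client never touches the server's state directly, so if one component crashed, the other could keep running, as the text says.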

What happens when we turn on a computer?


A computer without a program running is just an inert hunk of electronics. The first thing a computer has to do when
it is turned on is to start up a special program called an operating system. The operating system’s job is to help other
computer programs work by handling the messy details of controlling the computer’s hardware. 

An overview of the boot process  

The boot process happens every time you turn your computer on. You don't really see it, because it happens so fast: you press the power button and come back a few seconds (or minutes, on slow storage like an HDD) later, and Windows 10, Windows 11, or whatever operating system you use is fully loaded.
The BIOS tells the CPU to look in a fixed place, usually on the lowest-numbered hard disk (the boot disk), for a special program called a boot loader (under Linux, common boot loaders are GRUB and LILO). The boot loader is pulled into memory and started; its job is to start the real operating system.

Difference between Program and Process


PROGRAM-:

When we execute a program that was just compiled, the OS generates a process to execute it. Execution of the program starts via GUI mouse clicks, command-line entry of its name, etc. A program is a passive entity, as it resides in secondary memory, such as the contents of a file stored on disk. One program can have several processes.

Process:
The term process (or job) refers to program code that has been loaded into a computer's memory so that it can be executed by the central processing unit (CPU). A process can be described as an instance of a program running on a computer, or as an entity that can be assigned to and executed on a processor. A program becomes a process when loaded into memory, and is thus an active entity.

Program vs Process, point by point:

1. A program contains a set of instructions designed to complete a specific task. A process is an instance of an executing program.

2. A program is a passive entity, as it resides in secondary memory. A process is an active entity, created during execution and loaded into main memory.

3. A program exists in a single place and continues to exist until it is deleted. A process exists for a limited span of time; it is terminated after the completion of its task.

4. A program is a static entity. A process is a dynamic entity.

5. A program has no resource requirement; it only needs memory space to store its instructions. A process has high resource requirements; it needs resources like CPU, memory addresses, and I/O during its lifetime.

6. A program has no control block. A process has its own control block, called the Process Control Block (PCB).

7. A program has two logical components: code and data. A process requires, in addition to program data, further information needed for its management and execution.

8. A program does not change itself. Many processes may execute a single program; their program code may be the same, but their program data may differ and are never the same.
States of a Process in Operating Systems

States of a process are as following: 

 New (Create) – In this step, the process is about to be created but not yet created; it is the program present in secondary memory that will be picked up by the OS to create the process.

 Ready – After creation, the process enters the ready state, i.e., it is loaded into main memory. The process is ready to run and is waiting for CPU time. Processes that are ready for execution by the CPU are maintained in a queue of ready processes.

 Run – The process is chosen by the scheduler for execution, and its instructions are executed by one of the available CPU cores.

 Blocked or wait – Whenever the process requests access to I/O or needs input from the user or
needs access to a critical region(the lock for which is already acquired) it enters the blocked or
wait state. The process continues to wait in the main memory and does not require CPU. Once the
I/O operation is completed the process goes to the ready state.

 Terminated or completed – Process is killed as well as PCB is deleted.

 Suspend ready – Process that was initially in the ready state but was swapped out of main
memory(refer Virtual Memory topic) and placed onto external storage by scheduler is said to be
in suspend ready state. The process will transition back to ready state whenever the process is
again brought onto the main memory.

 Suspend wait or suspend blocked – Similar to suspend ready, but applies to a process that was performing an I/O operation and was moved to secondary memory due to a lack of main memory. When the work is finished, it may go to the suspend ready state.

CPU and I/O Bound Processes: If the process is intensive in terms of CPU operations then it is called
CPU bound process. Similarly, If the process is intensive in terms of I/O operations then it is called I/O
bound process. 

Types of schedulers:

1. Long term – performance – Makes a decision about how many processes should stay in the ready state; this decides the degree of multiprogramming. Once a decision is taken, it lasts for a long time, hence the name long-term scheduler.

2. Short term – Context switching time – The short-term scheduler decides which process is to be executed next and then calls the dispatcher. A dispatcher is software that moves a process from ready to run and vice versa; in other words, it performs context switching.
3. Medium term – Swapping time – Suspension decision is taken by medium term scheduler.
Medium term scheduler is used for swapping that is moving the process from main memory to
secondary and vice versa.

Multiprogramming – We have many processes ready to run. There are two types of multiprogramming:

1. Pre-emption – The process is forcefully removed from the CPU. Pre-emption is also called time sharing or multitasking.

2. Non pre-emption – Processes are not removed until they complete the execution.

Degree of multiprogramming – The maximum number of processes that can reside in the ready state decides the degree of multiprogramming; e.g., if the degree of multiprogramming = 100, then at most 100 processes can reside in the ready state.

Process Table and Process Control Block (PCB)


While creating a process, the operating system performs several operations. To identify processes, it
assigns a process identification number (PID) to each process. As the operating system supports multi-
programming, it needs to keep track of all the processes. For this task, the process control block (PCB) is
used to track each process's execution status. Each block of memory contains information about the
process state, program counter, stack pointer, status of opened files, scheduling algorithms, etc. All this
information is required and must be saved when the process switches from one state to another. When
the process makes a transition from one state to another, the operating system must update the information in
the process's PCB.

A process control block (PCB) contains information about the process, i.e. registers, quantum, priority,
etc. The process table is an array of PCBs; that means it logically contains a PCB for each of the current
processes in the system.

 Pointer – It is a stack pointer which is required to be saved when the process is switched from
one state to another to retain the current position of the process.

 Process state – It stores the respective state of the process.

 Process number – Every process is assigned with a unique id known as process ID or PID which
stores the process identifier.

 Program counter – It stores the counter which contains the address of the next instruction that is
to be executed for the process.

 Register – These are the CPU registers, which include the accumulator, base register, and general-purpose registers.
 Memory limits – This field contains the information about memory management system used by
operating system. This may include the page tables, segment tables etc.

 Open files list – This information includes the list of files opened for a process.

Miscellaneous accounting and status data – This field includes information about the amount of CPU
used, time constraints, jobs or process number, etc.
The process control block stores the register contents, also known as the execution context of the processor,
saved when the process was blocked from running. This saved execution context enables the operating system to
restore a process's execution context when the process returns to the running state. When the process
makes a transition from one state to another, the operating system updates its information in the process's
PCB. The operating system maintains pointers to each process's PCB in a process table so that it can
access the PCB quickly.

What does a process look like in memory?



A program loaded into memory and executing is called a process. In simple, a process is a program in
execution.

When a program is created, it is just a sequence of bytes stored on the hard disk as a passive entity. The program is
loaded into memory and becomes an active entity when it is double-clicked in Windows or the name of the
executable file is entered on the command line (e.g. a.out or prog.exe).

TEXT

A process is more than the program code; the code segment is known as the text section. This section of memory
contains the executable instructions of a program. It also contains constants and macros, and it is a read-only segment to
prevent accidental modification of an instruction. It is also sharable, so that another process can use it
whenever required.

DATA

Next, the data section of memory contains the global and static variables that are initialized by the
programmer prior to the execution of the program. This segment is not read-only, as the values of the variables can
change at run time.

For example, in a C program −

#include <stdio.h>

int b;            // will be stored in the data section

int main() {
   static int a;  // will be stored in the data section
   return 0;
}
HEAP

For variables whose size cannot be statically determined by the compiler before program execution, memory
must be allocated dynamically at the programmer's request; this is done in the heap segment. The size can
only be determined at run time. The heap is managed via calls to malloc, calloc and free in C (or new and delete
in C++). For example, in C, malloc(2) returns the starting address of a 2-byte block in the heap area.

STACK

A process generally also includes the process stack, which contains temporary data, i.e. function parameters, return
addresses, and local variables. On the standard x86 architecture it grows downward toward lower addresses, but on some
other architectures it may grow in the opposite direction. As shown in the diagram, the stack grows in the opposite direction
of the heap to avoid overlap. This section is committed to storing all the data needed by a function call in
a program.

A stack pointer register keeps track of the top of the stack, i.e., how much of the stack area is being used by the current
process, and it is modified each time a value is "pushed" onto the stack. If the stack pointer meets the heap pointer,
the available free memory is depleted.

Difference between Process and Thread

S.NO | Process | Thread

1. A process means any program in execution. | A thread is a segment of a process.

2. The process takes more time to terminate. | The thread takes less time to terminate.

3. It takes more time for creation. | It takes less time for creation.

4. It takes more time for context switching. | It takes less time for context switching.

5. The process is less efficient in terms of communication. | The thread is more efficient in terms of communication.

6. Multiprogramming holds the concept of multi-process. | We don't need multiple programs in action for multiple threads, because a single process consists of multiple threads.

7. The process is isolated. | Threads share memory.

8. The process is called the heavyweight process. | A thread is lightweight, as each thread in a process shares code, data, and resources.

9. Process switching uses an interface in the operating system. | Thread switching does not require calling the operating system or causing an interrupt to the kernel.

10. If one process is blocked, it will not affect the execution of other processes. | If a user-level thread is blocked, then all other user-level threads are blocked.

11. The process has its own Process Control Block, stack, and address space. | The thread has its parent's PCB, its own Thread Control Block and stack, and a common address space.

12. Changes to the parent process do not affect child processes. | Since all threads of the same process share the address space and other resources, changes to the main thread may affect the behavior of the other threads of the process.

13. A system call is involved in it. | No system call is involved; it is created using APIs.
Operating System - Process Scheduling


Definition

The process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and
the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of a Multiprogramming operating systems. Such operating systems allow more than one
process to be loaded into the executable memory at a time and the loaded process shares the CPU using time multiplexing.

Process Scheduling Queues

The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for each of the process states and
PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB
is unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new
process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to
move processes between the ready and run queues; the run queue can have only one entry per processor core on the system. In the above
diagram, it has been merged with the CPU.

Two-State Process Model

Two-state process model refers to running and non-running states which are described below −

S.N. State & Description

1
Running

When a new process is created, it enters the system in the running state.

2
Not Running

Processes that are not running are kept in a queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a particular process. The queue is implemented
using a linked list. The dispatcher works as follows: when a process is interrupted,
it is transferred to the waiting queue; if the process has completed or
aborted, it is discarded. In either case, the dispatcher then selects a
process from the queue to execute.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be
submitted into the system and to decide which process to run. Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It
selects processes from the queue and loads them into memory for execution; the process is loaded into memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound. It also
controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may not be available or minimal. Time-sharing operating systems have no long term
scheduler. When a process changes the state from new to ready, then there is use of long-term scheduler.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of
criteria. It carries out the change of a process from the ready state to the running state. The CPU scheduler selects a process among the processes that
are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are
faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes the processes from the memory. It reduces the degree of
multiprogramming. The medium-term scheduler is in-charge of handling the swapped out-processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards
completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is
moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison among Scheduler

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler

1. It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.

2. Its speed is lesser than that of the short-term scheduler. | Its speed is the fastest among the three. | Its speed is in between the short-term and long-term schedulers.

3. It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.

4. It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.

5. It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory, and execution can be continued.

What do the terms "CPU bound" and "I/O bound" mean?


A program is CPU bound if it would go faster if the CPU were faster, i.e. it spends the majority of its time
simply using the CPU (doing calculations). A program that computes new digits of π will typically be CPU-
bound, it's just crunching numbers.

A program is I/O bound if it would go faster if the I/O subsystem was faster. Which exact I/O system is
meant can vary; I typically associate it with disk, but of course networking or communication in general
is common too. A program that looks through a huge file for some data might become I/O bound, since
the bottleneck is then the reading of the data from disk (actually, this example is perhaps kind of old-
fashioned these days with hundreds of MB/s coming in from SSDs).

What is Context Switching in Operating System?


Context Switching involves storing the context or state of a process so that it can be reloaded when required and execution can
be resumed from the same point as earlier. This is a feature of a multitasking operating system and allows a single CPU to be
shared by multiple processes.
A diagram that demonstrates context switching is as follows −
In the above diagram, initially Process 1 is running. Process 1 is switched out and Process 2 is switched in because of an
interrupt or a system call. Context switching involves saving the state of Process 1 into PCB1 and loading the state of process 2
from PCB2. After some time again a context switch occurs and Process 2 is switched out and Process 1 is switched in again.
This involves saving the state of Process 2 into PCB2 and loading the state of process 1 from PCB1.

Context Switching Triggers


There are three major triggers for context switching. These are given as follows −
 Multitasking: In a multitasking environment, a process is switched out of the CPU so another process can be run. The
state of the old process is saved and the state of the new process is loaded. On a pre-emptive system, processes may be
switched out by the scheduler.
 Interrupt Handling: The hardware switches a part of the context when an interrupt occurs. This happens automatically.
Only some of the context is changed to minimize the time required to handle the interrupt.
 User and Kernel Mode Switching: A context switch may take place when a transition between the user mode and kernel
mode is required in the operating system.

Context Switching Steps


The steps involved in context switching are as follows −
 Save the context of the process that is currently running on the CPU. Update the process control block and other important
fields.
 Move the process control block of the above process into the relevant queue such as the ready queue, I/O queue etc.
 Select a new process for execution.
 Update the process control block of the selected process. This includes updating the process state to running.
 Update the memory management data structures as required.
 Restore the context of the process that was previously running when it is loaded again on the processor. This is done by
loading the previous values of the process control block and registers.

Context Switching Cost


Context switching leads to an overhead cost because of TLB flushes, sharing the cache between multiple tasks, running
the task scheduler, etc. Context switching between two threads of the same process is faster than between two different
processes, as the threads share the same virtual memory maps, so TLB flushing is not required.

Maximum number of zombie processes a system can handle


Zombie processes (or defunct processes) are those that have completed their execution via the exit() system call but still have an entry
in the process table. A zombie is a process in the terminated state.
When a child process is created in UNIX using the fork() system call, this situation arises if the parent process is not available to reap the child
from the process table. A zombie process is neither completely dead nor completely alive; it is in a state in between.
Since there is an entry in the process table for every process, including zombies, and the process table is finite in size,
creating zombie processes in large numbers will fill up the process table, and programs will stop without completing their
task.

THREAD CONCEPT

Process | Thread

A process simply means any program in execution. | A thread simply means a segment of a process.

The process consumes more resources. | The thread consumes fewer resources.

The process requires more time for creation. | The thread requires comparatively less time for creation than a process.

The process is a heavyweight process. | The thread is known as a lightweight process.

The process takes more time to terminate. | The thread takes less time to terminate.

Processes have independent data and code segments. | A thread mainly shares the data segment, code segment, files, etc. with its peer threads.

The process takes more time for context switching. | The thread takes less time for context switching.

Communication between processes needs more time compared to threads. | Communication between threads needs less time compared to processes.

If a process gets blocked, the remaining processes can continue their execution. | If a user-level thread gets blocked, all of its peer threads also get blocked.

Advantages of Thread
1. Responsiveness
2. Resource sharing, hence allowing better utilization of resources.
3. Economy. Creating and managing threads becomes easier.
4. Scalability. One thread runs on one CPU. In Multithreaded processes, threads can be distributed
over a series of processors to scale.
5. Context Switching is smooth. Context switching refers to the procedure followed by the CPU to
change from one task to another.
6. Enhanced Throughput of the system. Let us take an example for this: suppose a process is divided
into multiple threads, and the function of each thread is considered as one job, then the number
of jobs completed per unit of time increases which then leads to an increase in the throughput of
the system.

Types of Thread
There are two types of threads:

1. User Threads
2. Kernel Threads

User threads are implemented above the kernel, without kernel support. These are the threads that application
programmers use in their programs.

Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel-level
threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel
system calls simultaneously.

Let us now understand the basic difference between User level Threads and Kernel level threads:
User Level Threads | Kernel Level Threads

These threads are implemented by users. | These threads are implemented by the operating system.

These threads are not recognized by the operating system. | These threads are recognized by the operating system.

In user-level threads, the context switch requires no hardware support. | In kernel-level threads, hardware support is needed.

These threads are mainly designed as dependent threads. | These threads are mainly designed as independent threads.

In user-level threads, if one thread performs a blocking operation then the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution.

Examples of user-level threads: Java threads, POSIX threads. | Examples of kernel-level threads: Windows, Solaris.

The implementation of user-level threads is done by a thread library and is easy. | The implementation of kernel-level threads is done by the operating system and is complex.

These threads are generic in nature and can run on any operating system. | These are specific to the operating system.

Multithreading Models
The user threads must be mapped to kernel threads, by one of the following strategies:

 Many to One Model

 One to One Model

 Many to Many Model

Many to One Model

 In the many to one model, many user-level threads are all mapped onto a single kernel thread.

 Thread management is handled by the thread library in user space, which is efficient in nature.

 This model is used on systems whose kernel does not support threads: the user-level thread
library does all thread management in user space, and the kernel sees only a single thread of
control for the whole process.

One to One Model

 The one to one model creates a separate kernel thread to handle each and every user thread.

 Most implementations of this model place a limit on how many threads can be created.

 Linux and Windows from 95 to XP implement the one-to-one model for threads.

 This model provides more concurrency than that of many to one Model.

Many to Many Model

 The many to many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads, combining the best features of the one-to-one and many-to-one
models.

 Users can create any number of threads.

 Blocking the kernel system calls does not block the entire process.

 Processes can be split across multiple processors.


What are Thread Libraries?
Thread libraries provide programmers with API for the creation and management of threads.

Thread libraries may be implemented either in user space or in kernel space. The user space involves API
functions implemented solely within the user space, with no kernel support. The kernel space involves
system calls and requires a kernel with thread library support.

Three types of Thread

1. POSIX Pthreads may be provided as either a user or kernel library, as an extension to the POSIX
standard.

2. Win32 threads are provided as a kernel-level library on Windows systems.

3. Java threads: Since Java generally runs on a Java Virtual Machine, the implementation of threads
is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or Win32
threads depending on the system.

Multithreading Issues
Below we have mentioned a few issues related to multithreading. As the old saying goes, all good things
come at a price.

Thread Cancellation
Thread cancellation means terminating a thread before it has finished working. There are two
approaches: asynchronous cancellation, which terminates the target thread immediately, and
deferred cancellation, which allows the target thread to periodically check whether it should be canceled.

Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred. When a
multithreaded process receives a signal, to which thread should it be delivered? It can be delivered to all
threads or to a single thread.

fork() System Call


fork() is a system call executed in the kernel through which a process creates a copy of itself. The
problem in a multithreaded process is: if one thread calls fork(), should the entire process (all threads) be copied, or only the calling thread?

Security Issues
Yes, there can be security issues because of the extensive sharing of resources between multiple threads.

There are many other issues that you might face in a multithreaded process, but there are appropriate
solutions available for them. Pointing out some issues here was just to study both sides of the coin.
Benefits of Multithreading in Operating System
The benefits of multi threaded programming can be broken down into four major categories:
1. Responsiveness –
Multithreading in an interactive application may allow a program to continue running even if a part of it is blocked or is performing a
lengthy operation, thereby increasing responsiveness to the user.
In a non-multithreaded environment, a server listens to the port for requests; when a request comes, it processes it and only then
resumes listening for the next request. The time taken to process a request makes other users wait unnecessarily.
A better approach is to pass the request to a worker thread and continue listening on the port.

For example, a multithreaded web browser allows user interaction in one thread while a video is being loaded in another thread. So
instead of waiting for the whole web page to load, the user can continue viewing a portion of the web page.

2. Resource Sharing –
Processes may share resources only through techniques such as-
 Message Passing
 Shared Memory
Such techniques must be explicitly organized by the programmer. However, threads share the memory and the resources of the process to
which they belong by default.
The benefit of sharing code and data is that it allows an application to have several threads of activity within the same address space.

3. Economy –
Allocating memory and resources for process creation is a costly job in terms of time and space.
Since threads share the memory of the process to which they belong, it is more economical to create and context-switch threads. Generally much
more time is consumed in creating and managing processes than threads.
In Solaris, for example, creating a process is about 30 times slower than creating a thread, and context switching is about 5 times slower.
4. Scalability –
The benefits of multithreading greatly increase in a multiprocessor architecture, where threads may run in parallel on
multiple processors. With only one thread, it is not possible to divide the process into smaller tasks that different
processors can perform.
Single threaded process can run only on one processor regardless of how many processors are available.
Multi-threading on a multiple CPU machine increases parallelism.
The main purpose of multithreading is to provide simultaneous execution of two or more parts of a program that can run concurrently.

Threads are independent. If an exception occurs in one thread, it doesn’t affect the others.

Some multithreaded applications would be:

1. Web Browsers - A web browser can download any number of files and web pages (multiple tabs) at the same time and still lets
you continue browsing. If a particular web page cannot be downloaded, that is not going to stop the web browser from
downloading other web pages.
2. Web Servers - A threaded web server handles each request with a new thread. There is a thread pool and every time a new
request comes in, it is assigned to a thread from the thread pool.
3. Computer Games - You have various objects like cars, humans, birds which are implemented as separate threads. Also playing
the background music at the same time as playing the game is an example of multithreading.
4. Text Editors - When you are typing in an editor, spell-checking, formatting of text and saving the text are done concurrently by
multiple threads. The same applies for Word processors also.
5. IDE - IDEs like Android Studio run multiple threads at the same time. You can open multiple programs at the same time. It
also gives suggestions on the completion of a command which is a separate thread.

Optimal number of threads per core


If your threads don't do I/O, synchronization, etc., and there's nothing else running, one thread per core will get you the best performance.
However, that is very likely not the case. Adding more threads usually helps, but after some point they cause performance degradation.
Multi-Core Processors are a Headache for Multithreaded Code
Multithreading is really a dual-purpose technology. Threads are used to build parallel software for multi-processor
systems, and they are also used to asynchronously handle interactions with other software and the real world. The
latter use case is relevant even if the software is running on a single processor, which is why lots of multithreaded
code existed before multi-processor systems were common.

Why are static variables considered evil?


Static variables represent global state. That's hard to reason about and hard to test: if I create a new instance of an object, I can reason about
its new state within tests. If I use code which is using static variables, it could be in any state - and anything could be modifying it.

I could go on for quite a while, but the bigger concept to think about is that the tighter the scope of something, the easier it is to reason
about. We're good at thinking about small things, but it's hard to reason about the state of a million-line system if there's no modularity. This
applies to all sorts of things, by the way - not just static variables.
