Operating System
Study Material
For
BSc-Computer Science
Prepared By
Prof. P. Sankar, M.C.A., M.Phil., (Ph.D.)
Faculty of Computer Science & Application
2023
OPERATING SYSTEM
Unit I – Operating System Basics
UNIT III MEMORY MANAGEMENT: Basic Concepts of Memory – Address Binding – Logical
and Physical Address Space – Memory Partitioning – Memory Allocation – Paging –
Segmentation – Segmentation and Paging – Protection – Fragmentation – Compaction –
Demand Paging – Page Replacement Algorithms – Classification of Page Replacement
Algorithms.
UNIT IV FILE SYSTEM: File System Storage – File Concept – File Access Methods –
Directory Structure – File Sharing – File Protection – File System Implementation – File
System Structure – Allocation Methods – Free Space Management – Mass Storage Structure
– Disk Structure – Disk Scheduling and Management – RAID Levels.
Operating System
Software:
• System software
System software is the set of programs that control the activities and functions of the
various hardware components, programming tools and abstractions, and other utilities to
monitor the state of the computer system. The most common examples are operating
systems such as Linux, Unix, Windows, MacOS, and OS/2.
• Application software:
Application software consists of the user programs that solve specific problems for users
and execute under the control of the operating system. Application programs are developed
by individuals and organizations for solving specific problems.
Types of Operating Systems
Server Operating System: Server operating systems are designed to run on servers and
manage networked computers. They allow multiple users to access shared resources and
communicate with each other over the network. Examples include Microsoft Windows Server
and various distributions of Linux designed for servers.
Network Operating System: Network Operating System is a type of operating
system that runs on a server and provides the capability to manage data, users,
groups, security, applications, and other networking functions.
Real-time Operating System: A Real-time Operating System serves real-time systems, in
which the time interval required to process and respond to inputs is very small. These
operating systems are designed to respond to events in real time. They are used in
applications that require quick and deterministic responses, such as embedded systems,
industrial control systems, and robotics.
Multi-User Operating Systems: Multi-User Operating Systems are designed to
support multiple users simultaneously. Examples include Linux and Unix.
Embedded Operating Systems: Embedded Operating Systems are designed to run
on devices with limited resources, such as smartphones, wearable devices, and
household appliances. Examples include Google’s Android and Apple’s iOS.
An operating system is software that acts as an intermediary between the user and the
computer hardware. It is the program with the help of which we are able to run various
applications.
Program Execution
The operating system is responsible for the execution of programs. It loads a program into
memory, runs it, and provides the environment the program needs during execution.
File Management
The operating system also helps in managing files. If a program needs access to a file, it is
the operating system that grants access, subject to permissions such as read-only or
read-write. It also provides a platform for the user to create and delete files on storage
media such as floppy disks, hard disks, and pen drives. The operating system decides how
the data should be manipulated and stored.
Error Handling
The operating system also handles errors occurring in the CPU, in input-output devices, and
elsewhere. It ensures that errors do not occur frequently and fixes them. It also prevents
processes from coming to a deadlock.
Resource Management
System resources are shared between various processes. It is the operating system that
manages resource sharing. It also manages CPU time among processes using CPU
scheduling algorithms, helps in the memory management of the system, and controls
input-output devices.
Security
The operating system ensures that all access to system resources is monitored and
controlled. It also ensures that external resources and peripherals are protected from
invalid access. It provides authentication by using usernames and passwords.
Networking
The operating system manages network connections and provides the protocols and services
that allow processes on different computers to communicate with each other.
System Utilities
These are a set of tools and applications that provide additional functionality to the OS,
such as backup and recovery, system optimization, and diagnostic tools.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the
operating system either through a command-line interface or through a graphical user
interface (GUI). A GUI offers the user a mouse-based window and menu system as an
interface.
Process Manager
The process manager is responsible for creating, scheduling, suspending, and terminating
processes, and for switching the CPU among them.
Memory Manager
The memory manager controls the allocation and deallocation of memory. It imposes
certain policies and mechanisms for memory management. This component also includes
policies and mechanisms for memory protection. Relevant memory management schemes
are paging, segmentation, and virtual memory.
Resource Manager
The resource manager keeps track of the system's resources, such as CPU time, memory,
and devices, and allocates them to processes when they are requested.
File Manager
The file manager allows users and processes to create and delete files and directories. In
most modern operating systems, files are associated with mass storage devices such as
magnetic tapes and disks. Data can be read and/or written to a file using functions such as
open file, read data from file, write data to file, and close file.
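As a rough illustration of these operations (a sketch assuming a POSIX C environment; the file name notes.txt is only an example), a program can open a file, write data to it, read the data back, and close it:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* "notes.txt" is an illustrative file name, created if it does not exist */
    int fd = open("notes.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);   /* open file */
    write(fd, "hello file manager\n", 19);                          /* write data to file */

    char buf[64];
    lseek(fd, 0, SEEK_SET);                      /* move back to the start of the file */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* read data from file */
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    close(fd);                                   /* close file */
    return 0;
}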
Device Manager
The device manager operates in combination with the resource manager and file manager.
Usually, user processes are not given direct access to system resources. Instead, processes
request services from the file manager and/or the resource manager.
I/O Structure:
Programmed I/O
In programmed I/O, when input is to be written to a device, the device must be ready to
take the data; otherwise the program waits until the device or its buffer becomes free and
can accept the input. Once the input is taken, the program checks whether the output
device or output buffer is free before the data is printed. This checking is repeated for
every transfer of data.
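The sketch below illustrates this polling behaviour in C. The struct device used here is a simulated stand-in for real memory-mapped device registers, so the example is only an approximation of how programmed I/O interacts with hardware.

#include <stdint.h>
#include <stdio.h>

#define DEVICE_READY 0x01

struct device {                 /* simulated device registers */
    volatile uint8_t status;
    volatile uint8_t data;
};

static void pio_write_byte(struct device *dev, uint8_t byte) {
    while ((dev->status & DEVICE_READY) == 0)
        ;                       /* busy-wait: the CPU polls until the device is free */
    dev->data = byte;           /* the device is ready, so transfer one byte */
}

int main(void) {
    struct device dev = { .status = DEVICE_READY, .data = 0 };
    const char *msg = "hi";
    for (int i = 0; msg[i] != '\0'; i++)
        pio_write_byte(&dev, (uint8_t)msg[i]);   /* repeat the check for every byte */
    printf("last byte written: %c\n", dev.data);
    return 0;
}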
I/O Interrupts
To initiate any I/O operation, the CPU first loads the appropriate registers in the device
controller. The device controller then examines the contents of these registers to determine
what operation to perform.
DMA Structure
Direct Memory Access (DMA) is a method of handling I/O in which the device controller
communicates directly with memory without CPU involvement. After the buffers, pointers,
and counters for the I/O device have been set up, the device controller transfers blocks of
data directly to or from memory without CPU intervention. DMA is generally used for
high-speed I/O devices.
Storage structure
Systems designed to store enormous volumes of data are referred to as mass storage
devices. The term mass storage is sometimes used interchangeably with peripheral storage.
The earliest and most basic mass storage techniques date back to the era of mainframe
computers.
Examples:
1. Magnetic Disks
2. Solid State Disks
3. Magnetic Tapes
System Call
A system call is a mechanism used by programs to request services from the operating
system (OS).
A system call is initiated by the program executing a specific instruction, which triggers a
switch to kernel mode, allowing the program to request a service from the OS. The OS then
handles the request, performs the necessary operations, and returns the result back to the
program.
The kernel carries out the requested action, such as creating or deleting a file, if the request
is approved. The result produced by the kernel is returned to the application, and once it
is received, the application continues its execution.
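For example (a sketch assuming a POSIX/Linux C environment), the short program below requests two services from the kernel through system calls: write() asks the kernel to output text, and getpid() asks the kernel for the calling process's ID.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* write() switches to kernel mode; the kernel performs the I/O and
       returns the number of bytes written to the program */
    write(STDOUT_FILENO, "hello via a system call\n", 24);

    pid_t pid = getpid();        /* another system call: ask the kernel for our PID */
    printf("my process id is %d\n", (int)pid);
    return 0;
}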
Design and implementation are a necessary part of creating an operating system.
There are different aspects involved in designing and implementing an operating system:
Design goals
Mechanism
Implementation
Design goals
Concurrent Systems
Operating systems must handle multiple devices as well as multiple users concurrently. This
is essential for modern multi-core architectures. These requirements make the design of the
operating system complex and difficult.
Security
Operating systems must provide security and privacy. It is important to prevent malicious
users from accessing the system and to prevent the stealing of user programs.
Resource Sharing
The operating system ensures that the resources of the system are shared correctly among
multiple user processes. This becomes more complex when multiple users use the same
device.
Portability
Operating systems must be flexible enough to accommodate changes to the hardware and
software of the system, so that they do not become obsolete; it is costly to change the
operating system every time the hardware or software changes. An operating system that is
able to work with different hardware and systems is called a portable operating system, and
portability is a very important design goal.
Backward Compatibility
Any upgrade to the current operating system should not hinder its compatibility with the
machine; that is, if the previous version of the operating system is compatible with the
system, then the newer or upgraded version should also be compatible with it. This is called
backward compatibility.
Mechanism
When a task is performed in the operating system, a particular mechanism is followed for
input, storage, processing, and output; using this mechanism, memory can be assigned to
the different tasks performed by the computer.
An operating system provides services to users and programs, such as I/O operations,
program execution, file system manipulation, resource allocation, and protection.
Program Execution
The OS handles many activities, from user programs to system programs such as the printer
spooler, name servers, and file servers. Each of these activities is encapsulated as a process.
A process includes the complete execution context.
Implementation
Once the operating system has been designed, it must be implemented. Traditionally
operating systems were written in assembly language; today they are mostly written in
higher-level languages such as C and C++, with only small performance-critical parts in
assembly.
Process
A process is a program in execution. An active program that is currently running on the
operating system is known as a process.
When a program is loaded into memory and becomes a process, it can be divided into four
sections: stack, heap, text and data. A simplified layout of a process inside main memory
contains the following sections.
Stack
The process stack contains temporary data such as method/function parameters, return
addresses, and local variables.
Heap
This is memory dynamically allocated to a process during its run time.
Text
This includes the current activity, represented by the value of the program counter and the
contents of the processor's registers.
Data
This section contains the global and static variables.
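The small program below (an added illustration, assuming a standard C environment) shows where typical objects of a program live: its compiled code in the text section, a global variable in the data section, a malloc'd block in the heap, and a local variable on the stack.

#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;                  /* data section: initialized global variable */

int main(void) {                          /* the compiled code itself lives in the text section */
    int local = 7;                        /* stack: local variable */
    int *dynamic = malloc(sizeof(int));   /* heap: memory allocated at run time */
    *dynamic = 99;

    printf("data: %d, stack: %d, heap: %d\n", global_counter, local, *dynamic);
    free(dynamic);
    return 0;
}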
Process States:
Start
This is the initial state when a process is first started or created.
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the
operating system to allocate the processor to them so that they can run. A process may
come into this state after the Start state, or while running, if it is interrupted by the
scheduler so that the CPU can be assigned to some other process.
Running
Once the process has been assigned to a processor by the OS scheduler, the process state
is set to running and the processor executes its instructions.
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the
information needed to keep track of a process, as listed below.
Process State
The current state of the process, i.e., whether it is ready, running, waiting, or terminated.
Process ID
A unique identification number for each process in the operating system.
Program Counter
A pointer to the address of the next instruction to be executed for this process.
CPU Registers
The various CPU registers whose contents must be saved when the process leaves the
running state, so that it can resume execution later.
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the
process.
Memory Management Information
This includes information about the page table, memory limits, and segment table,
depending on the memory scheme used by the operating system.
I/O Status Information
This includes the list of I/O devices allocated to the process and the list of open files.
A PCB contains all the necessary information about a process, including its process state,
program counter, memory allocation, open files, and CPU scheduling information.
The main purpose of a PCB is to enable the OS to manage multiple processes efficiently by
keeping track of the state of each process and allocating system resources accordingly.
When a process is created, the OS creates a PCB for that process and stores all the
necessary information about the process in it. The OS then uses the information in the PCB
to manage the process and ensure that it runs efficiently.
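For illustration only, a PCB can be pictured as a C structure like the one below; the field names and sizes are assumptions made for teaching purposes and do not match any real kernel's layout.

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;               /* process ID */
    enum proc_state state;             /* process state */
    uint64_t        program_counter;   /* address of the next instruction */
    uint64_t        registers[16];     /* saved CPU registers */
    int             priority;          /* CPU scheduling information */
    void           *page_table;        /* memory management information */
    int             open_files[16];    /* I/O status: open file descriptors */
    struct pcb     *next;              /* link used when the PCB sits in a scheduling queue */
};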
Process Scheduling:
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a
particular strategy.
Categories of Scheduling
1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. During resource allocation, the process switches from the running state to the ready
state or from the waiting state to the ready state. This switching occurs because the CPU
may give priority to other processes and replace the currently running process with a
higher-priority one.
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.).
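As a small sketch of one such policy, the FIFO ready queue below stores processes (represented here only by their PIDs, a simplification) and dispatches them in arrival order.

#include <stdio.h>

#define MAX 8

static int queue[MAX];
static int head = 0, tail = 0, count = 0;

static void enqueue(int pid) {            /* a process becomes ready */
    if (count < MAX) {
        queue[tail] = pid;
        tail = (tail + 1) % MAX;
        count++;
    }
}

static int dequeue(void) {                /* the scheduler picks the next process */
    if (count == 0)
        return -1;
    int pid = queue[head];
    head = (head + 1) % MAX;
    count--;
    return pid;
}

int main(void) {
    enqueue(101);
    enqueue(102);
    enqueue(103);
    int a = dequeue();
    int b = dequeue();
    int c = dequeue();
    printf("dispatch order: %d %d %d\n", a, b, c);   /* FIFO: 101 102 103 */
    return 0;
}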
Operations on Process:
Creation
This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system, the
user, or the old process itself. There are several events that lead to the process creation.
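On UNIX-like systems, for example, a running process creates a new process with the fork() system call, as in the sketch below (assuming a POSIX C environment).

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* ask the OS to create a new (child) process */

    if (pid == 0) {
        printf("child:  pid=%d\n", (int)getpid());
    } else {
        printf("parent: pid=%d created child %d\n", (int)getpid(), (int)pid);
        wait(NULL);                   /* parent waits for the child to finish */
    }
    return 0;
}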
Scheduling/Dispatching
The event or activity in which the state of the process is changed from ready to run. It
means the operating system puts the process from the ready state into the running state.
Dispatching is done by the operating system when the resources are free or the process has
higher priority than the ongoing process.
Blocking
When a process invokes an input-output system call that blocks it, the operating system
puts the process in block mode. Block mode is basically a mode where the process waits for
input-output. Hence, on the demand of the process itself, the operating system blocks the
process and dispatches another process to the processor. In process-blocking operations,
therefore, the operating system puts the process in a 'waiting' state.
Preemption
The operating system preempts the process. This operation is only valid where CPU
scheduling supports preemption. Basically, this happens in priority scheduling, where the
arrival of a higher-priority process causes the ongoing process to be preempted. Hence, in
the process preemption operation, the operating system puts the process in a 'ready' state.
Process Termination
Process termination is the activity of ending the process. In other words, process
termination is the release of the computer resources taken by the process for its execution.
Like creation, there may be several events that lead to process termination.
Interprocess Communication (IPC)
IPC lets different programs run in parallel, share data, and communicate with each other.
It is important for two reasons: first, it speeds up the execution of tasks, and second, it
ensures that the tasks run correctly and in the proper order.
Message passing
Another important way in which inter-process communication takes place is via message
passing. When two or more processes participate in inter-process communication, each
process sends messages to the others via the kernel. For example, Process A sends a
message such as "M" to the OS kernel, and this message is then read by Process B. A
communication link is required between the two processes for successful message exchange.
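A concrete sketch of such kernel-mediated message passing is given below, assuming Linux with POSIX message queues; the queue name /demo_mq is invented for this example, and the program may need to be linked with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    /* the kernel object "/demo_mq" acts as the communication link */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    if (fork() == 0) {                              /* child plays the role of Process B */
        char buf[64];
        mq_receive(mq, buf, sizeof(buf), NULL);     /* blocks until the kernel delivers a message */
        printf("Process B received: %s\n", buf);
        return 0;
    }
    mq_send(mq, "M", 2, 0);                         /* Process A sends "M" via the kernel */
    wait(NULL);
    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}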
Shared Memory
Shared memory is a region of memory established and shared by two or more processes.
Access to this memory must be protected by synchronizing the processes. Two processes,
say A and B, can set up a shared memory segment and exchange data through this shared
memory area.
If process A wants to communicate with process B, it needs to attach the shared memory
segment to its address space. Process A writes a message to the shared memory and
process B reads that message from the shared memory. The processes are responsible for
ensuring synchronization so that both do not write to the same location at the same time.
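The sketch below shows this in C, assuming Linux with POSIX shared memory (the name /demo_shm is invented, and -lrt may be needed when linking). A real program would synchronize the two processes with a semaphore; here a crude sleep() only suggests the required ordering.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* create and size a shared memory object, then map it into the address space */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                        /* Process B: reads from the segment */
        sleep(1);                             /* crude ordering instead of real synchronization */
        printf("Process B read: %s\n", shm);
        return 0;
    }
    strcpy(shm, "hello from Process A");      /* Process A: writes into the segment */
    wait(NULL);
    munmap(shm, 4096);
    shm_unlink("/demo_shm");
    return 0;
}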
SIGNAL :
A process can send a signal to another process. A signal also allows a process to interrupt
another process. A signal is a way of communicating between processes.
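For instance (a sketch assuming a POSIX C environment), a parent process can interrupt its child by sending the SIGUSR1 signal, which the child handles:

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void handler(int sig) {
    (void)sig;
    /* write() is safe to call from a signal handler */
    write(STDOUT_FILENO, "child: got SIGUSR1\n", 19);
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                  /* child installs a handler and waits for a signal */
        signal(SIGUSR1, handler);
        pause();                     /* sleep until a signal arrives */
        return 0;
    }
    sleep(1);                        /* give the child time to install its handler */
    kill(pid, SIGUSR1);              /* parent interrupts the child with a signal */
    wait(NULL);
    return 0;
}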
PIPES :
Pipes are a type of data channel commonly used for one-way communication between two
processes. Because a single pipe is a half-duplex (unidirectional) channel, an additional pipe
is required to achieve full duplex: one pipe creates a unidirectional data channel, while two
pipes together create a bidirectional data channel between the two processes. Pipes are
widely used on both UNIX-like and Windows operating systems.
Client/Server communication
Client/Server communication involves two components, namely a client and a server. There
are usually multiple clients in communication with a single server. The clients send
requests to the server and the server responds to the client requests.
Sockets
Sockets facilitate communication between two processes on the same machine or different
machines. They are used in a client/server framework and consist of the IP address and
port number. Many application protocols use sockets for data connection and data transfer
between a client and a server.
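A minimal sketch of the client side is shown below (assuming a POSIX C environment); the address 127.0.0.1 and port 8080 are placeholders, and a server must already be listening there for the exchange to succeed.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);   /* create a TCP socket */

    struct sockaddr_in server = {0};              /* server IP address and port number */
    server.sin_family = AF_INET;
    server.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == 0) {
        char reply[128];
        send(sock, "request", 7, 0);                           /* client sends a request */
        ssize_t n = recv(sock, reply, sizeof(reply) - 1, 0);   /* wait for the server's response */
        if (n > 0) {
            reply[n] = '\0';
            printf("reply: %s\n", reply);
        }
    }
    close(sock);
    return 0;
}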
Remote Procedure Calls (RPC)
A client has a request that the RPC mechanism translates and sends to the server. This
request may be a procedure or a function call to a remote server. When the server receives
the request, it sends the required response back to the client.
Pipes
These are interprocess communication methods that contain two end points. Data is
entered from one end of the pipe by a process and consumed from the other end by the
other process.
The two different types of pipes are ordinary pipes and named pipes. Ordinary pipes only
allow one-way communication; for two-way communication, two pipes are required.
Ordinary pipes have a parent-child relationship between the processes, as the pipes can only
be accessed by processes that created or inherited them.
Named pipes are more powerful than ordinary pipes and allow two-way communication.
These pipes exist even after the processes using them have terminated.
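The sketch below (assuming a POSIX C environment) shows an ordinary pipe: the parent writes into one end and the child, which inherited the pipe across fork(), reads from the other end, giving one-way communication.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                         /* fd[0] is the read end, fd[1] is the write end */

    if (fork() == 0) {                /* child inherits the pipe (parent-child relationship) */
        char buf[32];
        close(fd[1]);                 /* child only reads */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        return 0;
    }
    close(fd[0]);                     /* parent only writes */
    write(fd[1], "one-way message", 15);
    close(fd[1]);
    wait(NULL);
    return 0;
}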
THREADS
A thread is a flow of execution through the process code, with its own program counter,
registers, and stack. Priority can be assigned to threads just as to processes, and the
highest-priority thread is scheduled first.
MULTITHREADING
Multithreading is the ability of a process to contain multiple threads of execution that run
concurrently and share the process's code, data, and resources.
Types of Threads
User-level Threads: User-level threads can be easily implemented by the user and are
managed in user space. When the user-level threads of a process run within that single
process, a kernel-level thread manages them.
Kernel-level Threads: A kernel-level thread is a type of thread that the operating system can
recognize directly. The kernel maintains its own thread table where it keeps track of all the
threads in the system, and the operating system kernel helps in managing them.
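As a small illustration (assuming a POSIX C environment with pthreads; compile with -pthread), the program below creates two threads, which on Linux are backed by kernel-level threads that the operating system schedules alongside the main thread.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);   /* each thread runs this function */
    return NULL;
}

int main(void) {
    pthread_t tid[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);   /* create a new thread */
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);                          /* wait for both threads to finish */
    return 0;
}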