Os Mid Chapter

The document discusses operations on processes in an operating system, including creation, scheduling, blocking, preemption, and termination, which are essential for managing program execution and resource allocation. It also covers Inter-Process Communication (IPC) methods, such as shared memory and message passing, which enable processes to communicate and synchronize their activities. Additionally, the document explains the concept of threads, their types, and the importance of synchronization in IPC.

Operations on Processes

Process operations refer to the actions performed on processes in an operating system, including creating, terminating, suspending, resuming, and communicating between processes. These operations are crucial for managing and controlling the execution of programs, and they are fundamental to the functioning of operating systems, enabling an effective flow of program execution and resource allocation. The lifecycle of a process includes several critical operations: creation, scheduling, blocking, preemption, and termination. Each operation plays a vital role in ensuring that processes are efficiently managed, allowing for multitasking and optimal resource utilization. In this article, we will discuss the various operations on a process.
What is a Process?
A process is a program under execution. Every process needs certain resources to complete its task. Processes are dispatched from the ready state and scheduled on the CPU for execution. The PCB (Process Control Block) holds the context of the process. A process can create other processes, which are known as child processes. Compared to a thread, a process takes more time to terminate, and it is isolated, meaning it does not share its memory with any other process. A process can be in one of the following states: new, ready, running, waiting, terminated, and suspended.
 Text: The text section contains the program code, along with the current activity represented by the value of the Program Counter.
 Stack: The stack contains temporary data, such as function parameters, return addresses, and local variables.
 Data: Contains the global variables.
 Heap: Memory dynamically allocated to the process during its run time.
Operations on a Process
The execution of a process is a complex activity involving various operations. The following operations are performed during the execution of a process:
1. Creation
This is the initial step of the process execution activity. Process
creation means the construction of a new process for execution. This
might be performed by the system, the user, or the old process itself.
There are several events that lead to process creation. Some of these events are the following:
 When we start the computer, the system creates several background processes.
 A user may request to create a new process.
 A process can create a new process itself while executing (a minimal sketch of this follows the list).
 The batch system initiates a batch job.
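To make creation concrete, here is a minimal sketch, assuming a POSIX system: fork() creates a child process, the child exits, and the parent collects it with wait(). It is an illustration only, not a full treatment of process management.

```c
/* A minimal sketch of process creation on a POSIX system.
   fork() duplicates the calling process; wait() lets the
   parent block until the child terminates. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                        /* create a child process */
    if (pid < 0) {
        perror("fork failed");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        printf("Child: pid=%d\n", getpid());   /* child branch */
        exit(EXIT_SUCCESS);                    /* child terminates here */
    } else {
        wait(NULL);                            /* parent waits for the child */
        printf("Parent: child %d finished\n", (int)pid);
    }
    return 0;
}
```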
2. Scheduling/Dispatching
Scheduling or dispatching is the event in which the state of the process changes from ready to running: the operating system moves the process from the ready state into the running state. Dispatching is done by the operating system when resources become free or when the process has a higher priority than the one currently running. There are various other cases in which the process in the running state is preempted and a process in the ready state is dispatched by the operating system.
3. Blocking
When a process invokes an input-output system call, the call blocks the process: the process enters a mode in which it waits for the input-output operation to complete. Hence, on the demand of the process itself, the operating system blocks the process and dispatches another process to the processor. In the process-blocking operation, the operating system puts the process in a ‘waiting’ state.
4. Preemption
When a timeout occurs, meaning the process has not finished within the allotted time interval and the next process is ready to execute, the operating system preempts the running process. This operation is only valid where CPU scheduling supports preemption; it also happens in priority scheduling, where the arrival of a high-priority process preempts the ongoing one. In the process-preemption operation, the operating system puts the process back into the ‘ready’ state.
5. Process Termination
Process termination is the activity of ending a process. In other words, process termination releases the computer resources the process had taken for its execution. Like creation, termination may also be triggered by several events. Some of them are:
 The process completes its execution fully and indicates to the OS that it has finished.
 The operating system itself terminates the process due to service errors.
 A problem in the hardware terminates the process.
Conclusion
Operations on processes are crucial for managing and controlling program execution in an operating system. These activities, which include creation, scheduling, blocking, preemption, and termination, allow for more efficient use of system resources and guarantee that processes run smoothly. Understanding these operations is critical to improving system performance and dependability; they are what let a computer do many things at once without crashing or slowing down.

Inter Process Communication (IPC)




Processes need to communicate with each other in many situations. For example, to count the occurrences of a word in a text file, the output of the grep command needs to be given to the wc command, as in grep -o -i <word> <file> | wc -l. Inter-Process Communication (IPC) is a mechanism that allows processes to communicate. It helps processes synchronize their activities, share information, and avoid conflicts while accessing shared resources.
Types of Process
Let us first talk about the types of processes.
 Independent process: An independent process is not affected by the execution of other processes. Independent processes do not share any data or resources with other processes, so no inter-process communication is required.
 Co-operating process: Co-operating processes interact with each other and share data or resources, so a co-operating process can be affected by other executing processes. Inter-process communication (IPC) is the mechanism that allows such processes to communicate with each other and synchronize their actions; the communication between these processes can be seen as a method of cooperation between them.
Inter Process Communication
Inter process communication (IPC) allows different programs or
processes running on a computer to share information with each
other. IPC allows processes to communicate by using different
techniques like sharing memory, sending messages, or using files. It
ensures that processes can work together without interfering with
each other. Cooperating processes require an Inter Process
Communication (IPC) mechanism that will allow them to exchange
data and information.
The two fundamental models of Inter Process Communication are:
 Shared Memory
 Message Passing
Figure 1 below shows a basic structure of communication between
processes via the shared memory method and via the message
passing method.
An operating system can implement both methods of communication.
First, we will discuss the shared memory methods of communication
and then message passing. Communication between processes using
shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it. One
way of communication using shared memory can be imagined like
this: Suppose process1 and process2 are executing simultaneously,
and they share some resources or use some information from another
process. Process1 generates information about certain computations
or resources being used and keeps it as a record in shared memory.
When process2 needs to use the shared information, it will check in
the record stored in shared memory and take note of the information
generated by process1 and act accordingly. Processes can use shared memory both to extract information recorded by another process and to deliver specific information to other processes.

Let’s discuss an example of communication between processes using the shared memory method; a minimal sketch follows.
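The sketch below is one minimal way such communication could look on a POSIX system; an anonymous shared mapping created with mmap() stands in here for a full shm_open()-based setup, and the message text is made up.

```c
/* A minimal sketch of shared-memory IPC between a parent and child,
   using an anonymous shared mapping (POSIX mmap). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create a memory region visible to both parent and child. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                       /* child: the writer */
        strcpy(shared, "hello from child");  /* record information */
        return 0;
    }
    wait(NULL);                              /* parent: wait, then read */
    printf("Parent read: %s\n", shared);
    munmap(shared, 4096);
    return 0;
}
```

A real shared-memory application would also add synchronization (for example, a semaphore) so the reader does not look at the record before the writer has finished it; here the parent simply waits for the child to terminate.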
Methods in Inter process Communication
Inter-Process Communication refers to the techniques and methods
that allow processes to exchange data and coordinate their activities.
Since processes typically operate independently in a multitasking
environment, IPC is essential for them to communicate effectively
without interfering with one another. There are several methods of
IPC, each designed to suit different scenarios and requirements.
These methods include shared memory, message passing, semaphores, and signals.
Shared Memory
In the shared memory method, multiple processes are given access to the same region of memory. This shared memory allows the processes to communicate with each other by reading and writing data directly to that memory area.
Shared memory in IPC can be visualized as global variables in a program, which are shared across the entire program. Shared memory in IPC, however, goes beyond global variables: it allows multiple processes to share data through a common memory space, whereas global variables are restricted to a single process.
To read more, refer to IPC through Shared Memory.
Message Passing
IPC through Message Passing is a method where processes
communicate by sending and receiving messages to exchange data.
In this method, one process sends a message, and the other process
receives it, allowing them to share information. Message Passing can
be achieved through different methods like Sockets, Message Queues
or Pipes.
Sockets provide an endpoint for communication, allowing processes to send and receive messages over a network. In this method, one process (the server) opens a socket and listens for incoming connections, while the other process (the client) connects to the server and sends data. Sockets can use different communication protocols, such as TCP (Transmission Control Protocol) for reliable, connection-oriented communication or UDP (User Datagram Protocol) for faster, connectionless communication. A minimal socket sketch follows.
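As a hedged sketch of socket-based message passing, the program below forks a child that acts as the server and a parent that acts as the client, exchanging one UDP datagram over the loopback interface. The port number 9000 is an arbitrary choice for illustration.

```c
/* A minimal sketch of message passing over a socket (UDP on loopback).
   The child is the server, the parent is the client. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);                  /* arbitrary port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (fork() == 0) {                            /* child: server */
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        bind(s, (struct sockaddr *)&addr, sizeof addr);
        char buf[64];
        ssize_t n = recvfrom(s, buf, sizeof buf - 1, 0, NULL, NULL);
        buf[n] = '\0';
        printf("Server received: %s\n", buf);
        close(s);
        return 0;
    }
    sleep(1);                  /* crude wait so the server can bind first */
    int s = socket(AF_INET, SOCK_DGRAM, 0);       /* parent: client */
    sendto(s, "ping", 4, 0, (struct sockaddr *)&addr, sizeof addr);
    close(s);
    wait(NULL);
    return 0;
}
```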
To read more, refer to IPC using Message Queues.
Different methods of Inter process Communication (IPC) are as
follows:
1. Pipes – A pipe is a unidirectional communication channel used for IPC between two related processes. One process writes to the pipe, and the other process reads from it (see the pipe sketch after this list).
Types of Pipes are Anonymous Pipes and Named Pipes (FIFOs)
2. Sockets – Sockets are used for network communication
between processes running on different hosts. They provide a
standard interface for communication, which can be used
across different platforms and programming languages.
3. Shared memory – In shared memory IPC, multiple processes
are given access to a common memory space. Processes can
read and write data to this memory, enabling fast
communication between them.
4. Semaphores – Semaphores are used for controlling access
to shared resources. They are used to prevent multiple
processes from accessing the same resource simultaneously,
which can lead to data corruption.
5. Message Queuing – This allows messages to be passed between processes using either a single queue or several message queues. Message queuing is managed by the system kernel, and the messages are coordinated using an API.
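The following is a minimal sketch of method 1 (an anonymous pipe) on a POSIX system: the parent writes a short message and the child reads it.

```c
/* A minimal sketch of IPC through an anonymous pipe: the parent
   writes a message and the child reads it (unidirectional channel). */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                     /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {             /* child: the reader */
        close(fd[1]);              /* close unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("Child read: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                  /* parent: the writer; close read end */
    const char *msg = "hello through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);                  /* signals EOF to the reader */
    wait(NULL);
    return 0;
}
```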
Inter Process Communication across the
System
Inter-Process Communication (IPC) across the system refers to the
methods that allow processes to communicate and exchange data,
even when they are running on different machines or in a distributed
environment.
Using Remote Procedure Calls
Remote Procedure Calls (RPC) allow a program to call a procedure (or function) on another machine in a network, as though it were a local call. RPC abstracts the details of communication and makes distributed systems easier to build: processes running on different hosts can call procedures on each other as if they were running on the same host.
Using Remote Method Invocation
Remote Method Invocation (RMI) is a Java-based technique used for
Inter-Process Communication (IPC) across systems, specifically for
calling methods on objects located on remote machines. It allows a
program running on one computer (the client) to execute a method on
an object residing on another computer (the server), as if it were a
local method call.
Each method of IPC has its own advantages and disadvantages, and
the choice of which method to use depends on the specific
requirements of the application. For example, if high-speed
communication is required between processes running on the same
host, shared memory may be the best choice. On the other hand, if
communication is required between processes running on different
hosts, sockets or RPC may be more appropriate.

Role of Synchronization in IPC

In IPC, synchronization is essential for controlling access to shared resources and guaranteeing that processes do not conflict with one another. Proper synchronization ensures data consistency and avoids problems like race conditions; a minimal sketch follows.
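As an illustration, the sketch below uses a POSIX semaphore placed in shared memory to serialize updates to a shared counter between a parent and a child, preventing a race condition. It assumes a system with unnamed process-shared semaphores (e.g., Linux); on platforms without them, sem_open() would be used instead.

```c
/* A minimal sketch of synchronization in IPC: a POSIX semaphore in
   shared memory serializes increments of a shared counter so that
   parent and child do not race. */
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared { sem_t lock; long counter; };

int main(void) {
    struct shared *sh = mmap(NULL, sizeof *sh, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (sh == MAP_FAILED) { perror("mmap"); return 1; }
    sem_init(&sh->lock, 1, 1);       /* 1 = shared between processes */
    sh->counter = 0;

    pid_t pid = fork();
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sh->lock);         /* enter critical section */
        sh->counter++;               /* protected read-modify-write */
        sem_post(&sh->lock);         /* leave critical section */
    }
    if (pid == 0) return 0;          /* child is done */
    wait(NULL);
    printf("Final counter: %ld\n", sh->counter);  /* always 200000 */
    sem_destroy(&sh->lock);
    return 0;
}
```

Without the sem_wait()/sem_post() pair, the two processes could interleave their read-modify-write steps and lose updates.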
Advantages of IPC
 Enables processes to communicate with each other and share
resources, leading to increased efficiency and flexibility.
 Facilitates coordination between multiple processes, leading
to better overall system performance.
 Allows for the creation of distributed systems that can span
multiple computers or networks.
 Can be used to implement various synchronization and communication protocols, such as semaphores, pipes, and sockets.
Disadvantages of IPC
 Increases system complexity, making it harder to design,
implement, and debug.
 Can introduce security vulnerabilities, as processes may be
able to access or modify data belonging to other processes.
 Requires careful management of system resources, such as memory and CPU time, to ensure that IPC operations do not degrade overall system performance.
 Can lead to data inconsistencies if multiple processes try to access or modify the same data at the same time.
Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary mechanism for modern operating systems: it enables processes to work together and share resources in a flexible and efficient manner. However, IPC systems must be designed and implemented carefully in order to avoid potential security vulnerabilities and performance issues.

Thread in Operating System

A thread is a single sequence stream within a process. Threads are also called lightweight processes, as they possess some of the properties of processes. Each thread belongs to exactly one process.
 In an operating system that supports multithreading, a process can consist of many threads. But threads can run truly in parallel only when there is more than one CPU; on a single CPU, the threads must context switch to share it.
 All threads belonging to the same process share the code section, data section, and OS resources (e.g., open files and signals).
 But each thread has its own thread control block: thread ID, program counter, register set, and a stack.
 Any operating system process can execute threads; in other words, a single process can have multiple threads. A minimal sketch of two threads sharing data follows.
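Here is a minimal sketch using POSIX threads (compile with -pthread): the two threads read the same global variable in the shared data section, while each has its own stack and thread ID.

```c
/* A minimal sketch of two POSIX threads in one process.
   Both threads see the same global variable (shared data section),
   while each has its own stack. Compile with: cc file.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared_value = 42;           /* data section, shared by all threads */

void *worker(void *arg) {
    int id = *(int *)arg;        /* 'id' lives on this thread's own stack */
    printf("Thread %d sees shared_value = %d\n", id, shared_value);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);      /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}
```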
Why Do We Need Thread?
 Threads run in parallel, improving application performance. Each thread has its own CPU state and stack, but the threads share the address space and environment of the process.
 Threads can share common data so they do not need to
use inter-process communication. Like the processes, threads
also have states like ready, executing, blocked, etc.
 Priority can be assigned to the threads just like the process,
and the highest priority thread is scheduled first.
 Each thread has its own Thread Control Block (TCB). Like a process, a thread undergoes context switches, during which its register contents are saved in the TCB. As threads share the same address space and resources, synchronization is also required for the various activities of the threads.
Components of Threads
These are the basic components of a thread:
 Stack Space: Stores local variables, function calls, and return addresses specific to the thread.
 Register Set: Holds temporary data and intermediate results of the thread’s execution.
 Program Counter: Tracks the current instruction being executed by the thread.
Types of Thread in Operating System
Threads are of two types. These are described below.
 User Level Thread
 Kernel Level Thread

1. User Level Thread

A user-level thread is a thread that is not created using system calls; the kernel plays no part in the management of user-level threads. User-level threads can be easily implemented by the user, and the kernel is unaware of them: it treats the whole process as a single-threaded entity. Let’s look at the advantages and disadvantages of user-level threads.
Advantages of User-Level Threads
 Implementation of user-level threads is easier than that of kernel-level threads.
 Context switch time is lower for user-level threads.
 User-level threads are more efficient than kernel-level threads.
 Because a user-level thread consists only of a program counter, register set, and stack space, it has a simple representation.
Disadvantages of User-Level Threads
 The operating system is unaware of user-level threads, so
kernel-level optimizations, like load balancing across CPUs, are
not utilized.
 If a user-level thread makes a blocking system call, the entire
process (and all its threads) is blocked, reducing efficiency.
 User-level thread scheduling is managed by the application,
which can become complex and may not be as optimized as
kernel-level scheduling.
2. Kernel Level Threads
A kernel-level thread is a thread that the operating system recognizes and manages directly. The kernel maintains a thread table to keep track of all threads in the system and handles their management. Kernel-level threads have somewhat longer context-switching times.
Advantages of Kernel-Level Threads
 Kernel-level threads can run on multiple processors or cores
simultaneously, enabling better utilization of multicore
systems.
 The kernel is aware of all threads, allowing it to manage and
schedule them effectively across available resources.
 Applications that block frequently are handled better by kernel-level threads, since the kernel can schedule another thread while one is blocked.
 The kernel can distribute threads across CPUs, ensuring
optimal load balancing and system performance.
Disadvantages of Kernel-Level threads
 Context switching between kernel-level threads is slower
compared to user-level threads because it requires mode
switching between user and kernel space.
 Managing kernel-level threads involves frequent system calls
and kernel interactions, leading to increased CPU overhead.
 A large number of threads may overload the kernel scheduler,
leading to potential performance degradation in systems with
many threads.
 Implementation of this type of thread is a little more complex
than a user-level thread.
For the differences, refer to Difference Between User-Level Thread and Kernel-Level Thread.
Difference Between Process and Thread
The primary difference is that threads within the same process run in a
shared memory space, while processes run in separate memory
spaces. Threads are not independent of one another like processes
are, and as a result, threads share with other threads their code
section, data section, and OS resources (like open files and signals).
But, like a process, a thread has its own program counter (PC), register
set, and stack space.
For more, refer to Difference Between Process and Thread.
What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve
parallelism by dividing a process into multiple threads. For example, in
a browser, multiple tabs can be different threads. MS Word uses
multiple threads: one thread to format the text, another thread to
process inputs, etc. More advantages of multithreading are discussed
below.
Multithreading is a technique used in operating systems to improve the
performance and responsiveness of computer systems. Multithreading
allows multiple threads (i.e., lightweight processes) to share the same
resources of a single process, such as the CPU, memory, and I/O
devices.

Single Threaded vs Multi-threaded Process

Multithreading can be done without OS support, as seen in Java’s early “green threads” model: threads are implemented by the Java Virtual Machine (JVM), which provides its own thread management. These threads, also called user-level threads, are managed independently of the underlying operating system.
The application itself manages the creation, scheduling, and execution of threads without relying on the operating system’s kernel: it contains a threading library that handles thread creation, scheduling, and context switching. The operating system is unaware of user-level threads and treats the entire process as a single-threaded entity.
Benefits of Thread in Operating System
 Responsiveness: If a process is divided into multiple threads, then when one thread completes its execution, its output can be returned immediately.
 Faster context switch: Context switch time between threads
is lower compared to the process context switch. Process
context switching requires more overhead from the CPU.
 Effective utilization of multiprocessor system: If we have
multiple threads in a single process, then we can schedule
multiple threads on multiple processors. This will make process
execution faster.
 Resource sharing: Resources like code, data, and files can
be shared among all threads within a process. Note: Stacks
and registers can’t be shared among the threads. Each thread
has its own stack and registers.
 Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes must follow specific communication techniques to communicate with each other.
 Enhanced throughput of the system: If a process is
divided into multiple threads, and each thread function is
considered as one job, then the number of jobs completed per
unit of time is increased, thus increasing the throughput of the
system.

CPU Scheduling in Operating Systems

CPU scheduling is the process used by the operating system to decide which task or process gets to use the CPU at a particular time. This is important because a CPU can only handle one task at a time, but there are usually many tasks that need to be processed. CPU scheduling serves the following purposes:
 Maximize the CPU utilization
 Minimize the response and waiting time of the process.
What is the Need for a CPU Scheduling
Algorithm?
CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU would otherwise remain idle, the OS selects one of the processes available in the ready queue.
In multiprogramming, if the long-term scheduler selects too many I/O-bound processes, then most of the time the CPU remains idle. An effective scheduling algorithm improves resource utilization.
Terminologies Used in CPU Scheduling
 Arrival Time: The time at which the process arrives in the
ready queue.
 Completion Time: The time at which the process completes
its execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion
time and arrival time.
Turn Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turn around
time and burst time.
Waiting Time = Turn Around Time – Burst Time
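As a quick worked example with made-up numbers: suppose a process arrives at time 2, has a CPU burst of 5 units, and completes at time 12. Then Turnaround Time = 12 − 2 = 10, and Waiting Time = 10 − 5 = 5, i.e., the process spent 5 time units waiting in the ready queue.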

Things to Take Care of While Designing a CPU Scheduling Algorithm
Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors.
 CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the system load.
 Throughput: The number of processes performed and completed per unit of time is called throughput. Throughput may vary depending on the length or duration of the processes.
 Turnaround Time: For a particular process, an important measure is how long it takes to finish that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting for memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
 Waiting Time: The Scheduling algorithm does not affect the
time required to complete the process once it has started
performing. It only affects the waiting time of the process i.e.
the time spent in the waiting process in the ready queue.
 Response Time: In an interactive system, turnaround time is not the best measure. A process may produce some output early and continue computing new results while previous results are shown to the user. Thus another measure is the time from the submission of a request until the first response is produced. This measure is called response time.
Different Types of CPU Scheduling Algorithms
There are mainly two types of scheduling methods:
 Preemptive Scheduling: Preemptive scheduling is used
when a process switches from running state to ready state or
from the waiting state to the ready state.
 Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.


Please refer to Preemptive vs Non-Preemptive Scheduling for details.


CPU Scheduling Algorithms
Let us now learn about these CPU scheduling algorithms in operating
systems one by one:
 FCFS – First Come, First Serve (a small FCFS sketch follows this list)
 SJF – Shortest Job First
 SRTF – Shortest Remaining Time First
 Round Robin
 Priority Scheduling
 HRRN – Highest Response Ratio Next
 Multiple Queue Scheduling
 Multilevel Feedback Queue Scheduling
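Before looking at each algorithm in detail, here is a small sketch of the simplest one, FCFS, with made-up arrival and burst times; it applies the turnaround and waiting time formulas given earlier.

```c
/* A minimal sketch of FCFS (First Come, First Serve) scheduling:
   processes run in arrival order; waiting and turnaround times are
   computed from the formulas above. The job data are hypothetical
   and assumed sorted by arrival time. */
#include <stdio.h>

struct proc { int arrival, burst; };

int main(void) {
    struct proc p[] = {{0, 5}, {1, 3}, {2, 8}};        /* made-up jobs */
    int n = 3, time = 0;
    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival) time = p[i].arrival;  /* CPU idle gap */
        int completion = time + p[i].burst;
        int turnaround = completion - p[i].arrival;    /* TAT = CT - AT */
        int waiting    = turnaround - p[i].burst;      /* WT = TAT - BT */
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               i + 1, completion, turnaround, waiting);
        time = completion;                             /* next job starts */
    }
    return 0;
}
```

For the hypothetical jobs above, the sketch prints waiting times 0, 4, and 6, showing how later arrivals wait behind earlier ones under FCFS.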
