Unit-3 Process Management

Chapter Outcomes:
• Explain the functions carried out in a given process state.
• Describe the function of a given component of the process stack in the PCB.
• Explain the characteristics of a given multithreading model.
• Describe the method of executing a given process command, with an example.

Learning Objectives:
• To understand Concepts of Process, its States and PCB
• To learn Process Management and Scheduling
• To study Queues and Schedulers with their Types
• To learn Concept of Inter-Process Communication (IPC)
• To study Basic Concepts of Threads and Multithreading
• To understand various Process Execution Commands such as ps, wait and kill
Process
• A process is defined as "an entity which represents the basic unit of work to be
implemented in the system".
• A process is also defined as "a program under execution, which competes for CPU time and
other resources."
• A process is a program in execution. A process is also called a job, task or unit of work.
• The execution of a process must progress in a sequential fashion: at any time, at
most one instruction is executed on behalf of the process.
• A process is an instance of an executing program, including the current values of the
program counter, registers and variables.
• Logically, each process has its own separate virtual CPU; in reality, the real CPU switches from
one process to another.
• A process is an activity: it has a program, input, output and a state.
Process in Memory
• A process is defined as "a program under execution, which
competes for CPU time and other resources."
Each process has the following sections:
• A text section that contains the program code.
• A data section that contains global and static variables.
• The heap, used for dynamic memory allocation and managed
via calls to new, delete, malloc, free, etc.
• The stack, used for local variables. The process stack contains
temporary data such as subroutine parameters, return addresses
and temporary variables. Space on the stack is reserved for local
variables when they are declared (at function entry or elsewhere,
depending on the language), and the space is freed when the
variables go out of scope.
• The program counter and the contents of the processor's registers,
which together represent the current activity of the process.
Difference between Program and Process
Program Process
A program is a series of instructions to perform Process is a program in execution.
a particular task.
Program is given as a set of process. In some Process is a part of a program. Process is the
cases we may divide a problem into number of part where logic of that particular program
parts. At these times we write a separate logic exists
for each part known as process
It is stored in secondary storage. Process is stored in memory.

Program is set of instructions to be executed by Process is a program in execution.


processor.
Program is static entity as it made up of program Process is dynamic entity.
statements.
Program occupy fixed place in storage or main Process changes its state during execution
memory.
Process State
• In a multiprogramming system, many
processes are managed by the
operating system, but at any instant
of time only one process executes on
the CPU. The other processes wait for
their turn.
• The current activity of a process is
known as its state. As a process
executes, it changes state. The
process state is an indicator of the
nature of the current activity of the
process.
Process State
New State: A process that has just been created but has not yet been admitted to the pool of
executable processes by the operating system. Every newly requested operation enters the
system as a new-born process.

Ready State: The process is ready to execute but is waiting for the CPU. After completion of
its input and output, a process returns to the ready state, where it waits for the processor
to execute it.

Running State: The process is currently being executed. When the program is being executed
by the CPU, the process is in the running state, and while running it may produce output on
the screen.
Process State
Waiting or Blocked: A process that cannot execute until some event occurs, such as an I/O
completion. When a process is waiting for an input or output operation, it is in the waiting
state. In this state the process is not under execution, and when the awaited event occurs
(for example, the user provides input) it moves back to the ready state.

Terminated State: After the process completes its work, it is terminated by the operating
system. After executing the whole process, the processor also de-allocates the memory which
was allocated to the process.
Process State
Types of Tables maintained by Operating Systems
Memory Tables are used to keep track of both main and secondary memory. Some of main
memory is reserved for use by the operating system; the remainder is available for use by
processes.

I/O Tables are used by the operating system to manage the I/O devices and channels of the
computer system. At any given time, an I/O device may be available or assigned to a particular
process.

File Tables provide information about the existence of files, their location on secondary
memory, their current status and other attributes.

Process Tables are used to manage processes. A process must include a program or set of
programs to be executed.
Process Control Block (PCB)
• Each process is represented in the
operating system by a Process Control Block
(PCB) also called as Task Control Block
(TCB).
• When a process is created, the operating
system creates a corresponding PCB, which is
released when the process terminates.
• A PCB stores descriptive information
pertaining to a process, such as its state,
program counter, memory management
information, information about its
scheduling, allocated resources, accounting
information, etc. that is required to control
and manage a particular process.
Process Control Block (PCB)
1. Process Number: Each process is identified by its process number, called the Process
Identification Number (PID). Every process has a unique process-id, provided by the OS,
through which it is identified; no two processes can have the same process-id.

2. Priority: Each process is assigned a level of priority that corresponds to the relative
importance of the event that it services. Process priority is the preference of one process
over another for execution. Priority may be given by the user/system manager, or it may
be assigned internally by the OS. This field stores the priority of the particular process.

3. Process State: This information indicates the current state of the process. The state may be
new, ready, running, waiting, halted, and so on.

4. Program Counter: The counter indicates the address of the next instruction to be executed
for this process.
Process Control Block (PCB)
5. CPU Registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-purpose
registers, plus any condition-code information. Along with the program counter, this state
information must be saved when an interrupt occurs, to allow the process to be continued
correctly afterward.

6. CPU Scheduling Information: This information includes the process priority, pointers to
scheduling queues, and any other scheduling parameters.

7. Memory Management Information: This information may include such information as the
value of the base and limit registers, the page tables, or the segment tables, depending on the
memory system used by the operating system.

8. Accounting Information: This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.
Process Control Block (PCB)
9. I/O Status Information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.

10. File Management: It includes information about all open files, access rights etc.

11. Pointer: Pointer points to another process control block. Pointer is used for maintaining
the scheduling list.
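The PCB fields listed above can be sketched as a simple record. This is only an illustrative model in Python; the field names and types are assumptions for the sketch, not the layout of any real operating system:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block holding the fields described above."""
    pid: int                        # 1. process number (unique id)
    priority: int = 0               # 2. priority
    state: str = "new"              # 3. process state
    program_counter: int = 0        # 4. address of next instruction
    registers: dict = field(default_factory=dict)    # 5. CPU registers
    open_files: list = field(default_factory=list)   # 9/10. I/O and file info
    cpu_time_used: float = 0.0      # 8. accounting information

pcb = PCB(pid=1610, priority=80)
pcb.state = "ready"                 # state transitions update the PCB
print(pcb.pid, pcb.state)
```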
Process Creation
When a new process is to be added to those currently being managed, the operating system
builds the data structures used to manage the process and allocates address space in main
memory to it. This is the creation of a new process.
A parent process creates child processes, which in turn create other processes, forming a tree
of processes.
Resource sharing
• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
Execution
• Parent and children execute concurrently
• Parent waits until children terminate
Process Creation
UNIX examples
• fork system call creates new process
• exec system call used after a fork to
replace the process’ memory space
with a new program
The system call CreateProcess() in
Windows and fork() in Unix tells the
operating system to create a new
process.
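The fork-then-exec sequence can be sketched in Python, whose os module wraps the Unix calls named above (the echoed message is just an example):

```python
import os

pid = os.fork()                     # create a new (child) process
if pid == 0:
    # Child: replace this process image with a new program
    os.execvp("echo", ["echo", "hello from child"])
else:
    # Parent: wait for the child to terminate
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.WEXITSTATUS(status))
```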
Process Termination
Depending upon the condition, a process may be terminated either normally or forcibly by
another process.

Normal termination occurs when the process completes its task and invokes an appropriate
system call (ExitProcess() in Windows, exit() in Unix) to tell the operating system that it is
finished.

A process may also cause the abnormal termination of another process. For this, the process
invokes an appropriate system call (TerminateProcess() in Windows, kill() in Unix) that tells
the operating system to kill the other process.
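The forcible-termination path can be sketched the same way; os.kill here wraps the Unix kill call mentioned above, and the sleeping child loop is purely illustrative:

```python
import os, signal, time

pid = os.fork()
if pid == 0:
    while True:                     # child: run until forcibly terminated
        time.sleep(0.1)
else:
    time.sleep(0.2)                 # give the child a moment to start
    os.kill(pid, signal.SIGTERM)    # forcible termination of another process
    _, status = os.waitpid(pid, 0)
    print("killed by signal:", os.WTERMSIG(status))
```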
Process Scheduling
• Many programs can be active at a time on the computer, but there is a single CPU.
Scheduling is what allows all of these programs to run concurrently.
• Processes are the programs executed by the user according to their requests. The CPU
executes the processes according to some rules or schedule.
• Scheduling gives each process some amount of CPU time.
• When two or more processes compete for the CPU at the same time, a choice has to be
made as to which process to allocate the CPU next. This procedure of determining the next
process to be executed on the CPU is called process scheduling, and the module of the
operating system that makes this decision is called the scheduler.
• Processor scheduling is one of the primary functions of a multiprogramming operating
system.
Process Scheduling
Scheduling Queue
• On a uniprocessor system, there will never be
more than one running process. If there are
more processes, the rest have to wait until
the CPU is free and can be rescheduled.
• The processes, which are ready and waiting to
execute, are kept on a list called the ready queue.
• The list is generally a linked list. A ready queue
header will contain pointers to the first and last
PCB’s in the list. Each PCB has a pointer field
which points to the next process in the ready
queue.
• The list of processes waiting for a particular I/O
device is called a device queue. Each device has
its own device queue.
Scheduling Queue
• A process enters the system from
the outside world and is put in the
ready queue. It waits in the ready
queue until it is selected for the CPU.
After running on the CPU, it waits
for an I/O operation by moving to an
I/O queue.
• Eventually it is served by the I/O
device and returns to the ready
queue. A process continues this CPU,
I/O cycle until it finishes and then it
exits from the system.
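The ready-queue cycle described above can be modelled with a plain FIFO; collections.deque stands in for the linked list of PCBs, and the PIDs are made up for illustration:

```python
from collections import deque

ready_queue = deque()                 # FIFO list of processes ready to run

def admit(pid):
    ready_queue.append(pid)           # a ready process joins at the tail

def dispatch():
    return ready_queue.popleft()      # scheduler picks the head of the queue

for pid in (30, 56, 77):
    admit(pid)
running = dispatch()                  # 30 runs first (FIFO order)
admit(running)                        # after its I/O it returns to the ready queue
print(list(ready_queue))              # [56, 77, 30]
```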
Schedulers
• Schedulers are special system software
which handles process scheduling in
various ways. Their main task is to select
the jobs to be submitted into the system
and to decide which process to run.
• In other words, the job of process
scheduling is done by a software routine
(module) called the scheduler.

Schedulers are of three types:
• Long Term Scheduler
• Short Term Scheduler
• Medium Term Scheduler
Types of Schedulers
• Long Term Scheduler (performance): Decides how many processes should be admitted
to the ready state; this determines the degree of multiprogramming. Once a decision is
taken it lasts for a long time, hence the name long term scheduler.

• Short Term Scheduler (context switching time): Decides which process is to be executed
next and then calls the dispatcher. The dispatcher is the software that moves a process
from ready to running and vice versa; in other words, it performs the context switch.

• Medium Term Scheduler (swapping time): Takes the suspension decision. The medium
term scheduler performs swapping, that is, moving a process from main memory to
secondary memory and vice versa.
Difference
Short Term Scheduler:
• Selects, from the ready queue, the jobs or processes that are ready to execute and
allocates the CPU to one of them.
• It is the CPU scheduler.
• Frequency of execution is high (milliseconds).
• Speed is very fast.

Long Term Scheduler:
• Picks up jobs from the pool and loads them into main memory for execution.
• It is the job scheduler.
• Frequency of execution is low (a few minutes).
• Speed is slower than the short term scheduler.

Medium Term Scheduler:
• Removes a process from main memory and reloads it afterwards when required.
• It is the process-swapping scheduler.
• Execution frequency is medium; it is called whenever required.
Difference
Short Term Scheduler:
• It deals with the CPU.
• It provides lesser control over the degree of multiprogramming.
• It is minimal in time-sharing systems.

Long Term Scheduler:
• It deals with main memory for loading processes.
• It controls the degree of multiprogramming.
• It is absent or minimal in time-sharing systems.

Medium Term Scheduler:
• It deals with main memory for removing processes and reloading them when required.
• It reduces the degree of multiprogramming.
• Time-sharing systems use a medium term scheduler.
Context Switch
• When CPU switches to another process, the system must save the state of the old process
and load the saved state for the new process. This task is known as a context switch.
• CPU switching from one process to another process is called a context switch.
• Context switch times are highly dependent on hardware support. Its speed varies from
machine to machine, depending on the memory speed, the number of registers that must
be copied and the existence of special instructions.
• This enables multiple processes to share a single CPU. The context switch is an essential
feature of a multitasking operating system.
• When the process is switched, the following information is stored:
• Program Counter
• Scheduling Information
• Base and limit register value
• Currently used registers
• Changed State
• I/O State
• Accounting
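A toy illustration of context switching, assuming nothing about a real kernel: each Python generator plays the role of a process, and yield is the point where its state is saved so the dispatcher can resume another one:

```python
def process(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"      # yield = state saved, CPU given up

# Round-robin dispatcher switching between two "processes"
ready = [process("A", 2), process("B", 2)]
trace = []
while ready:
    p = ready.pop(0)                  # pick the next ready process
    try:
        trace.append(next(p))         # run it until it yields
        ready.append(p)               # context switch: back of the queue
    except StopIteration:
        pass                          # process terminated
print(trace)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```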
Context Switch
Inter-Process Communication (IPC)
• Inter-process communication (IPC) is a set of programming interfaces that allow a
programmer to coordinate activities among different program processes that can run
concurrently in an operating system.
• Inter-process communication (IPC) is a set of techniques for the exchange of data
among multiple processes.
• IPC is required in all multiprocessing systems, but it is not supported by single-tasking
operating systems such as MS-DOS.
• IPC is particularly useful in a distributed environment where the communicating processes
may reside on different computers connected by a network, for example a chat program
used on the World Wide Web. Such IPC is best provided by a message passing system.
Inter-Process Communication (IPC)
There are two fundamental models of inter-process
communication:

(a) Message Passing Model: Data or information is
exchanged in the form of messages.

(b) Shared Memory Model: Two processes exchange
data or information through a shared region of
memory; they can read and write data from and
to this region.
Message Passing
Message Passing:
In this model, communication takes place by
exchanging messages between cooperating processes.
It allows processes to communicate and synchronize
their actions without sharing the same address space. It
is particularly useful in a distributed environment where
the communicating processes may reside on different
computers connected by a network. Communication
requires sending and receiving messages through the
kernel. The processes that want to communicate with
each other must have a communication link between
them; between each pair of processes there exists
exactly one communication link.
Shared Memory
Shared memory:
In this model, a region of memory residing in the address
space of the process that creates the shared memory
segment can be accessed by all processes that want to
communicate with it. All processes using the shared
memory segment must attach it to their own address
space. The processes exchange information by reading
and/or writing data in the shared memory segment.
The form and location of the data are determined by the
communicating processes themselves, not by the
operating system. The processes are also responsible for
ensuring that they do not write to the same location
simultaneously. After the shared memory segment is
established, all accesses to it are treated as routine
memory accesses, without assistance from the kernel.
Direct Communication
• Processes that want to communicate must have a way to refer to each other. They can use
either direct or indirect communication.
• With direct communication, each process that wants to communicate must explicitly name
the recipient or sender of the communication.
• (i) send (A, message): Send a message to process A.
• (ii) receive (B, message): Receive a message from process B.

Properties of Direct communication:


• A link is established automatically between every pair of processes that want to
communicate. The processes need to know only each other's identity to communicate.
• A link is associated with exactly two processes.
• Exactly one link exists between each pair of processes.
Indirect Communication
• With indirect communication, messages are sent to and received from mailboxes or ports.
• A mailbox is a place where messages can be placed by processes and from which messages
can be removed.
• Each mailbox has a unique identification. Two processes can communicate only if they
share a mailbox. The send and receive primitives are defined as follows:
• (i) send (A, message): Send a message to mailbox A.
• (ii) receive (A, message): Receive a message from mailbox A.

Properties of Indirect Communication


• A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, there may be a number of different links,
with each link corresponding to one mailbox.
Synchronization
Communication between processes takes place by calls to send and receive primitives. There
are different design options for implementing each primitive.
Message passing may be blocking or non-blocking - also known as synchronous and
asynchronous.
• Blocking Send: The sending process is blocked until the message is received by the
receiving process or by the mailbox.
• Non-blocking Send: The sending process sends the message and resumes operation.
• Blocking Receive: The receiver blocks until a message is available.
• Non-blocking Receive: The receiver retrieves either a valid message or a null.
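The blocking and non-blocking receive variants can be sketched with Python's queue module, where the block flag selects the behaviour (the mailbox and message names are illustrative):

```python
import queue

mbox = queue.Queue()                 # the mailbox

try:
    mbox.get(block=False)            # non-blocking receive on an empty mailbox
    got = None
except queue.Empty:
    got = "no message"               # ...returns "null" instead of waiting

mbox.put("ping")                     # non-blocking send: resumes immediately
msg = mbox.get(block=True)           # blocking receive: waits until a message exists
print(got, "/", msg)
```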
Buffering
Whether communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue. Such a queue can be implemented in three ways:
• Zero Capacity: The queue has maximum length 0, so the link cannot hold any waiting
message. In this case the sender must block until the recipient receives the message.
• Bounded Capacity: The queue has finite length n, so at most n messages can reside in it.
If the queue is not full when a new message is sent, the message is placed in the queue
and the sender can continue execution without waiting. The link has a finite capacity,
however: if it is full, the sender must block until space is available in the queue.
• Unbounded Capacity: The queue has potentially infinite length, so any number of
messages can wait in it. The sender never blocks.
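Bounded capacity can be sketched with a queue of maxsize n: the sender continues while room remains, and a full queue rejects (or, for a blocking sender, delays) the next message. Names are illustrative:

```python
import queue

buf = queue.Queue(maxsize=2)         # bounded capacity: at most n = 2 messages

buf.put("m1")                        # queue not full: sender continues
buf.put("m2")                        # queue now holds n messages
try:
    buf.put("m3", block=False)       # full queue: a blocking sender would wait here
    overflowed = False
except queue.Full:
    overflowed = True

print(buf.qsize(), overflowed)       # 2 True
```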
Critical Section Problem
• Each process contains two sections: the critical section, where the process may need to
access common variables or objects, and the remainder section, containing instructions
for processing sharable or local objects. Each process must request permission to enter
its critical section; the section of code implementing this request is the entry section. If
a process is granted permission in the entry section, it enters the critical section and
works with the common data. During this time all other processes wanting the same
data are kept waiting.
• The critical section is followed by an exit section. Once the process completes its task, it
releases the common data in the exit section. Then the remaining code, placed in the
remainder section, is executed.
• Two processes cannot execute their critical sections at the same time. The critical section
problem is to design a protocol that the processes can use to cooperate, i.e. a protocol
allowing only one process at a time inside the critical section.
Critical Section Problem
A semaphore is a synchronization tool. A semaphore S is an integer variable which is initialized and
then accessed only through two standard operations: wait() and signal(). All modifications to the
integer value of the semaphore in the wait() and signal() operations are performed by only one
process at a time.
Working of a semaphore to solve a synchronization problem: consider two concurrently running
processes P1 and P2, where P1 contains statement S1 and P2 contains statement S2. When we want
to execute statement S2 only after execution of statement S1, we can implement this by sharing a
common semaphore, synch, between the two processes. Semaphore synch is initialized to 0. To
enforce the sequence, the code of processes P1 and P2 is modified as follows:
Process P1 contains:
S1;
signal (synch);
Process P2 contains:
wait (synch);
S2;
Critical Section Problem
As synch is initialized to 0, process P2 will wait and process P1 will execute. Once process P1 completes
execution of statement S1, it performs the signal() operation, which increments the value of synch. The
wait() operation then sees the incremented value, and statement S2 of process P2 starts executing.
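The same synch pattern can be sketched with Python's threading.Semaphore, where acquire() plays the role of wait() and release() the role of signal():

```python
import threading

synch = threading.Semaphore(0)       # semaphore synch initialized to 0
order = []

def p1():
    order.append("S1")               # statement S1
    synch.release()                  # signal(synch)

def p2():
    synch.acquire()                  # wait(synch): blocks until P1 signals
    order.append("S2")               # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start(); t1.start()               # even if P2 starts first, S2 runs after S1
t1.join(); t2.join()
print(order)                         # ['S1', 'S2']
```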
Threads
• A thread, sometimes called a Light Weight Process (LWP), is a basic unit of CPU utilization;
it comprises a thread ID, a program counter, a register set and a stack.
• A thread is defined as "a unit of concurrency within a process that has access to the entire
code and data parts of the process". Thus, threads of the same process can share their code
and data with one another.
• It shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals.
• A word processor may have a thread for displaying graphics, another thread for reading
keystrokes from the user and a third thread for performing spelling and grammar checking
in the background etc. Threads play a vital role in Remote Procedure Call (RPC) systems
Single and Multithreaded Process
Thread
Advantages of Threads:
• Threads improve the performance (throughput, computational speed, responsiveness, or
some combination) of a program.
• Concurrent operations can be achieved using threads within a process.
• Multiple threads are useful in a multiprocessor system, where threads run concurrently on
separate processors.
• Multiple threads also improve program performance on single-processor systems by
permitting the overlap of input/output or other slow operations with computational
operations.
• Threads minimize context switching time.
• A process with multiple threads makes a good server, for example a print server.
• Because threads can share common data, they do not need to use inter-process
communication.
• Context switching is fast when working with threads.
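Because threads share the data section of their process, they can communicate through ordinary variables instead of IPC. A minimal sketch with two threads updating a shared counter under a lock:

```python
import threading

counter = 0                          # shared data section of the process
lock = threading.Lock()

def work():
    global counter
    for _ in range(10_000):
        with lock:                   # protect the shared variable
            counter += 1

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                       # 20000
```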
Difference between Process and Thread
User Thread
• A user-level thread is a thread within a process which the OS does not know about.

• In the user-level thread approach, the cost of a context switch between threads is lower,
since the operating system itself does not need to be involved: no extra system calls are
required.

• A user-level thread is represented by a program counter, registers, a stack, and a small
thread control block (TCB).

• Programmers typically use a thread library to simplify management of threads within a
process.

• Creating a new thread, switching between threads, and synchronizing threads are done via
function calls into the library. This provides an interface for creating and stopping threads,
as well as control over how they are scheduled.
Kernel Thread
• In systems that use kernel-level threads, the operating system itself is aware of each
individual thread.

• Kernel threads are supported and managed directly by the operating system.

• A context switch between kernel threads belonging to the same process requires only
the registers, program counter, and stack to be changed; the overall memory management
information does not need to be switched, since both threads share the same
address space. Thus context switching between two kernel threads is slightly faster than
switching between two processes.

• Kernel threads can be expensive because system calls are required to switch between
threads. Also, since the operating system is responsible for scheduling the threads, the
application does not have any control over how its threads are managed.
Thread Design Space
Difference between User Thread and Kernel Thread
User Level Thread:
• User threads are implemented by users.
• The operating system does not recognize user-level threads.
• User-level threads are faster to create and manage.
• Implementation of user threads is easy.
• Context switch time is less.

Kernel Level Thread:
• Kernel threads are implemented by the operating system.
• Kernel threads are recognized by the operating system.
• Kernel-level threads are slower to create and manage.
• Implementation of kernel threads is complicated.
• Context switch time is more.
Difference between User Thread and Kernel Thread
User Level Thread:
• If one user-level thread performs a blocking operation, the entire process is blocked.
• User-level threads can run on any operating system.
• Examples: Java threads, POSIX threads.

Kernel Level Thread:
• If one kernel thread performs a blocking operation, another thread can continue execution.
• Kernel-level threads are specific to the operating system.
• Examples: Windows, Solaris.
Multithreading Models
• One-to-One
• Many-to-One
• Many-to-Many
One-to-One Model
• The one-to-one model maps each user thread to a kernel thread.
• It provides more concurrency than the many-to-one model by allowing another thread to
run when a thread makes a blocking system call; it also allows multiple threads to run in
parallel on multiprocessors.
• Windows NT, Windows 2000 and OS/2 implement the one-to-one model.
One-to-One Model
Advantages of the One-to-One Model:
• More concurrency, because multiple threads can run in parallel on multiple CPUs.
• Multiple threads can run in parallel.
• Less complication in processing.

Disadvantages of the One-to-One Model:
• Thread creation involves Light Weight Process creation.
• A kernel thread is created with every user thread.
• The total number of threads may be limited.
• Kernel threads are an overhead.
• This overhead can reduce the performance of the system.
Many-to-One Model
• The many-to-one model maps many user-level threads to one kernel thread. Thread
management is done in user space, so it is efficient, but the entire process will block if a
thread makes a blocking system call.
• Only one thread can access the kernel at a time, so multiple threads are unable to run in
parallel on multiprocessors.
• Green threads, a thread library available for Solaris 2, uses this model.
Many-to-One Model
Advantages of the Many-to-One Model:
• Totally portable.
• Easy to implement, with few system dependencies.
• Mainly used in language systems and portable libraries.
• Efficient in terms of performance.
• One kernel thread controls multiple user threads.

Disadvantages of the Many-to-One Model:
• Cannot take advantage of parallelism.
• One blocking call blocks all user threads.
Many-to-Many Model
• The many-to-many model multiplexes many user-level threads onto a smaller or equal
number of kernel threads.
• The number of kernel threads may be specific to either a particular application or a
particular machine.
• This model allows the developer to create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor.
• The one-to-one model allows for greater concurrency, but the developer has to be careful
not to create too many threads within an application.
• Solaris 2, IRIX, HP-UX and Tru64 UNIX support this model.
Many-to-Many Model
Advantages of the Many-to-Many Model:
• As many threads as the user requires can be created.
• Multiple kernel threads, up to the number of user threads, can be created.

Disadvantages of the Many-to-Many Model:
• Multiple kernel threads are an overhead for the operating system.
• Managing the mapping between user and kernel threads adds complexity, which can
reduce performance.
Ps command
When a program runs on the system, it is referred to as a process. Linux is a multitasking
operating system, which means that more than one process can be active at once. The ps
command is used to find out the status of processes; it displays the characteristics of a
process.
Syntax: ps [option]
Example: By default, the ps command shows only the processes that belong to the current user
and that are running on the current terminal.
$ ps
PID TTY TIME CMD
30 01 0:03 sh
56 01 0:00 ps
$_
Each line shows the PID, the terminal with which the process is associated, the cumulative
processor time that has been used since the process was started, and the process name.
Ps command

Example:
$ ps -f
UID PID PPID C STIME TTY TIME CMD
logon 1610 1603 0 14:09 pts/0 00:00:00 bash
logon 1715 1610 0 14:21 pts/0 00:00:00 ps -f
Ps command
$ ps -l
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 S 1000 1610 1603 8 80 0 - 1858 wait pts/0 00:00:00 bash
0 R 1000 1665 1610 0 80 0 - 1193 - pts/0 00:00:00 ps

Attributes of the -l option:
F: System flags assigned to the process by the kernel.
S: The state of the process (O = running on processor; S = sleeping; I=idle process; X: process
waiting for memory; R = runnable, waiting to run; Z = zombie, process terminated but parent
not available; T = process stopped).
UID: The user responsible for launching the process.
PID: The process ID of the process.
PPID: The PID of the parent process (if a process is started by another process).
Ps command
Attributes of the -l option:
C: Processor utilization over the lifetime of the process.
PRI: The priority with which the process is running (higher numbers mean lower
priority).
NI: The nice value, which is used for determining priorities.
ADDR: The memory or disk address of the process.
SZ: The size of the process in memory.
WCHAN: The event or resource the process is waiting for when it must wait for system
resources that are not yet available; for running processes this column is blank.
TTY: The terminal device from which the process was launched.
TIME: The cumulative CPU time required to run the process.
CMD: The name of the program that was started.
Wait Command
• wait is a built-in shell command which waits for a given process to complete, and returns
its exit status. wait waits for the process identified by process ID pid (or the job specified by
job ID jobid), and reports its termination status.
• If an ID is not given, wait waits for all currently active child processes, and the return status
is zero. If the ID is a job specification, wait waits for all processes in the job's pipeline.
Syntax: wait pid
Example: wait 2112
• Wait for process 2112 to terminate, and return its exit status.
Sleep Command
• The sleep command is used to delay for a specified amount of time.
• The sleep command pauses for an amount of time defined by NUMBER. SUFFIX may be "s"
for seconds (the default), "m" for minutes, "h" for hours, or "d" for days.
Syntax: sleep NUMBER[suffix]
Example: sleep 10
This delays for 10 seconds.
Exit command
• The exit command terminates a script, just as in a C program.
• It can also return a value, which is available to the script's parent process. Issuing the exit
command at the shell prompt will cause the shell to exit.
Syntax: exit
Thank You

Vijay Patil
Department of Computer Engineering (NBA Accredited)
Vidyalankar Polytechnic
Vidyalankar College Marg, Wadala(E), Mumbai 400 037
E-mail: vijay.patil@vpt.edu.in
