
UNIX Shell Scripting

Y.V.S Prasad
An Operating System is the interface between the
user (software) and the computer (hardware).

USER <--> OS <--> COMPUTER
UNIX is a multi-user, multiprogramming
operating system: it permits multiple
people to run multiple programs at the
same time.
 I/O management
 data management
 command execution
 program development tools
 portability
 time sharing
 security
 communications
 accounting
 graphics
 Internet
BASIC COMMANDS
$ clear            clears the screen
$ tput clear       clears the screen
$ date             date, time and time zone
$ date '+%D'       date in mm/dd/yy
$ date '+%T'       time in hh:mm:ss
$ date '+%Z'       time zone
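As a quick sketch (assuming a POSIX-compatible date), the
format specifiers can be combined into a single string:
$ date '+Today is %d-%m-%Y, %H:%M:%S %Z'
prints, for example, "Today is 18-12-2008, 15:08:10 IST";
the actual output depends on the current date and time zone.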
$ pwd              present working directory
$ logname          login name of the user
$ tty              terminal device name
$ uname            name of the UNIX system
$ uname -r         current release of UNIX
$ uname -n         host (node) name
$ uname -a         all available information about
                   the UNIX system
$ passwd           changes the password
$ who              lists the users logged in
Col. 1: user names
Col. 2: device names of the terminals
Col. 3, 4, 5: date and time of logging in
Col. 6: machine name
$ who -Hu          -u gives more detailed info.
                   and -H prints column headers
$ cal              calendar of the current month
$ cal 03           calendar for the year 3 AD
$ cal 03 2008      calendar for March 2008
$ cal 09 1752      shows the 11 days that were dropped
                   from September 1752 when the
                   Gregorian calendar was adopted.
$ echo "computer"     displays the word 'computer'
$ echo "computer\c"   displays the word 'computer' and
                      the cursor stays on the same line
OTHER FORMATS:
\a bell          \t tab
\b backspace     \\ back slash
\c no new line   \0n ASCII character (n is octal)
$ printf "my current shell is %s\n" $SHELL
OTHER FORMATS:
%d decimal
%o octal
%f float
%s string
%30s string right-justified in a field 30 characters wide
$ type date
/usr/bin/date
$ man date
$ man pwd
$ man man
MANUAL SECTIONS:
1. User Programs          5. Admin. File Formats
2. Kernel System Calls    6. Games
3. Library Functions      7. Macros
4. Special Files          8. Admin. Commands
$ stty -a          displays all settings
$ stty -echo       suppresses the echo of input
$ stty echo        restores the echo
$ stty -echoe      backspace does not erase the character
$ stty echoe       backspace erases the character
$ stty intr \^c    control-c is the interrupt key
$ stty eof \^a     control-a is the eof character
$ stty sane        restores sanity to the terminal.
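A common, hedged use of these settings inside a script is to
turn the echo off while reading a password and then restore
the terminal. This is only a sketch for a POSIX shell; the
prompt text and variable names are illustrative:
#!/bin/sh
old_settings=`stty -g`        # save the current settings
stty -echo                    # stop echoing what is typed
printf "Enter password: "
read secret
stty "$old_settings"          # restore the saved settings
echo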
COMBINING THE COMMANDS
$ date ; pwd
$ (date ; pwd)
$ (date ; pwd ) > newlist
The combined output of the two commands is sent to
the file “newlist”
$ exit             exits from the current shell
$ cat > abc        creating a file abc
………….
………….
^d
$ cat abc          displaying the contents of abc
$ cat -v abc       displays non-printable chars.
$ cat -n abc       numbers the lines
$ cat abc def > ghi   merges the two files abc and def into ghi
$ cp abc abc1      copies abc into abc1

$ cp abc abc1 abc2 d1   when copying multiple files, the
                        last argument must be a directory
$ cp -i abc abc1        interactive copying
$ cp -R progs newprogs  recursive copying
$ rm abc           removes the file abc
$ rm -i abc        interactive removal
$ rm -r d1         recursive removal
$ rm -R d1         recursive removal
$ rm -f abc        forced removal
$ mv abc abc1      renames the file abc to abc1
$ mv d1 d2         renames the directory d1 to d2
                   (no recursive option is needed)
$ mv a b c d1      moves a, b and c to the directory d1
$ mv -i abc abc1   interactive renaming
$ lp abc           prints the file abc
$ lp -dlaser abc   prints on the printer named laser
$ lp -n3 -m abc    prints 3 copies and mails the
                   user a message
$ lp -t"Chapter1" abc   prints with the title Chapter1
$ lpstat           gives the status of print jobs
$ cancel request-id     cancels the given print request
When you log on to the system, UNIX automatically
places you in a directory called the 'Home Directory'.

To know your home directory:

$ echo $HOME       /home/prasad
~/foo              here ~ identifies the home directory
~prasad            home directory of prasad
$ mkdir d1         creates the directory d1
$ mkdir -p d1/d2/d3   creates the path d1/d2/d3
$ rmdir d1         removes d1 (d1 must be empty)
$ rmdir -p d1/d2/d3   removes the tree (must be empty)
$ cd /home/kumar   changes to the directory /home/kumar
$ cd ..            moves to the parent directory
$ cd               moves to the home directory
$ ls               lists all files in ascending order
$ ls -r            lists in descending order
$ ls -a            lists all files including . (hidden) files
$ ls -C            multi-column (width-wise) listing
$ ls -F            marks * (executable), / (directory), @ (link)
$ ls -i            along with i-node nos.
$ ls -l            long format
$ ls -t            sorts files by modification time
$ ls -s            lists file sizes in blocks
$ ls -x            multicolumn output
$ ls -R            recursive listing
$ ls -xR           combined effect of -x and -R
Every file or directory is associated with 3 types of
people: the User (the owner of the file), the Group
(people who are close to the owner, whose priority
comes after the owner and who perform a common
task) and Others (who are neither the User nor the
Group).
The first column of the $ ls -l output gives the
permissions each category holds.

Permission    file                    directory
read (r)      view, copy, compile     view the contents
write (w)     modify                  add or delete files
execute (x)   run                     change to the dir.
Symbolic Method:
u user      + give              r read
g group     - remove            w write
o others    = absolute (set)    x execute
a all

$ chmod u+x,g-r abc   adds execute for the user and
                      removes read from the group
Absolute Method:
4 read
2 write
1 execute
0 no permission
$ chmod 777 abc    read, write, execute for all
$ umask            shows the default permission mask
$ umask 0222       the permissions in the mask are withheld
                   from newly created files and directories
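A small sketch of how the mask works: the default permissions
(666 for files, 777 for directories) are reduced by the bits
set in the mask, so a mask of 022 removes write permission
for the group and others on everything created afterwards:
$ umask 022
$ touch newfile        created as rw-r--r-- (666 - 022 = 644)
$ mkdir newdir         created as rwxr-xr-x (777 - 022 = 755)
$ ls -l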
Whenever, a file is created, the filename is stored at
one place and the contents of the file are stored at a
different place on the storage media. Both are linked
together by a no. known as i-node no. or index node
no.
When we copy a file, an altogether new file is created,
and the contents of the source file are copied to the
target file. This repeats the same contents and thus
wastes disk space.
Instead, we can create a link to that file, which is
actually nothing but sharing the i-node no. of the
file under a new name rather than copying the contents.
This is called linking.
Linking is of two types.
1. Hard links
2. Soft links
A hard link can be created only within the same file
system. A soft link can also span across file
systems.
Hard link will have the same i-node no. But soft link
will store the path of the source file and thus has a
different i-node no.
For directories, we can create soft links only. Soft
links are also known as symbolic links.
In the case of hard link, even if the source file is
removed, the contents can be accessed through the
link.
In the case of soft link, if the source file is removed,
the soft link does not point to anything and thus the
contents can not be accessed.
$ ln abc abc1          hard link to abc
$ ln -s abc /var/abc1  soft link to abc
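The difference between the two kinds of link can be seen with
ls -li, which shows the i-node number in the first column.
This is only a sketch, assuming a file abc already exists:
$ ln abc abc1          abc and abc1 share the same i-node no.
$ ln -s abc abc2       abc2 gets its own i-node and stores the path
$ ls -li abc abc1 abc2
$ rm abc
$ cat abc1             still works - the data is reachable via the hard link
$ cat abc2             fails - the soft link now points to nothing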
vi stands for visual editor. It was developed by
Bill Joy at Berkeley (BSD).
vi has 3 modes:
1. Append or Insert mode (data entry)
2. Command or Escape Mode (editing)
3. Last Line or Ex Mode (file operations)
$ vi new
By default, when we invoke vi, we are in mode 2.
a append
i insert
l move the cursor right
h move the cursor left
k move the cursor up
j move the cursor down
x delete one character
dw delete one word
dd deletes entire line
d$ deletes from current position to the end of line
d0 deletes from current position to the beg. of line
u undo
J join
G go to specific line no.
r replace one character
R replace till esc. is pressed
o open mode after the line
O open mode before the line
yy yank (copy) the current line
p  paste (put) the yanked or deleted text
Last Line Mode Commands: When we press : from
command or escape mode, it is printed at the last
line. These commands are to be given at that colon.
:wq write and quit
:x write and quit
:q! quit without saving
:%s/old/new/g substitute globally (:s/old/new/g acts on the current line)
:w write
:w! overwrite
:n next file
:e file edit the file
:! execute a UNIX command
Filters are commands that accept data normally from
the standard input, manipulate it and write results to
the standard output.

1. Simple Filters
2. Filters with Regular Expressions – grep and sed
3. Advanced Filtering using awk
Simple Filters include head, tail, tr, sort, uniq, cut,
paste, pr, comm, diff etc.
$ head emp.lst
Displays the first 10 lines of the file.

$ head -15 emp.lst
Displays the first 15 lines of the file.

$ tail emp.lst
Displays the last 10 lines of the file.
$ tail -15 emp.lst
Displays the last 15 lines of the file.
$ tail +25 emp.lst
Displays the lines from line no. 25 to the end
(with GNU tail: tail -n +25 emp.lst).
$ tail -c -512 emp.lst
Copies the last 512 bytes of emp.lst.
$ tr 'a' 'A'
Translates each 'a' to 'A' in the input
$ tr 'ax' 'by'
Translates a to b and x to y in the input
$ ls -l | tr -s ' '
Squeezes multiple occurrences of spaces into one.
$ tr -d "|" < emp.lst
Deletes the character '|' from the file emp.lst
A file (data base file) can be sorted in ascending
or descending order by sort.
$ sort emp.lst
Sorts in the ASCII collating sequence - white space
first, numerals next, uppercase letters and finally
lowercase letters.
$ sort -r emp.lst
Sorts in the reverse order
$ sort -n emp.lst
Sorts in numeric order
$ sort -u emp.lst
Sorts and removes duplicate lines
$ sort -f emp.lst
Sorts in case-insensitive order
$ sort emp.lst -o emp.lst1
Sorts and stores the output in the file 'emp.lst1'.
$ sort -c emp.lst
Checks whether the file is sorted
$ sort -m emp.lst emp.lst1
Merges the two sorted files emp.lst and emp.lst1
$ sort -t":" emp.lst
Sorts by taking ':' as the delimiter among fields
$ sort -k 2 emp.lst
Sorts on the second field
$ sort -k 3,3 -k 2,2 emp.lst
Sorts on the third field, with the second field as
the secondary key.
$ sort -k 5.7,5.8 emp.lst
Sorts from the 7th column of the 5th field to the 8th
column of the 5th field of emp.lst
Note: sort treats white space (the transition from
non-blank to blank) as the default field separator;
use -t to specify another delimiter such as ':'.
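Putting the options together on a colon-delimited file such as
/etc/passwd (used here only as a familiar example):
$ sort -t":" -k3,3n /etc/passwd
Sorts the password file numerically on its third field (the user-id)
$ sort -t":" -k4,4 -k1,1 /etc/passwd
Sorts on the fourth field (group-id) and breaks ties on the
first field (login name)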
$ uniq emp.lst        (emp.lst must be sorted first)
Displays the lines with duplicates removed
$ uniq -u emp.lst
Displays only the lines that are not repeated
$ uniq -d emp.lst
Displays one copy each of the lines that have duplicates
$ uniq -c emp.lst
Displays the frequency of occurrence of each line
$ cut -c1 emp.lst
Cuts the file vertically based on character positions
$ cut -c1-5,8 emp.lst
Displays characters 1 to 5 and the 8th character of
each line
$ cut -f1 emp.lst
Displays the first field of the file
$ cut -f1,3 emp.lst
Displays the first and third fields.
$ cut -f1-3 emp.lst
Displays the first, second and third fields
$ cut -d":" -f1 emp.lst
Displays the first field by taking ":" as the
delimiter
$ paste emp.lst emp.lst1
Joins the two files emp.lst and emp.lst1 with the
tab as the delimiter.
$ paste -d":" emp.lst emp.lst1
Joins the two files emp.lst and emp.lst1 with ":" as
the delimiter.
$ paste -s emp.lst
Joins all the lines of the file to form a single line
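cut and paste are often combined to rearrange fields. The
sketch below assumes emp.lst is ':'-delimited and swaps its
first two fields:
$ cut -d":" -f2 emp.lst > names        second field alone
$ cut -d":" -f1 emp.lst > codes        first field alone
$ paste -d":" names codes > emp.new    second field now comes first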
$ pr emp.lst
Prints the file by adding suitable headers, footers and
formatted text. Adds five lines of margin at the top
and five at the bottom. The header shows the date and
time of last modification of the file along with the
filename and page number.
$ pr -3 emp.lst
Prints in 3 columns
$ pr -t emp.lst
Suppresses the header and footer
$ pr -d emp.lst
Displays in double line spacing
$ pr -n emp.lst
Numbers the lines
$ pr -o 5 emp.lst
Sets the left margin to 5
$ pr -h "employee file" emp.lst
Sets the header to 'employee file'.
$ pr +10 emp.lst
Prints from page no. 10
$ pr -l 45 emp.lst
Sets the page length to 45 lines
$ pr -l45 emp.lst | lp
$ diff emp.lst emp.lst1
Displays the differences between the files as a set of
instructions - append (a), delete (d), change (c) -
that would make the two files identical.
$ diff -e emp.lst emp.lst1
Produces a set of ed instructions only
$ comm emp.lst emp.lst1
Both files must be sorted. Shows a 3-column output:
the first column contains the lines found only in the
first file, the second column the lines found only in
the second file, and the third column the lines common
to both.
$ comm -1 emp.lst emp.lst1
Suppresses the first column of the output.
$ comm -12 emp.lst emp.lst1
Suppresses the first and second columns of the
output (shows only the common lines).
$ comm -123 emp.lst emp.lst1
Suppresses all three columns of the output.
grep stands for Global Regular Expression Print.
$ grep options pattern filename(s)
Options:
-i ignore case        -c count matching lines
-v invert the match   -l file names only
-n number the lines   -f take patterns from a file
-e multiple patterns  -E EREs
Basic Regular Expressions (BRE):
*          zero or more occurrences of the previous character
.          a single character
.*         any number of characters, or none
[abc]      a or b or c
[a-z]      any character between a and z
[1-3]      any digit between 1 and 3
[^abc]     any character other than a, b or c
[^a-zA-Z]  a non-alphabetic character
abc        the exact character sequence abc
^abc       abc at the beginning of the line
abc$       abc at the end of the line
^abc$      abc as the only word in the line
^$         lines containing nothing (empty lines)
\          nullifies the meaning of a metacharacter
Extended Regular Expressions (ERE):
ab+c      a followed by one or more b's, followed by c
ab?c      a followed by an optional b, followed by c
          (matches abc or ac)
a|b       either a or b
(a|b)c    either ac or bc

Interval Regular Expressions (IRE):
ab{2,4}c  a followed by 2, 3 or 4 b's, followed by c
ab{2,}c   a followed by at least 2 b's, followed by c
ab{2}c    a followed by exactly 2 b's, followed by c

Ex:
$ grep -i 'abc' emp.lst emp.lst1
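A few more combined examples (the patterns and file names are
only illustrative):
$ grep -c 'director' emp.lst           counts the matching lines
$ grep -in 'manager' emp.lst           ignores case, numbers the lines
$ grep -v '^$' emp.lst                 removes empty lines
$ grep -E 'director|manager' emp.lst   ERE: either pattern (same as egrep)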
SED is a multipurpose tool which combines the work
of several filters.
Syntax: sed options 'address action' file(s)
Addressing in sed is done in two ways:
1. By one or two line nos.
2. By specifying /pattern/
Line Addressing:
$ sed '3q' emp.lst
Displays the first 3 lines of the file and quits sed.
$ sed -n '1,3p' emp.lst
Displays the first 3 lines of the file (-n and p must
be used together).
$ sed -n '$p' emp.lst
Displays the last line of the file
$ sed -n '1,2p
7,9p' emp.lst
Displays selected groups of lines
$ sed -n '3,$!p' emp.lst
Prints every line except lines 3 to the end of the
file (i.e. prints lines 1 and 2 only).
Using Multiple Instructions:
$ sed -n -e '1,2p' -e '7,9p' emp.lst
Putting instructions in a file:
$ cat > patfile
1,2p
7,9p
^d
$ sed -n -f patfile emp.lst
Context Addressing:
$ sed -n '/director/p' emp.lst
Displays all the lines that contain 'director'
$ sed -n '/director/,/manager/p' emp.lst
Displays all the lines from 'director' to 'manager'
$ sed -n '1,/director/p' emp.lst
Line nos. and context addresses can be mixed
$ sed -n '/^a/p' emp.lst
Displays all the lines that start with 'a' (regular exp.)
Writing selected lines to a file:
$ sed -n '/director/w dlist' emp.lst
$ sed -n '/director/w dlist
/manager/w mlist' emp.lst
Text Editing:
Inserting   i
Appending   a
Changing    c
Deleting    d
$ sed '1i\
abc\
pqr' emp.lst
$ sed '1a\
abc\
pqr' emp.lst
$ sed '1c\
abc' emp.lst
$ sed '/director/d' emp.lst
SUBSTITUTION:
$ sed 's/director/director1/g' emp.lst
$ sed '1,5 s/director/director1/g' emp.lst
MULTIPLE SUBSTITUTIONS:
$ sed 's/i/m/g
s/x/y/g' emp.lst
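The replacement part of s can reuse the matched text with &.
A small sketch on the same file (assuming '|' separates the fields):
$ sed 's/director/& (senior)/g' emp.lst
Replaces every 'director' with 'director (senior)'
$ sed -n 's/|/:/gp' emp.lst
Changes the '|' delimiters to ':' and prints only the changed lines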
Named after its authors Aho, Weinberger and
Kernighan, awk, until the advent of Perl, was the most
powerful utility for text manipulation.

Syntax:
awk options 'selection_criteria {action}' file(s)

The selection_criteria filters the input and selects
lines for the action component to act upon.
Examples:

$ awk '/director/ { print }' emplist

Checks for the pattern 'director' and prints the entire
line(s). If selection_criteria is missing, the action
applies to all the lines. If action is missing, the entire
line is printed. Either of the two is optional (but not
both), and they must be enclosed within a pair of single
(not double) quotes.
The following formats are equivalent:

$ awk '/director/' emplist
$ awk '/director/ {print}' emplist
$ awk '/director/ {print $0}' emplist

awk uses the special parameter $0 to denote the entire
line. It identifies the fields by $1, $2, $3, ...
$ awk '/director/ { print $1, $2 }' emplist
Unlike other Unix filters, awk uses a contiguous
sequence of spaces and tabs as a single delimiter. If
the delimiter is anything other than this, we have to
specify it explicitly with -F.
$ awk -F"|" '/director/ { print $1, $2 }' emplist

Line addressing is allowed in awk with the help of the
built-in variable NR. This prints lines 3 to 6:
$ awk -F"|" 'NR==3, NR==6 {print NR, $1, $2, $3}'
emplist

A C-like printf statement is available in awk to
format the output.
$ awk -F"|" '/director/ { printf "%3d %-20s %d\n", NR,
$1, $2 }' emplist
Every print or printf statement can be separately
redirected with the > and | symbols. However, make
sure that the filename or command that follows these
symbols is enclosed within double quotes.
$ awk -F"|" '/director/ { print $1, $2 | "sort" }' emplist
$ awk -F"|" '/director/ { print $1, $2 > "abc" }' emplist

Every expression in awk is interpreted either as a string
or a number, and awk makes the necessary conversion
according to context. awk allows the use of user-defined
variables without declaring them. Variables are case
sensitive.
Ex: x = "sun"; y = "com"
    print x y      gives suncom
    x = "5"; y = 6
    print x + y    gives 11
Logical Operators:
|| (or), && (and), ! (not)
$ awk -F"|" '$3=="director" || $3=="chairman" {print}' emplist
Regular Expression Operators:
~ (match), !~ (no match)
$ awk -F"|" '$3 ~ /^a/ { print }' emplist
Number Comparison: >, >=, <, <=, ==, !=
Arithmetic Operators: +, -, *, /, %
$ awk -F"|" '$3 > 2000 { printf "%d\n", $2*0.5 }' emplist
Built-in Variables:
NR cumulative no. of lines read
FS input field separator
OFS output field separator
NF no. of fields
FILENAME current input file
ARGC no. of command line arguments
ARGV list of arguments
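A couple of one-liners using these variables (the field
delimiter is assumed to be '|', as in the earlier examples):
$ awk -F"|" '{ print NR, NF }' emplist
Prints each line's number and its field count
$ awk -F"|" 'END { print FILENAME, "has", NR, "lines" }' emplist
Prints the file name and the total number of lines read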
awk patterns and actions can be put in a file, and we
can ask awk to read the program from that file and
run it on the input file. Here the program file is
pattern.awk.
$ awk -f pattern.awk emplist
BEGIN & END Sections: BEGIN performs actions
before the first line is processed and END performs
actions after the last line of the file. BEGIN {action}
and END {action} is the syntax.
$ awk 'BEGIN {print "welcome"} /director/ {print} END
{print "Bye"}' emplist
awk reads standard input when the filename is omitted.
Arrays:
An array is considered declared the moment it is used.
Array elements are initialized to zero or to an empty
string unless initialized explicitly. Arrays expand
automatically. The index can be anything, even a
string.
Ex:
$ awk -F"|" 'BEGIN { print "REPORT" } /director/ { tot[1] = tot[1] + $6 }
END { print tot[1] }' emplist
Associative Arrays:
awk does not treat array indexes as integers; the arrays
are associative, where the information is held as key-
value pairs. The index is the key, and it is saved
internally as a string. When we set an array element
using mon[1]="mon", awk converts the number 1 to a
string. There is no specified order in which the array
elements are stored.
$ nawk 'BEGIN {print "HOME" "=" ENVIRON["HOME"]}' emp.lst
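A typical use of an associative array is counting occurrences.
The sketch below assumes the third field holds the designation
and prints how many employees hold each one:
$ awk -F"|" '{ count[$3]++ }
END { for (d in count) print d, count[d] }' emplist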
Functions:
int(x)             returns the integer value of x
sqrt(x)            returns the square root of x
length             returns the length of the complete line
length(x)          returns the length of x
substr(string,m,n) returns n characters of string,
                   starting from position m
index(s1,s2)       returns the position of s2 in s1
split(string,array,ch)  splits the string into an array,
                        using ch as the delimiter
system("cmd")      executes the operating system command
                   cmd and returns its exit status
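A short sketch using some of these functions (the field
positions, and the fifth field being a dd/mm/yy date, are
assumptions about emplist):
$ awk -F"|" '{
    name = substr($1, 1, 15)        # first 15 characters of field 1
    n = split($5, parts, "/")       # split the date on "/"
    printf "%-15s joined in year %s\n", name, parts[n]
}' emplist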
CONTROL FLOW:
if (condition) { statements }
if (condition ) { statements } else {statements}
for (k=1;k<=10;k++)
{
statements
}
for ( k in array)
{
statements
}
while (condition)
{
statements
}
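Putting the control-flow constructs together, a sketch of a
small report (the salary field number and the 7500 threshold
are assumptions about emplist):
$ awk -F"|" '
BEGIN { print "HIGHLY PAID EMPLOYEES" }
$6 > 7500 { count++; total += $6 }
END {
    if (count > 0)
        printf "%d employees, average pay %.2f\n", count, total / count
    else
        print "none found"
}' emplist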
find is one of the power tools of UNIX. It recursively
examines a directory tree to look for files matching
some criteria and then takes some action on the
selected files.

Syntax: find path_list selection_criteria action

Path List: the path list can be one or more
directories separated by white space.
SELECTION CRITERIA:
-inum n            having inode no. n
-type x            where x is f (ordinary file),
                   d (directory), l (symbolic link)
-perm nnn          matches permissions nnn
-links n           having n links
-user username     owned by username
-group gname       owned by the group gname
-size +x[c]        if the size is more than x blocks
                   (c for bytes/characters)
-mtime -x          if modified less than x days ago
-newer fname       if modified after fname
-atime +x          if accessed more than x days ago
-name fname        matches the file name fname
-prune             don't descend the directory if matched
ACTION:
-print             prints the path names on the output
-ls                executes the ls -lids command
-exec cmd          executes the UNIX command cmd,
                   followed by {} \;
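A few combined examples (the paths, names and thresholds are
only illustrative):
$ find /home/prasad -name "*.txt" -print
$ find . -type f -size +2048 -print
$ find . -type f -mtime -7 -exec ls -l {} \;
$ find /tmp -type f -atime +30 -exec rm -f {} \;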
xargs is a command of Unix and most Unix-like operating
systems. It is useful when one wants to pass a large
number of arguments to a command. Arbitrarily long lists
of parameters can't be passed to a command, so xargs
breaks the list of arguments into sublists small enough
to be acceptable.
For example, commands like:
rm /path/*
rm `find /path -type f`
will fail with an error message of "Argument list too long"
if there are too many files in /path.
find /path -type f -print0 | xargs -0 rm
In this example, find feeds xargs with a long
list of file names. xargs then splits this list into sublists
and calls rm once for every sublist. This is more efficient
than the functionally equivalent version:
find /path -type f -exec rm '{}' \;
which calls rm once for every single file. Note however
that with modern versions of find, the following variant
does the same thing as the xargs version:
find /path -type f -exec rm '{}' +
zipping the files abc, pqr and lmn:
$ zip final.zip abc pqr lmn
recursive zipping:
$ zip -r final.zip d1      (d1 is a directory)
unzipping the zipped files:
$ unzip final.zip
viewing the zipped files:
$ unzip -v final.zip
Zipping a file:
$ gzip abc.txt lmn.txt
The files abc.txt.gz and lmn.txt.gz are created
How much compression was achieved?
$ gzip -l abc.txt.gz lmn.txt.gz
Unzipping a file:
$ gzip -d abc.txt.gz lmn.txt.gz
$ gunzip abc.txt.gz lmn.txt.gz
Recursive zipping:
$ gzip -r d1               (d1 is a directory)
Unzipping recursively:
$ gzip -dr d1   or   $ gunzip -r d1
A process is the instance of a running program.
A process is said to be born when the program
starts execution and remains alive as long as the
program is active. After execution is complete,
the process is said to die. Each process is
identified by a unique integer called the
process-id or PID.

$ ps
Displays the PID, TTY, TIME (cumulative processor
time consumed since the process started) and
CMD (the process name)
$ ps -f
Displays a full listing
$ ps -e   or   $ ps -A
All processes, including user and system processes
$ ps -u user
Displays the processes of the given user
$ ps -a
Processes of all users, excluding processes not
associated with a terminal.
$ ps -l
Long listing showing memory-related information
$ ps -t term
Displays the processes running on the terminal term
A process can be run in the background. This is
achieved by placing an & at the end of the
command.
$ sort emp.lst -o emp.lst &

When a user logs out, the shell is killed and all
the background processes are also killed. We
can avoid this by using the nohup command.

$ nohup sort emp.lst -o emp.lst &

This sends the output to the file nohup.out. Even if
the parent is killed, the background process keeps
running and the result is sent to nohup.out.
Processes in UNIX are usually executed with
equal priority. The priority levels can be altered
using the nice command. A higher nice value means
a lower priority.
Nice priorities normally range from 0 to 39.

$ nice -n 5 wc -l uxmanual &

The nice value is increased by 5 units.

$ ps -o nice       shows the nice value.


A Signal is used to communicate the occurrence
of an event to a process. Each Signal is
identified by a number and is designed to
perform a specific function. Signals can be
generated from the keyboard or by the kill
command. Signals are represented by their
symbolic names having the SIG prefix.
If you want to terminate a program, you
normally press the interrupt key. This sends the
process the SIGINT signal (no. 2). The default
action of this signal is to kill the process. A
process may also ignore a signal or execute
some user-defined code written to handle that
signal.
There are two signals that a process cannot
ignore or run user-defined code to handle. They
are SIGKILL and SIGSTOP.

The kill command sends a signal, usually with the
intention of killing one or more processes.

$ kill 105
$ kill -s KILL 105   (or)   $ kill -9 105
$ kill -l
displays the list of signal names and their numbers.
A job is the name given to a group of processes.
Process activity is managed by the kernel, whereas
job activity is managed by the shell.
$ wc -c /
Say this command is taking too long; we can
suspend it by pressing control-z.
[1] + Stopped        wc -c /
$ bg
forces the suspended command to run in the background
$ jobs
shows the list of background jobs
A background job can be brought to the
foreground with the fg command.
$ fg %1
$ fg %wc
similarly,
$ bg %2
$ bg %?perm      (the job whose command line contains
                 the string 'perm')
We can terminate a job with the kill command.
$ kill %1
$ kill -s KILL %wc
kills the wc command, etc.
at tells UNIX when to execute a set of
commands.
$ at 14:08
……
…….
control-d
$ at -l
gives the list of pending at jobs
$ at -r job_id
removes the given job from the queue.
Other formats of at are:

$ at 15
$ at 5pm
$ at 3:06pm
$ at noon
$ at now + 1 year
$ at 3:08pm + 1 day
$ at 15:08 December 18, 2008
$ at 9am tomorrow
BATCH commands are executed as soon as the
system load permits.

$ batch
……..
……..
control-d
job 10411856731.b at Sun Dec 29 13:14:33 2009
cron is a system process (daemon) that
executes programs at regular intervals. It is mostly
dormant, but every minute it wakes up and
looks into a control file called the crontab file.
Creating a crontab file:
Create a file cron.txt with the following 6 fields.
$ vi cron.txt
00-10 17 * 3,6,9 5 wc -c abc
field 1: minute (00-59)        field 4: month (3,6,9)
field 2: hour (0-23)           field 5: day of week (5 = Friday)
field 3: day of month (1-31)   field 6: command
$ crontab cron.txt
$ crontab -l
displays the contents of the crontab file
$ crontab -r
removes the crontab file
$ time sort emp.lst -o emp.lst
displays 3 times:
real: time elapsed from the invocation of the
command until its termination.
user: time spent by the program in executing
itself.
sys:  time spent by the kernel on behalf of the program.
trap traps signals and executes commands in
response. It is normally put at the beginning
of a shell script.
$ trap 'command_list' signal_list

When a script is sent any of the signals in
signal_list, trap executes the commands in
command_list. The signal list can contain the
integer values or the names (without the SIG
prefix) of one or more signals - the ones which
you use with the kill command. Instead of
using 2 15 to represent the signal list, you can
also use INT TERM etc.
trap 'echo "Pressed control-c or received SIGTERM"; exit' INT TERM
while true
do
sleep 60
done
You may also ignore a signal. This is achieved by
giving a null command list:
$ trap ' ' 1 2 15
$ trap - signal_list
resets the given signals to their defaults.
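A common pattern is to let trap clean up temporary files
however the script ends. This is only a sketch; the file name
and the commands inside are illustrative:
#!/bin/sh
TMP=/tmp/myscript.$$               # $$ (the PID) gives a unique name
trap 'rm -f "$TMP"; exit' INT TERM # clean up and exit on interrupt/terminate
trap 'rm -f "$TMP"' 0              # signal 0 (EXIT) covers normal termination
who > "$TMP"
wc -l < "$TMP"                     # number of users currently logged in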
What is a Shell?
A Shell is the user interface to the
Unix Operating System (Kernel). It
takes the input from the user,
interprets it for the Operating System
and conveys the output from the
Operating System back to the user.
SHELL PROGRAMMING

USER <--> SHELL <--> KERNEL

Unix is one of the first operating
systems to make the user interface
independent of the operating
system. Even though there is only
one Kernel running on the system,
there can be several Shells in
action – one for each user who is
logged in.
Popular Shells:
1. Bourne Shell (sh)
2. Korn Shell (ksh)
3. C Shell (csh)
4. Bourne Again Shell (bash)
5. Tenex C Shell (tcsh)
6. Z Shell (zsh)
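To tie the pieces together, here is a minimal first shell
script for the Bourne shell; the file name first.sh is just
an example. Save it, make it executable and run it:
#!/bin/sh
# first.sh - a minimal shell script
echo "Today is `date '+%D'`"
echo "You are logged in as $LOGNAME"
echo "Your home directory is $HOME"
echo "Number of users logged in: `who | wc -l`"

$ chmod u+x first.sh
$ ./first.sh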
