CMP515 Linux Administration
Yashwantrao Chavan Maharashtra Open University
Linux Administration
1. Prof. Vaibhav Dabhade, Assistant Professor, Dept. of Computer Engineering, MET's BKC IOE, Nashik
   Mr. Kunal Ugale, Lead Engineer, Fidelity National Information Services, Pune
   Ms. Monali R. Borade, Academic Co-ordinator, School of Computer Science, Y.C.M. Open University, Nashik
   Dr. Pramod Khandare, Director, School of Computer Science, Y.C.M. Open University, Nashik
2. Prof. Tushar Kute, Assistant Professor, Researcher, Computer Science, MITU Skillologics, Pune
3. Prof. Ankur Shukla, Assistant Professor, Fergusson College, Pune
Course Objectives:
To learn and get insights into the Linux operating system.
To understand the duties of a system administrator.
To learn the Linux installation process.
To learn the Linux command line and the Linux software installation process.
To learn the techniques of Linux administration.
To understand TCP/IP networking in Linux along with file systems.
To learn DHCP and DNS configuration on Linux.
To understand Microsoft networks, mail servers and web servers with iptables.
Learning Outcomes:
Students will be able to:
Get detailed insights into the Linux operating system.
Understand the duties of a system administrator.
Perform the detailed Linux installation process.
Use the Linux command line and the Linux software installation process.
Understand and use the techniques of Linux administration.
Use TCP/IP networking in Linux along with file systems.
Set up DHCP and DNS configuration on Linux.
Handle Microsoft networks, mail servers and web servers with iptables.
Reference Books:
2. Red Hat Linux Networking and System Administration, Terry Collings and Kurt Wall, Wiley Publishing.
Overview of Linux
Linux is basically divided into three major components: the kernel, the environment (the user interface), and the file structure. The kernel is the central system program that runs programs and manages hardware devices, such as disks and printers. The environment provides an interface for the user, which may be command line or GUI based. It receives commands from the user and sends them to the Linux kernel for execution. The file structure manages the way files are stored on a storage device, such as a disk. Files are organized into directories, and each directory may contain any number of sub-directories, each holding files. Together, the kernel, the environment, and the file structure form the general operating system structure. With the help of these three, we can run programs, manage files, and interact with the computer system.
An environment provides an interface between the kernel and the system user. This interface interprets commands entered by the user and sends them to the Linux kernel. Linux provides various kinds of environments: desktops, window managers, and command line shells. Each user on a Linux system has his or her own user interface, and users can adapt their environments to their own special requirements, whether they be command line shells, window managers, or desktops. So, for the user, the operating system functions more as an operating environment over which the user has control.
In Linux, files are organized into directories, just as in the Windows OS. The entire Linux file system is one big interconnected set of directories, each containing files. Some directories are standard directories reserved for the operating system's use. We can create our own directories for our own files, as well as easily move files from one directory to another. We can even move entire directories and share directories and files with other users on our system. Linux also lets us set permissions on directories and files, allowing others to access them or restricting access to ourselves only. The directories of each user are ultimately connected to the directories of other users: the directories are configured into a hierarchical tree structure, beginning with an initial root directory, /, from which all remaining directories are ultimately derived.
Along with its many kinds of interfaces, Linux now has a completely integrated graphical user interface, and we can perform all of our Linux operations entirely from any one of these interfaces. All Linux desktops are now fully operational, supporting drag-and-drop operations and enabling us to drag icons to our desktop and to set up our own menus on an Applications panel. The desktops rely on an underlying X Window System, which means that as long as both are installed on our system, applications from one desktop can run on the other. The desktops are particularly helpful because of the documentation, news, and software you can download for them. They can run any X Window System program, as well as any cursor-based program designed to work in a shell environment. A great many applications are written just for those desktops and included with our distributions. Linux desktops now include complete sets of Internet tools, along with editors and graphics, multimedia, and system applications.
Open Source Software
Linux was developed as a collaborative open source project over the Internet, so no company or institution controls Linux. Software developed for Linux also reflects this background. Development often takes place when Linux users decide to work on a project together. The software is posted at an Internet site like SourceForge, and any Linux user can then access the site and download it. Linux software development has always operated in an Internet environment and is global in scope, enlisting programmers from around the world. The only thing you need to start a Linux-based software project is a website.
Most Linux software is developed as open source software, a.k.a. FOSS (Free and Open Source Software). This means that the source code for a software application is freely distributed along with the application. Programmers all over the world can make their own contributions to a software package's development, modifying and correcting the source code. Linux is an open source operating system as well: its source code is included in all its distributions and is freely available on the Internet for downloading. Many major software development efforts are also open source projects, as are the KDE, GNOME, Cinnamon, LXDE and MATE desktops, and LibreOffice along with most of its applications. The Firefox web browser is open source, as is Chromium, the code base of the Google Chrome browser, with source code freely available on the web. The LibreOffice office suite, supported by The Document Foundation, is an open source project based on the OpenOffice office suite. Many of the open source applications that run on Linux have located their websites at SourceForge (sourceforge.net), a hosting website designed specifically to support open source software projects.
Open source software is protected by public licenses. These prevent commercial companies from taking control of open source software by adding some modifications of their own, copyrighting those alterations, and selling the software as their own product. The most popular public license is the GNU GPL, provided by the Free Software Foundation (FSF); Linux is distributed under this license. The GNU GPL retains the copyright, freely licensing the software with the requirement that the software and any modifications made to it always remain freely available. Other public licenses have also been created to meet the demands of different kinds of open source projects. The GNU Lesser General Public License (LGPL) lets commercial applications use GNU-licensed software libraries. The Qt Public License (QPL) lets open source developers use the Qt libraries essential to the KDE desktop.
Linux Distributions
Although there is only one standard version of Linux, there are actually several different distributions available. Different organizations, institutions and groups have packaged Linux and Linux software in slightly different ways. Each organization or group releases its Linux package, usually on a DVD-ROM or in ISO image format. Later releases may add updated versions of programs or new software. Some of the more popular distributions are Ubuntu, Mint, OpenSUSE, Fedora, and Debian. The Linux kernel itself is centrally distributed through the kernel.org repository; all of these distributions use this same kernel, although it may be configured differently.
Linux has spread through a great variety of distributions. Many of them aim to provide a comprehensive solution supporting any and all kinds of tasks. These include distributions like OpenSUSE, Red Hat, Debian, Mint and Ubuntu. Some are variations on other distributions: CentOS and Fedora are related to Red Hat Enterprise Linux, while Ubuntu, Kali Linux and Mint derive from Debian Linux. Other distributions have been developed for more specialized tasks or to support certain characteristics. Distributions like Debian provide cutting-edge development projects. Some distributions provide more commercial versions, usually bundled with commercial applications such as databases or secure servers. Certain companies like Red Hat and Novell provide a commercial distribution that corresponds to a supported free distribution. The free distribution is used to develop new features, like the Fedora Project for Red Hat.
Currently, the website https://distrowatch.com lists numerous Linux distributions. It contains detailed information about all these Linux distributions and about the software and utilities available for them under FOSS licenses.
Linux Package Management
One of the things that sets Linux apart from other operating systems is the way software is installed and managed. Traditionally, when we wanted to install software on the Windows operating system, we would search for the software on the Internet, download it, and install it. These are steps that the end user has to perform sequentially.
To install software on a Linux system, we use the package manager that comes with every distribution. To install a new piece of software, we search for it and install it from the operating system itself. The distribution's package manager takes care of downloading the desired software with any required dependencies and then installs all of the components on the system. Package managers not only control applications; they can also manage the operating system itself. A Linux package manager can update and upgrade the system and all of its installed applications to the latest available versions.
Software and applications are bundled into packages, and Linux distributions are categorized by these package types. The two basic package types are Debian (.deb) and RedHat Package Manager (.rpm); other distributions use formats of their own.
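For example, installing and updating software from the command line looks roughly like this on the two main package families (the package name htop is only an illustration, and exact commands vary by distribution). On a Debian-family system:
# apt update
# apt install htop
# apt upgrade
And on a Red Hat-family system:
# dnf check-update
# dnf install htop
# dnf upgrade
In each case the first command refreshes the package indexes or lists available updates, the second installs a package together with its dependencies, and the third upgrades every installed package.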
Linux Mint
Linux Mint is another popular distribution based on Ubuntu. Linux Mint started out simply as Ubuntu with pre-installed multimedia codecs and proprietary drivers. However, it has since grown and is now a very popular alternative to Ubuntu. According to distrowatch.com, Linux Mint is among the top five Linux operating systems preferred by users.
Raspbian
Raspbian is a Debian-based computer operating system for the Raspberry Pi microcomputer. There are several versions of Raspbian available, including Raspbian Buster and Raspbian Stretch. Since 2015 it has been officially provided by the Raspberry Pi Foundation as the primary operating system for the family of Raspberry Pi single-board computers. It is compatible with all models of the Raspberry Pi, from the Zero through the 4. Raspbian was created by Mike Thompson and Peter Green as an independent project; the initial build was completed in June 2012. The operating system is still under active development and is highly optimized for the Raspberry Pi line's low-performance ARM CPUs.
RPM Based Linux Distributions
Red Hat created the rpm (Red Hat Package Manager) package format for use in its distribution.
Popular RPM based distributions include:
RedHat Enterprise Linux (RHEL)
CentOS
Fedora
OpenSuse
Mageia
Fedora
Fedora is the upstream open source version of the commercial Red Hat Enterprise Linux distribution, or RHEL for short. What makes Fedora special is that it uses newer technology and packages from the open source world than RHEL does. Fedora, like RHEL, uses the yum and dnf package managers.
OpenSuse
OpenSuse Linux started out as a German translation of Slackware Linux, but eventually grew into its own distribution. OpenSuse is known for its KDE desktop and, most importantly, for its stability. For package management, OpenSuse uses zypper and its graphical frontend, as well as the YaST software center.
Mageia
Mageia Linux is a fairly new Linux distribution based on Mandriva (formerly Mandrake) Linux. Mageia is easy to install and easy to use. It utilizes urpmi and drakrpm for package management.
Other Linux distributions
Arch Linux
Arch Linux uses the pkg.tar.xz package format and has its own package manager called pacman. Arch does not ship with a graphical installer, and the whole installation process is done in a terminal, which can be intimidating for new Linux users. The main philosophy behind Arch is KISS: keep it simple, stupid. Arch has been forked into some popular beginner-friendly distributions such as Manjaro Linux.
Slackware Linux
Slackware was founded in 1992 by Patrick Volkerding and is the oldest Linux distribution still in use today. Slackware has no dependency-resolving package manager, and much of the software is compiled from source code by the system administrator or by normal users of the system. If we really want to learn a lot about how Linux works, we may use Slackware.
Gentoo Linux
Gentoo Linux is based on the Portage package management system. It can be difficult to install, and the entire installation process can take as long as a couple of days to complete. The advantage of such an approach is that the software is built for the specific hardware it will be running on. Like Slackware, Portage works from application source code. If we like the idea of Gentoo but are looking for something beginner friendly, we may try Sabayon Linux.
KDE
KDE, the K Desktop Environment, was created in 1996 and was probably the most advanced desktop manager on the market at that time. KDE includes by default several applications that every user requires for a complete desktop environment. KDE has some characteristics that are not present in other desktop managers. The KDE workspace is called Plasma; combine Plasma with the other KDE applications and you get what is called the KDE Software Compilation, or KDE SC for short.
Popular distributions that use KDE include:
OpenSuse
Kubuntu
Mageia
Slackware
Linux Mint
Gnome
Gnome is a desktop manager created for the community and by the community; it is a great example of how the open source community actually works. The Gnome desktop can easily be extended with plug-ins. It doesn't require a lot of resources and can be a great choice for older and slower hardware. Popular distributions that use Gnome include:
Debian
OpenSuse
Fedora
CentOS
Ubuntu 18.04 onward
Cinnamon
Cinnamon is a fork of the Gnome desktop manager developed by the Linux Mint community. It recreates the look of Gnome 2 with a modern touch added to it. The minimum system requirements for Cinnamon are the same as they are for Gnome. Many editions of Linux Mint now come with Cinnamon installed by default.
XFCE
XFCE is an excellent choice for older computers. Light and fast are XFCE's two biggest characteristics: the system requirements are a measly 300 MHz CPU and 192 MB of RAM. Popular distributions that use XFCE include:
Debian
Xubuntu
Fedora
OpenSuse
LXDE
LXDE is another fast and light desktop manager. Based on the Openbox window manager, LXDE is suitable for old computers too. Popular distributions using LXDE include:
Lubuntu
Debian
OpenSuse
Linux Mint
Raspbian
Unity Desktop
Unity was developed by Canonical for its Ubuntu Linux distribution, and to date Ubuntu is the only major distribution that uses Unity. Unity requires greater hardware resources than most graphical environments: we'll need at least a 1 GHz CPU and 1 GB of RAM to get Unity to work at all, and with those minimal specs Unity will be so slow that it's almost unusable. For Unity, the more RAM and CPU, the better. Up to Ubuntu 17.04, Unity was the default desktop manager in the main Ubuntu flavor; from Ubuntu 17.10 onward, Ubuntu has switched to GNOME.
Architecture of Linux
The Linux operating system's architecture primarily has the following components:
The Kernel
Hardware layer
System library
Command Shell
System utility
Shell
The shell is an interface between the user and the kernel, and it provides access to the services of the kernel. It receives commands from the user and invokes the kernel's functions accordingly. Shells are present in different types of operating systems and are classified into two categories: command line shells and graphical shells.
Command line shells provide a command line interface with a set of commands, while graphical shells provide a graphical user interface with more interactivity. Both kinds of shell perform operations, but graphical shells are slower than command line shells because of their memory utilization.
There are five common types of shells present in Linux:
Korn shell
Bourne shell
C shell
Bourne Again Shell (BASH)
TENEX C shell (tcsh)
Features of Linux
The main characteristics of the Linux operating system are described below.
Portable: The Linux operating system can work on different types of hardware; the Linux kernel supports installation on nearly any kind of hardware platform, covering most microprocessor architectures.
Open Source: The source code of the Linux operating system is freely available on the Internet, and many teams work in collaboration to enhance its capabilities. Most of the software that runs on Linux is free as well!
Shell: The Linux operating system offers a special interpreter program, called the command shell, that can be used to execute commands of the OS. It can be used for several types of operations, such as calling application programs and performing process, I/O and memory operations. Many tasks that can't be done from the GUI can be carried out through shell commands.
Security: Linux offers user security through authentication features such as data encryption, password protection and controlled access to particular files. As a result, it is far less susceptible to the viruses and other malware that plague the Windows operating system.
Multiprogramming: The Linux operating system is a multiprogramming system, which means multiple applications can run concurrently. It has built-in support for the creation of both processes and threads.
Multi-user: The Linux operating system is a multi-user system, which means multiple users can access system resources such as RAM, storage or application programs at the same time. Any number of users can be created in this operating system.
Hierarchical File System: The Linux operating system provides a standard file structure in which system files and user files are arranged. For general-purpose use, the EXT family of file systems is used in Linux; currently, the EXT4 version is available for use.
System Administrator
Every computer in the world has a system administrator associated with it. For most machines, the de facto system administrators are the people who decided what software and peripheral devices would be packaged with them, and that remains the case because most users who acquire computers do little to change the defaults. But the minute users make changes in applications and decide what software to run, they become system administrators.
Such duties bring responsibilities with them. No one whose computer is connected to the Internet, for instance, has been immune to the effects of poorly administered systems, as demonstrated by the Distributed Denial of Service (DDoS) attacks and e-mail macro virus attacks that have shaken the online world in recent years. The scope of these acts of computer vandalism would have been greatly reduced if system administrators had a better understanding of their responsibilities.
A Linux system administrator is more likely to understand the requirements of active system administration than those who run whatever came on the computer, assuming that whatever came from the factory is properly configured. By its very nature as a modern, multiuser operating system, Linux requires a degree of administration greater than that of less robust home market systems. This means that even if we are using a single machine connected to the Internet by a dial-up modem, or not connected at all, we have the benefit of the same system employed by some of the biggest enterprises in the world, and we will do many of the same things that the software professionals employed by those companies are paid to do. Administering our own system does involve a degree of learning, but it also means that in setting up and configuring our own system we gain skills and understanding that raise us above mere "computer user" status.
The Linux system administrator is the person who has "root" access, which is to say the system's "superuser" (or root user). A standard Linux user has some limitations, but the "root" user has unrestricted access to everything: all user accounts, their home directories and all the files therein, all system configurations, and all files on the system. A certain body of thought says that no one should ever log in as "root", because system administration tasks can be performed more simply and safely through other, more specific means.
And what should be done about old accounts? Maybe someone has left the company. What will happen to his or her account? We probably don't want him or her to continue to have access to the company's network. On the other hand, we don't want simply to delete the account, because it might contain necessary data that resides nowhere else.
There are aspects of our business that make World Wide Web access required, but we don't want users spending their working hours idly browsing the Web.
The administrator or his employer must set up the policies governing these issues (in a company, preferably in writing) for the security of all concerned.
5. Adding a new user
Before we create an account for a new user at a private organization, government, or educational site, it's important that the user sign and date a copy of our local user agreement and policy statement. Users have no particular reason to want to sign the policy agreement, so it's to our advantage to secure their signatures while we still have some leverage. We find that it takes more effort to secure a signed agreement after an account has been released. If our process allows for it, have the paperwork precede the creation of the account. The process of adding a new user consists of several steps required by the system, two steps that establish a useful environment for the new user, and several extra steps for our own convenience as administrators.
Required:
Have the new user sign your policy agreement.
Edit the passwd and shadow files to define the user's account.
Add the user to the /etc/group file (not generally required, but good practice).
Set the initial password, following all the basic rules for a strong password.
Create, chown, and chmod the user's home directory, setting its owner and mode appropriately.
Configure roles and permissions for the user's account.
For the user:
Copy default startup files to the user's home directory, so that they take effect every time the user logs in.
Set the user's mail home and establish mail aliases as per the company's requirements.
For you:
Verify that the account is set up properly.
Add the user's contact information and account status to our database.
This list cries out for a script or tool, and fortunately each of our example systems provides one in the form of a useradd or adduser command.
We must be the superuser (root) to add a user, or on an AIX system we must have UserAdmin privileges. This is a perfect place to use sudo, i.e., "superuser do", operations.
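As a minimal sketch of these steps on a typical Linux system (the user name linda and the group developers are only illustrative, and the group must already exist):
# useradd -m -c "Linda Example" -s /bin/bash linda
# passwd linda
# usermod -aG developers linda
# getent passwd linda
The first command creates the account with a home directory, a comment field and a login shell; passwd sets the initial password interactively; usermod adds the user to a supplementary group; and getent verifies the resulting entry in the user database.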
References:
UNIX and Linux System Administration Handbook, 4th Edition, by Evi Nemeth et al., Pearson Education.
Linux System Administration Recipes, 1st Edition, by Juliet Kemp, Springer-Verlag Berlin and Heidelberg GmbH & Co. KG.
Linux: The Complete Reference, Sixth Edition, by Richard Petersen, Tata McGraw Hill Company Limited.
Unit 2
Installation of Redhat Linux
Memory Recommendations
Be sure the virtual machine is configured with at least 512 MB of memory for Red Hat Enterprise Linux 5, or with at least 256 MB of memory for Red Hat Enterprise Linux 3 or Red Hat Enterprise Linux 4. If the memory in the virtual machine is less than the recommended values, Red Hat Enterprise Linux presents an error message as it loads certain VMware drivers.
If we don't already have VirtualBox, download it from the link below for our platform of choice.
https://www.virtualbox.org/wiki/Downloads
The steps here are simple. We start by creating our new machine: set the OS type to Linux, and choose Red Hat 64-bit for the version:
Fig. 2.1 VM - Select the Name of Operating System
Next, we need to choose the memory size (RAM). For speedier response, let's pick 2 GB (2048 MB) of memory:
Fig 2.2 Define the RAM size
Next, we choose to create a new virtual disk for the operating system:
Fig. 2.3 Create virtual disk drive
After this, select VDI as the disk type:
Fig. 2.4 Hard Disk type selection
Now, select dynamically allocated as the growth method:
Fig. 2.5 Select the growth method
We want a little bit of play room for our operating system, so make the disk 20 GB. Because it's dynamically allocated, it will only use disk space as it actually gets filled, so we should be fine as long as we have room locally:
Fig. 2.6 Select file location and size
Now we have the new VM ready to start, so click the green Start arrow and let's build our server:
Fig. 2.7 Starting Virtual Machine
The Virtual Machine may complain about having nothing on the disk, so just let that error sit, click the little CD icon on the bottom bar of the VM window, then select Choose Disk Image and browse to the Red Hat Linux ISO file we downloaded:
Fig. 2.8 Select the ISO file of Operating System
We can now reset the VM to force it to restart into the install CD:
This brings us to the boot screen and we can select the first option to start the installation:
Fig. 2.10 Starting the installation
This brings us to the options page. We will see that the Begin Installation button is not yet available because we need to set up a few things first.
Start by choosing the install type we want. For simplicity, we can select Minimal Install:
Fig. 2.12 Software Selection
We need to configure the network settings, so scroll down and enter the Network and Host Name section:
Fig. 2.13 Network Selection
Now, enable the network card on the right-hand side with the toggle button:
Back at the configuration screen, go to the host name option and choose our host name:
Now we will see that the Begin Installation button is lit up and ready to go.
With the install underway, we can configure the root password and also set up a non-root user account to use.
Fig. 2.17 Installation Summary
Fig. 2.18 Set the passwords
The installation then completes; the time this takes may vary depending on Internet and hard drive speeds.
Partitions
A hard disk can be divided into multiple parts called partitions. Each partition functions as if it were a separate hard disk. The idea is that if we have one hard disk and want to have, say, two operating systems on it, we can divide the disk into two partitions. Each operating system then uses its own partition as it wishes and doesn't touch the other one. In this way the two operating systems can co-exist peacefully on the same hard disk. Without partitions, one would have to buy a separate hard disk for each operating system.
Partition types
The partition tables (the one in the MBR, and the ones for extended partitions) contain one byte per partition that identifies the type of that partition. This attempts to identify the operating system that uses the partition, or what the partition is used for. The aim is to make it possible to avoid having multiple operating systems accidentally using the same partition. In practice, however, operating systems do not really care about the partition type byte; for example, Linux doesn't care at all what it is. There is no standardization body to specify what each byte value means, but as far as Linux is concerned, you can check the list of partition types known to the fdisk program.
Fig. 2.29 Types of Partitions
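As a quick illustration (assuming the first disk appears as /dev/sda; device names vary), the fdisk program can both show and change partition types:
# fdisk -l /dev/sda
prints the partition table of the disk, while running
# fdisk /dev/sda
interactively offers p to print the table, n to create a new partition, t to change a partition's type byte, l to list the known type codes, and w to write the changes to disk.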
Partitioning a hard disk
There are many programs available for creating and removing partitions on a hard disk. Many operating systems have their own, and it can be a good idea to use each operating system's own tool, just in case it does something unusual that the others can't handle. Many of these programs are called fdisk, including the Linux one, or are variations of it. Details on using the Linux fdisk are given on its man page. The cfdisk command is similar to fdisk, but has a nicer full-screen user interface.
When using IDE disks, the boot partition (that is, the partition with the bootable kernel image files) must be completely within the first 1024 cylinders. This is because the disk is used via the BIOS during boot (before the system goes into protected mode), and the BIOS can't handle more than 1024 cylinders. It is sometimes possible to use a boot partition that is only partly within the first 1024 cylinders, as long as all the files that are read via the BIOS are within that range. Since this is difficult to arrange, it is a very bad idea to do it; we never know when a kernel update or disk defragmentation will result in an unbootable system. Therefore, make sure our boot partition is completely within the first 1024 cylinders.
This may no longer be true with newer versions of LILO (Linux Loader) that support LBA
(Logical Block Addressing). Some newer versions of the BIOS and IDE disks can, in fact, handle
disks with more than 1024 cylinders. If we have such a system, we can forget about the problem;
if we aren't quite sure of it, put it within the first 1024 cylinders.
Each partition should have an even number of sectors, since the Linux file systems use a 1
kilobyte block size, i.e., two sectors. An odd number of sectors will result in the last sector being
unused. This won't result in any problems, but it is ugly, and some versions of fdisk will warn
about it.
Changing a partition's size usually requires first backing up everything we want to save from that partition (preferably the whole disk, just in case), deleting the partition, creating a new partition, and then restoring everything to the new partition. If the partition is growing, we may need to adjust the sizes (and back up and restore) of the adjoining partitions as well.
Since changing partition sizes is difficult, it is preferable to get the partitions right the first time, or to have an effective and easy-to-use backup system. If we are installing from media that does not require much human intervention (say, from CD-ROM or pen drives), it is often easy to play with different configurations at first. Since we don't already have data to back up, it is not so painful to modify partition sizes several times.
There is a program for MS-DOS, called fips, which resizes an MS-DOS partition without requiring a backup and restore, but for other file systems backing up is still necessary. The fips program is included in most Linux distributions. The commercial partition manager "Partition Magic" has a similar facility with a nicer interface. Make sure we have a recent backup of any important data before we try changing partition sizes. The program parted can resize other types of partitions as well as MS-DOS ones, but sometimes only in a limited manner.
The warning is automatically repeated a few times before the shutdown, with shorter and shorter intervals as the time runs out.
When the real shutting down starts after any delays, all filesystems (except the root one) are unmounted, user processes (if anybody is still logged in) are killed, daemons are shut down, and generally everything settles down. When that is done, init prints out a message saying that we can power down the machine. Then, and only then, should we move our fingers towards the power switch.
Sometimes, although rarely on any good system, it is impossible to shut down properly. For instance, if the kernel panics, crashes and burns and generally misbehaves, it might be completely impossible to give any new commands, so shutting down properly is somewhat difficult, and just about all we can do is hope that nothing has been too severely damaged and turn off the power. If the troubles are a bit less severe (say, somebody hit our keyboard with an axe) and the kernel and the update program still run normally, it is probably a good idea to wait a couple of minutes to give update a chance to flush the buffer cache, and only cut the power after that.
In the old days, some people liked to shut down using the command sync three times, waiting for the disk I/O to stop, and then turning off the power. If there are no running programs, this is equivalent to using shutdown. However, it does not unmount any filesystems, and this can lead to problems with the ext2fs "clean filesystem" flag. The triple-sync method is not recommended.
Rebooting
Rebooting means booting the system again. This can be accomplished by first shutting it down completely, turning the power off, and then turning it back on. A simpler way is to ask shutdown to reboot the system instead of merely halting it. This is accomplished by using the -r option to shutdown, for example by giving the command shutdown -r now.
Most Linux systems run shutdown -r now when Ctrl-Alt-Del is pressed on the keyboard, which reboots the system. The action taken on Ctrl-Alt-Del is configurable, however, and it might be better to allow for some delay before the reboot on a multiuser machine. Systems that are physically accessible to anyone might even be configured to do nothing when Ctrl-Alt-Del is pressed.
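A few representative invocations of shutdown (the delay and the message are only examples):
# shutdown -r now
reboots immediately,
# shutdown -h +10 "Going down for maintenance"
halts the system after ten minutes while warning logged-in users, and
# shutdown -c
cancels a pending shutdown.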
2.5 init
init is one of those programs that are absolutely essential to the operation of a Linux system, but that we can still mostly ignore. A good Linux distribution will come with a configuration for init that works for most systems, and on these systems there is nothing we need to do about init. Usually, we only need to worry about init if we hook up serial terminals or dial-in (not dial-out) modems, or if we want to change the default run level.
When the kernel has started itself (has been loaded into memory, has started running, and has
initialized all device drivers and data structures and such), it finishes its own part of the boot
process by starting a user level program, init. Thus, init is always the first process (its process
number is always 1).
The kernel looks for init in a few locations that have been historically used for it, but the proper
location for it (on a Linux system) is /sbin/init. If the kernel can't find init, it tries to run /bin/sh,
and if that also fails, the startup of the system fails.
When init starts, it finishes the boot process by doing a number of administrative tasks, such as
checking filesystems, cleaning up /tmp, starting various services, and starting a getty for each
terminal and virtual console where users should be able to log in.
After the system is properly up, init restarts getty for each terminal after a user has logged out (so
that the next user can log in). init also adopts orphan processes: when a process starts a child
process and dies before its child, the child immediately becomes a child of init. This is important
for various technical reasons, but it is good to know it, since it makes it easier to understand
process lists and process tree graphs. There are a few variants of init available. Most Linux
distributions use sysvinit, which is based on the System V init design. The BSD versions of Unix
have a different init. The primary difference is run levels: System V has them, BSD does not (at
least traditionally). This difference is not essential. We'll look at sysvinit only.
Services that get started at a certain run level are determined by the contents of the various rcN.d directories. Most distributions locate these directories either at /etc/init.d/rcN.d or /etc/rcN.d. (Replace the N with the run-level number.)
In each run-level directory we will find a series of symbolic links pointing to start-up scripts located in /etc/init.d. The names of these links all start with either K or S, followed by a number. If the name of the link starts with an S, the service will be started when you go into that run level. If the name of the link starts with a K, the service will be killed (if running). The number following the K or S indicates the order in which the scripts will be run. Here is a sample of what an /etc/init.d/rc3.d directory may look like.
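A hypothetical listing of this kind (script names and numbers vary by distribution) might be:
$ ls /etc/init.d/rc3.d
K25squid  K80nfs  S10network  S30syslog  S55sshd  S99local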
powerwait
Allows init to shut the system down when the power fails. This assumes the use of a UPS and software that watches the UPS and informs init that the power is off.
ctrl-alt-del
Allows init to reboot the system when the user presses Ctrl-Alt-Del on the console keyboard. Note that the system administrator can configure the reaction to Ctrl-Alt-Del to be something else instead, e.g., to be ignored if the system is in a public location. (Or to start nethack.)
sysinit
Command to be run when the system is booted. This command usually cleans up /tmp,
for example.
msdos
Compatibility with MS-DOS (and OS/2 and Windows NT) FAT filesystems.
umsdos
Extends the msdos filesystem driver under Linux to get long filenames, owners, permissions, links, and device files. This allows a normal msdos filesystem to be used as if it were a Linux one, thus removing the need for a separate partition for Linux.
vfat
This is an extension of the FAT filesystem known as FAT32. It supports larger disk sizes than
FAT. Most MS Windows disks are vfat.
iso9660
The standard CD-ROM filesystem; the popular Rock Ridge extension to the CD-ROM standard
that allows longer file names is supported automatically.
nfs
A networked filesystem that allows sharing a filesystem between many computers to allow easy
access to the files from all of them.
smbfs
A network filesystem that allows sharing a filesystem with an MS Windows computer. It is compatible with the Windows file sharing protocols.
hpfs
This is the OS/2 filesystem.
sysv
SystemV/386, Coherent, and Xenix filesystems.
NTFS
The most advanced Microsoft journaled filesystem providing faster file access and stability over
previous Microsoft filesystems.
The choice of filesystem to use depends on the situation. If compatibility or other reasons make
one of the non-native filesystems necessary, then that one must be used. If one can choose freely,
then it is probably wisest to use ext3, since it has all the features of ext2, and is a journaled
filesystem.
There is also the proc filesystem, usually accessible as the /proc directory, which is not really a filesystem at all, even though it looks like one. The proc filesystem makes it easy to access certain kernel data structures, such as the process list (hence the name). It makes these data structures look like a filesystem, and that filesystem can be manipulated with all the usual file tools. For example, to get a listing of all processes one might simply list the contents of /proc:
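$ ls -l /proc
Each numbered subdirectory in the output corresponds to the process ID of a running process, and the files inside it describe that process.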
Filesystem comparison
Table 2.2 Filesystems properties.
FS Name     Year Introduced   Original OS   Max File Size    Max FS Size     Journaling
FAT16       1983              MS-DOS V2     4GB              16MB to 8GB     N
FAT32       1997              Windows 95    4GB              8GB to 2TB      N
HPFS        1988              OS/2          4GB              2TB             N
NTFS        1993              Windows NT    16EB             16EB            Y
HFS+        1998              Mac OS        8EB              ?               N
UFS2        2002              FreeBSD       512GB to 32PB    1YB             N
ext2        1993              Linux         16GB to 2TB      2TB to 32TB     N
ext3        1999              Linux         16GB to 2TB      2TB to 32TB     Y
ReiserFS3   2001              Linux         8TB              16TB            Y
ReiserFS4   2005              Linux         ?                ?               Y
XFS         1994              IRIX          9EB              9EB             Y
JFS         ?                 AIX           8EB              512TB to 4PB    Y
VxFS        1991              SVR4.0        16EB             ?               Y
ZFS         2004              Solaris 10    1YB              16EB            N
References:
1. Red Hat Linux Networking & System Administration, by Terry Collings, Kurt Wall, 3rd
Edition, 2005, Wiley Publishing
2. Red Hat RHCSA/RHCE 7 Cert Guide: Red Hat Enterprise Linux 7 (EX200 and EX300)
(Certification Guide), 2015 by Sander van Vugt, Pearson IT Certification
3. https://vmware.com
4. https://opensource.com
5. https://tldp.org
UNIT -3
Introduction
Any operating system acts as an interface between the hardware and the user; in other words, it is a collection of programs that provide basic utilities to the user. If you want to use the hardware to do some task, you need to tell the operating system what you want to do, for example to add two numbers or to open some file or piece of software. The question that then comes to mind is: how do we talk to the operating system?
The answer is that you need to give commands to the operating system, and the program that helps us pass these commands to the operating system is the shell. In other words, the shell is the interface through which we can use operating system services. A shell hides all the complex details of the operating system, specifically the kernel, which is the lowest-level or core component of any operating system.
Bash Shell
Bash is a UNIX shell and command language written by Brian Fox in 1989. It offers more features than the sh shell for both kinds of user, i.e., programmers and end users, and most sh scripts can be run by Bash as they are. The important features of Bash include command line editing, an unlimited-size command history, job control, shell functions and aliases, indexed arrays of unlimited size, and integer arithmetic in any base from two to sixty-four. It is widely used as the default login shell of many Linux distributions.
Useful Bash Key Sequences
Sometimes you will enter a command from the Bash command line and nothing, or something totally unexpected, will happen. If that occurs, it is good to know that some key sequences are available to perform basic Bash management tasks. Here is a short list of the most useful of these key sequences:
Ctrl+C Use this key sequence to quit a command that is not responding (or simply is
taking too long to complete). This key sequence works in most scenarios where the
command is active and producing screen output.
Ctrl+D This key sequence is used to send the end-of-file (EOF) signal to a command.
Use this when the command is waiting for more input. It will indicate this by displaying
the secondary prompt >.
Ctrl+R This is the reverse search feature. When used, it will open the reverse-i-search
prompt. This feature helps you locate commands you have used previously. The feature is
especially useful when working with longer commands. Type the first characters of the
command, and you will immediately see the last command you used that started with the
same characters.
Ctrl+Z Some people use Ctrl+Z to stop a command. In fact, it does stop your command,
but it does not terminate it. A command that is interrupted with Ctrl+Z is just halted until
it is started again with the fg command as a foreground job or with the bg command as a
background job.
Ctrl+A This keystroke brings the cursor to the beginning of the current command line.
Ctrl+E This moves the cursor to the end of the current command line.
1. pwd :- Sometimes you may want to know where exactly you are. The Linux pwd command prints the name of the current working directory. When we are "lost" deep in the directory tree, we can always reveal where we are.
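For example (the path shown is only illustrative):
$ pwd
/home/student/projects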
2. ls :- If you are a Linux user, there will not be a single day on which you have not used the ls command. It is a very simple and powerful command used to list the files and directories in the current directory.
Using the -l option (a small letter L) displays a long listing of the contents of the current directory, which contains not only the name of each file but also its owner, group owner, link count and permissions.
In Linux, a file whose name begins with "." (a dot) is a hidden file. To show such files in the ls output, we can use the -a parameter.
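For example:
$ ls -l
$ ls -a
$ ls -al
The first form gives the long listing, the second includes hidden files, and the third combines both options.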
3. mkdir :- This command creates directories, and we can create multiple directories at the same time. Let us say we want to create directories named dir1, dir2 and dir3.
If you want to create sub-directories, you will need the -p parameter. This parameter creates the parent directory first if it cannot find it, and then creates the sub-directory inside it. Let us say we do not have a directory named test and we try to create a sub-directory named sub inside it.
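For example (directory names taken from the text above):
$ mkdir dir1 dir2 dir3
$ mkdir test/sub
mkdir: cannot create directory 'test/sub': No such file or directory
$ mkdir -p test/sub
The second command fails because the parent directory test does not exist; with -p, the parent is created first and then sub is created inside it.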
4. stat :- The stat command gives storage information about a file and the last access information for it. The following demo gives this information for MyFile.txt.
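For example:
$ stat MyFile.txt
This prints the file's size, the blocks it occupies, its inode number and permissions, and its three timestamps (last access, last modification and last status change).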
5. touch :- This command is used for manipulating the timestamps of files, creating an empty file, or creating a file with a particular timestamp, for example to trigger a rebuild of code.
Options such as -a and -m are used to modify only the access time or the modification time of a file.
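For example (the file name and timestamp are only illustrative):
$ touch newfile.txt
$ touch -a newfile.txt
$ touch -m newfile.txt
$ touch -t 202001011200 newfile.txt
The first command creates an empty file or updates the timestamps of an existing one; -a and -m update only the access or the modification time; -t sets a specific timestamp in CCYYMMDDhhmm format.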
6. rm :- This command removes files. You can remove all files with the same extension at once; in the following example we remove all files with the .c extension.
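For example:
$ rm *.c
removes every file ending in .c in the current directory, and
$ rm -i *.c
does the same but asks for confirmation before each removal.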
7. who :- The who command is a tool that prints information about the users who are currently logged in. On most Linux distributions, who is already installed; to use it, just type who at your console.
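Sample output (user names, terminals and times are only illustrative):
$ who
root     tty1    2020-01-01 09:58
student  pts/0   2020-01-01 10:02 (192.168.0.5)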
8. alias :- The alias command lets us give our own name to a command or sequence of commands. We can then type our short name, and the shell will execute the command or sequence of commands for us. In the following example we create an alias named myCommand for the ls command, so both myCommand and ls give us the same output.
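For example:
$ alias myCommand='ls'
$ myCommand
After this, myCommand behaves exactly like ls. Typing alias on its own lists all defined aliases, and unalias myCommand removes the alias again.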
9. cat :- The cat command (short for "concatenate") lists the contents of files to the terminal window. We can specify a single file name, or two file names, in which case cat shows the contents of both files one after the other. To do an actual concatenation of two files we need to use redirection, which we will see in a coming section. Using cat is faster than opening the file in an editor, and there's no chance of accidentally altering the file.
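For example (the file names are only illustrative):
$ cat notes.txt
$ cat part1.txt part2.txt
$ cat part1.txt part2.txt > whole.txt
The first form prints one file, the second prints two files one after the other, and the third performs a real concatenation into a new file by using redirection.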
10. cd :- While working at the Linux console we may need to get into a specific directory, that is, to change our current working directory. The command used for this is cd. In the following example we change from root's directory to the mydir directory.
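For example:
$ cd mydir
$ cd ..
$ cd
The first command enters the mydir directory, the second goes up one level, and cd with no argument returns to $HOME.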
11. echo :- The echo command prints (echoes) a string of text to the terminal window. The command below will print the words "Welcome to Linux" on the terminal window.
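For example:
$ echo "Welcome to Linux"
Welcome to Linux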
12. man :- The most important source of information on the use of Linux commands is man, which is short for the system programmer's "manual". This command will help you find all the information about any command on Linux. In the following example, we use man to find information about the rm command.
# man rm
Now that we have seen different commands and their uses, we are familiar with how to execute commands on Linux. The Linux command set is too big to cover fully, so the following is a list of a few more commands and their uses. If you still want to know more about any command, you have the man command at your disposal.
Command   Use
date      Shows the current date and time. You can also specify the format in which to view the date; for example, using 'date +%D' you can view the date in 'MM/DD/YY' format.
/      Every file and directory starts from the root directory, the topmost directory in the file hierarchy. Only the root user of the system has write rights directly under this directory. Please note that /root is the root user's home directory, which is not the same as /.
/lib   Contains library files that support the binaries located under /bin and /sbin. Library filenames are either ld* or lib*.so.*; for example, ld-2.11.1.so and libncurses.so.5.7.
We studied some of the file and directory handling commands like mkdir, ls and touch in the previous section. Let us now learn some more file and directory handling commands.
more :- While working with Linux you will find many files in text format; configuration files and log files, for instance, are always kept as text files. Such files usually have huge contents, and you cannot view them all at one time on one page, so we need pagination for those files. To achieve this, we can use the Linux more command, which displays a long text file one page at a time. Let us say we want to see our syllabus page by page; then we can use more as follows:
# more README.txt
In the above screen, the "90%" in the left corner indicates that 90% of the file's content is being shown; by pressing Enter you see the next lines.
When you run the more command, it fills your screen with the content of the file you are viewing. You can limit the output to a certain number of lines per page with the -num option.
For example, if you want to limit each page to only 6 lines, you can type
# more -6 README.txt
There are many more options that can be used with the more command; to learn about them, use the man pages.
Following is a list of file and directory related commands and their uses.
Command              Use
ls -al               Displays all information about files/directories, including all hidden files
mv file1 file2       Moves files from one place to another / renames file1 to file2
chmod 777 /test.c    Sets rwx permissions for owner, group and others
cd ..                Goes up one level of the directory tree
cd                   Goes to the $HOME directory
One of the most powerful features of the Linux command line is its piping and redirection options. Piping is used to send the result of one command to another command, while redirection sends the output of a command to a file. That file may be a regular file, but it can also be a device file. Let us look at the following examples.
This example will help you understand how a pipe is used to add functionality to a command. You will execute a command whose output does not fit on the screen, and then see how, by piping this output through less, you can view it screen by screen.
Now execute the commands in the given sequence. Open a shell and use su - to become root, entering the root password. Once you are logged in as root, type the command ps aux and hit Enter; this command provides a list of all the processes that are currently running on your computer. You will notice that the list is too long and does not fit on your computer screen. To see the output in a proper manner, i.e. page by page, a pipe is useful: use ps aux | less. The output of ps is now sent to less, which displays it so that you can browse it page by page.
Another very useful command that is often used in a pipe is grep, which acts as a filter to show just the information that you want to see. Suppose, for example, that you want to check whether a user with the name linda exists in the user database /etc/passwd. One solution is to open the file with a viewer like cat or less and then check whether the string you are seeking is present in the file. But this is time consuming, and human error may creep in while searching the data. There is a much easier and more reliable solution: pipe the contents of the file to the filter grep, which selects all of the lines that contain the string mentioned as its argument. This command would read cat /etc/passwd | grep linda.
You will use the ps aux command to show a list of all processes on your system, but this time you will pipe the output of the command through the grep command, which selects the information you're seeking.
Type ps aux and hit the Enter key to display the list of all the processes that are running on your computer. As we know, it's not easy to find the exact information you need this way, so now use ps aux | grep blue to select only the lines that contain the text blue. You'll now see two lines: one displaying the name of the grep command you used, and another showing the name of the Bluetooth applet. In the next step you make sure you don't see the grep command itself. To do this, the command grep -v grep is added to the pipe; the grep option -v excludes all lines that contain a specific string. The command you'll enter to get this result is ps aux | grep blue | grep -v grep.
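Put together, the sequence of commands from this exercise is:
# ps aux | less
# ps aux | grep blue
# ps aux | grep blue | grep -v grep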
Redirection
Redirection sends the result of a command to a file. While this file can be a text file, it can also be a special file such as a device file. The following example will help you understand how redirection is used to redirect the standard output (STDOUT), normally written to the current console, to a file.
First you'll use the ps aux command without redirection, so the results of the command are written to the terminal window. Then you'll redirect the output of the command to a file. In the final step, you'll display the contents of the file using the less utility.
From a console window, use the command ps aux; you will get the output on the current console. Now execute ps aux > ~/psoutput.txt. This time you will not see the output of the command, because it is redirected to a file created in your home directory, which is denoted by the ~ sign. To show the contents of the file, use the command less ~/psoutput.txt.
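In command form, the whole exercise is:
# ps aux
# ps aux > ~/psoutput.txt
# less ~/psoutput.txt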
Let us see another use of redirection: instead of redirecting the output of commands to files, the opposite is also possible. For example, you may send the content of a text file to a command that will use that content as its input. Let us execute the following commands.
Open the console and type mail root. This opens the command-line mail program to send a message to the user root. When mail prompts for a subject, type Test message as the subject text and press Enter. The mail command then displays a blank line where you can type the message body. In a real message, this is where you would type your text; here, however, you don't need a message body and want to close the input immediately. To do this, type a . (dot) and press Enter. The mail message has now been sent to the user root. Next, you specify the subject as a command-line option using the command mail -s test message. The mail command immediately returns a blank line, where you'll enter a . (dot) again to tell the mail client that you're done. In the third attempt, you enter everything in one command, which is useful if you want to use commands like this in automated shell scripts. Type this command: mail -s test message < . As you can see, when using redirection of STDIN, the dot is fed to the mail command immediately, and you don't have to do anything else to send the message.
When using redirection, you should be aware that you can redirect more than just STDOUT and STDIN. Commands can also produce error output, technically referred to as STDERR. To redirect STDERR, use the 2> construction to indicate that you are interested only in redirecting error output. This means that you won't see errors any more on your current console, which is very helpful if your command produces error messages as well as normal output. The next exercise demonstrates how redirecting STDERR can be useful for commands that produce a lot of error messages.
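A common illustration (find is chosen here only because it prints many "Permission denied" errors when run as an ordinary user):
$ find / -name "*.conf" 2> ~/find-errors.txt
sends only the error output to a file while keeping normal output on the screen, and
$ find / -name "*.conf" 2> /dev/null
throws the error output away entirely.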
If you want to use vi, understand that what sets it and its work-alike editors apart is modality. Most programs and editors have a very simple interface that accepts input and places it at the cursor, but vi has different modes. When you start vi, you will be in "Normal" mode, which is actually a command mode. When you are in Normal mode, whatever you type is considered not to be input but commands that vi will try to execute.
This may sound a little crazy, but it is actually a very powerful way to edit documents. Even if you don't like it, vi is one of the most popular editors in the Linux world, so you need to learn it; on the other hand, if you enjoy working at a command line, you may end up loving vi.
Another important reason why you should become familiar with vi is that some other commands are based on it. For example, to edit quotas for the end users on your server, you would use the command edquota, which is a macro built on vi. If you want to set permissions for the sudo command, use visudo, which, as you can guess, is also a macro built on top of vi.
Let us start learning how to use vi. Type vi and hit the Enter key; if you want to open a specific file from the current working directory, type that file name after vi to open it. After starting the vi editor you are, as discussed before, in command mode and cannot simply start entering text. To enter text, vi offers you a choice between several methods of entering insert mode. Use i to insert text at the current cursor position. Use a to append text after the current position of the cursor. Use o to open a new line under the current position of the cursor, and use O to open a new line above the current position of the cursor. After entering insert mode, you can enter text, and vi will work like any other editor.
To save your work, go back to command mode and use the appropriate commands. The
magic key to go back to command mode from insert mode is Esc.
After returning to command mode, use the appropriate command to save your work. The
most commonly used command is :wq!, which writes the file and quits. To exit vi without
saving changes, make sure you are in command mode (press Esc) and then type :q!.
Using an ! at the end of a command is potentially dangerous: if a file with the same name
already exists, vi will overwrite it without any further warning, because the ! after q or
wq tells vi to perform the task without giving any warning message.
When it comes to editing a file, the three most useful operations are cut, copy, and paste.
Cutting and copying the contents of a file is easy: use the v command, which enters
visual mode. In visual mode, you can select a block of text. After selecting the block, you
can cut, copy, and paste it. Use d to cut (delete) the selected text; this removes the
selection and places it in a buffer in memory. Use y to copy (yank) the selection to the
area reserved for that purpose in your server's memory. Use p to paste the selection under
the current line, or use P to paste it above the current line. This copies the selection you
just placed in the reserved area of your server's memory back into your document, always
at the cursor's current position.
Deleting Text
Another action you will often perform when working with vi is deleting text. There are
many methods that can be used to delete text with vi. The easiest is from insert mode:
just use the Delete and Backspace keys to get rid of any text you like, just as in a word
processor. Some options are available from vi command mode as well. Use x to delete a
single character; this has the same effect as using the Delete key in insert mode. Use dw
to delete the rest of the word, that is, anything from the current position of the cursor to
the end of the word. Use D to delete from the current cursor position up to the end of the
line. Use dd to delete a complete line.
Managing Software
Red Hat Enterprise Linux and many other Linux distributions group their software
together in packages managed by the RPM Package Manager (RPM). The "R" in RPM
originally stood for "Red Hat" (Red Hat Package Manager), but the name was later
changed to the recursive "RPM Package Manager".
Understanding RPM
When Linux was first designed, most of the software used on Linux systems was passed
around in tarballs. A tarball is a single archive file (created using the tar command) that
can contain multiple files that need to be installed. Unfortunately, there were no rules for
what needed to be in the tarball, nor were there any specifications of how the software in
the tarball was to be installed. Working with tarballs was inconvenient for several
reasons, lack of standardization being one of them. When using tarballs, there was no
way to track what was installed, and updating or de-installing a tarball was very difficult.
There were other issues as well: sometimes the tarball contained source files that still
needed to be compiled, sometimes it included a nice installation script, and sometimes it
was just a bunch of files including a README explaining what to do with the software.
The ability to trace software was needed to overcome these disadvantages, and the
Red Hat Package Manager (RPM) is one of the standards designed to overcome the
drawbacks of tarballs. An RPM is an archive file, created with the cpio command.
However, it's no regular archive. With RPM, there is also metadata describing what the
package contains and where those files should be installed. This well-organized design
makes it easy for a Linux administrator to query exactly what is in a package. Another
benefit of RPM is its database, which is kept in the /var/lib/rpm directory. This database
keeps track of the exact version of the files that are installed on the computer. Thus, an
administrator can query individual RPM files to see their contents, query the database to
see where a specific file comes from, or see what exactly is in an RPM.
RPM is efficient at managing software, but there is still one inconvenience that must be
dealt with: software dependency. Many programs used on Linux rely on libraries and
other common components provided by other software packages. That means that before
one package can be installed, some other packages must already be present on the
system. This is known as a software dependency. Note also that a package required as a
dependency may itself depend on other packages before it can be installed. If, during the
installation process, the system does not find the required packages, it shows a "Failed
dependencies" message. Though building on common components provided by other
packages is a good thing, if only for the uniformity of a Linux distribution, in practice it
could lead to real problems.
The Meta Package Handler is the solution for this dependency hell. The Meta Package
Handler in Red Hat is known as yum (Yellowdog Updater, Modified). It works with
repositories, which are the installation sources that are consulted whenever a user wants
to install a software package. The repositories typically contain all software packages of
your distribution. When installing a software package using yum install somepackage,
yum first checks whether there are any dependencies. If there are, yum checks the
repositories to see whether the required software is available there, and if it is, the
administrator sees a list of the software dependencies that yum wants to install. This is
how yum resolves the problem of dependency hell.
If you don't have a Red Hat server installed, you don't have access to the official RHN
(Red Hat Network) repositories, and in that case you will need to create your own
repositories. This procedure is also useful if you want to copy all of your RPMs to a
directory and use that directory as a repository. Let us see how to do this.
To prepare the system for your own repository, copy all of the RPM files from the
Red Hat installation DVD to a directory that you create on disk. Next, install and run the
createrepo package and its dependencies. This package is used to create the metadata that
yum uses while installing software packages. While installing the createrepo package,
you will see that some dependency problems have to be handled as well.
1. Use mkdir /repo to create a directory that you can use as a repository in the root of
your server's file system.
2. Insert the Red Hat installation DVD in the optical drive of your server. Assuming that
you run the server in graphical mode, the DVD will be mounted automatically.
3. Use the cd /media/RHEL[Tab] command to go into the mounted DVD. Next use cd
Packages, which brings you to the directory where all RPMs are by default. Now use cp *
/repo to copy all of them to the /repo directory you just created. Once this is finished, you
don‘t need the DVD anymore.
4. Now use cd /repo to go to the /repo directory. From this directory, type rpm -ivh
createrepo. This doesn't work, and it gives you a "Failed dependencies" error. To install
createrepo, you first need to install the deltarpm and python-deltarpm packages. Use rpm
-ivh deltarpm python-deltarpm to install both of them. Next, use rpm -ivh createrepo
again to install the createrepo package.
5. Once the createrepo package has been installed, use createrepo /repo, which creates the
metadata that allows you to use the /repo directory as a repository. This will take a few
minutes. When this procedure is finished, your repository is ready for use.
Managing Repositories
In the preceding section, you learned how to turn a directory that contains RPM s into a
repository, just marking a directory as a repository is not sufficient. To use your newly
created repository, you need to tell your server where it can find this repository and for
this, you need to create a repository file in the directory /etc/yum.repos.d. You will
probably already have some repository files in this directory. You can see the content of
the rhel-source. repo file that is created by default.
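A minimal repository file for the /repo directory created earlier might look as follows (the identifier and name myrepo are arbitrary examples; save the file, for instance, as /etc/yum.repos.d/myrepo.repo):

[myrepo]
name=my local repository
baseurl=file:///repo
enabled=1
gpgcheck=0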
In such a repository file, you will find all elements that a repository file should contain.
First, between square brackets, there is an identifier for the repository. It does not matter
what you use here; the identifier simply helps you recognize the repository. The same
goes for the name parameter: it gives a name to the repository. The really important
parameter is baseurl. It tells yum where the repository can be found, in URL format. In
some examples an FTP server at Red Hat is specified. Alternatively, you can use URLs
that refer to a website or to a directory that is local on your server's hard drive. In the
latter case, the repository format looks like file:///yourrepository. Some people are
confused about the third slash in the URL, but it really has to be there. The file:// part is
the URI, which tells yum that it has to look at a file, and after that you need a complete
path to the file or directory, which in this case is /yourrepository. Next, the parameter
enabled specifies whether this repository is enabled. A 0 indicates that it is not; if you
really want to use this repository, this parameter should have 1 as its value. The last part
of the repository file specifies whether a GPG check should be performed. Because RPM
packages are installed as root and can contain scripts that are executed as root without
any warning, it is really important that you can be confident that the RPM you are
installing can be trusted. GPG helps guarantee the integrity of the software packages you
are installing. To check whether packages have been tampered with, a GPG check is done
on each package that you install.
To do this check, you need the GPG files installed locally on your computer. Some GPG
files used by Red Hat are installed on your computer by default; their location is
specified using the gpgkey option. The option gpgcheck=1 tells yum that it has to
perform the GPG integrity check. If you're having a hard time configuring the GPG
check, you can change this parameter to gpgcheck=0, which completely disables the
GPG check for RPMs that are found in this repository.
Suppose you now want to install a package called Firefox. Just run the install command,
and yum will automatically find and install all required dependencies for Firefox.
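For example (assuming the package is named firefox in your repositories):

# yum install firefox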
The above command asks for confirmation before installing the package on your system.
If you want to install packages automatically without asking for any confirmation, use
the option -y as follows.
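# yum -y install firefox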
In some cases, if you want to remove a package completely along with all of its
dependencies, run the following command.
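# yum remove firefox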
Let's say you have an outdated version of the MySQL package and you want to update it
to the latest stable version. How will yum help us update it? Run the following command;
it automatically resolves all dependency issues and installs the updates.
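# yum update mysql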
Use the list function to search for a specific package by name. For example, to search for
a package called openssh, use the following command.
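# yum list openssh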
If you do not remember the exact name of the package you are looking for, use the
search function of yum, which lists all available packages that match a given word. For
example, to search for all packages that match a word, execute the following command.
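Any search term will do; firefox is used here only as an example:

# yum search firefox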
Say you would like to know information about a package before installing it. To view
information about any package, use the following command. Let us say we want to know
information about Firefox.
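# yum info firefox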
To list all the available packages in the yum database, execute the following command.
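# yum list | less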
To find out how many of the installed packages on your system have updates available,
execute the first of the following commands; the second installs all available updates,
and the third lists the available package groups:
# yum check-update
# yum update
# yum grouplist
To learn about more options that can be used with yum, consult the man pages.
Querying Software
Once software is installed, it can be quite helpful to query it. Querying is a generic way
to get additional information about software installed on your system. In addition,
querying RPM packages also helps you solve specific problems with packages.
There are many ways to query software packages. Before finding out more about your
currently installed software, be aware that there are two ways to perform a query: you
can query packages that are currently installed on your system, and it's also possible to
query package files that haven't been installed yet. To query an installed package, you
can use one of the rpm -q options. To get information about a package that hasn't yet
been installed, you need to add the -p option. To request a list of files that are in the
samba-common RPM file, for example, you can use the rpm -ql samba-common
command, if this package is installed. In case it hasn't yet been installed, you need to use
rpm -qpl samba-common-[version-number].rpm, where you also need to refer to the
exact location of the samba-common file. If you omit it, you'll get an error message
stating that the samba-common package hasn't yet been installed.
A very common way to query RPM packages is by using rpm -qa. This command
generates a list of all RPM packages that are installed on your server and thus provides a
useful means of finding out whether some software has been installed. For example, if
you want to check whether the media-player package is installed, you can use rpm
-qa | grep media-player. A related option is -V, which shows you whether a package has
been modified from its original version. Using rpm -Va thus allows you to perform a
basic integrity check on the software you have on your server. Every file that is shown in
the output of this command has been modified since it was originally installed. Note that
this command takes a long time to complete. Also note that it is not the best way, nor the
only one, to perform an integrity check on your server; Tripwire offers better and more
advanced options.
Query options for installed packages
To query packages that you haven't installed yet, you need to add the option -p. Finally,
there is one more useful query option: rpm -qf. You can use this option to find out which
package a given file originated from.
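A short illustration; the queried file /etc/passwd is just an example, and the package version shown is illustrative (actual output depends on your installation):

# rpm -qf /etc/passwd
setup-2.8.14-20.el6.noarch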
It may happen that software on your computer gets damaged. In this case, you can
extract files from the packages and copy them back to their original location on the
system. An RPM package consists of two important parts: the metadata, which describes
what the package contains, and a cpio archive, which contains the actual files in the
package. If one file is damaged, first find out from what package the file originated using
the rpm -qf query discussed above. Then use rpm2cpio packagefile.rpm | cpio -idmv to
extract the files from the package and store them in a temporary location, and finally
copy the recovered file back to its appropriate location.
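A minimal sketch of the recovery procedure; the file, package name, and version are examples, so use the output of rpm -qf on your own system:

# rpm -qf /etc/nsswitch.conf
glibc-2.12-1.107.el6.x86_64
# mkdir /tmp/recover; cd /tmp/recover
# rpm2cpio /repo/glibc-2.12-1.107.el6.x86_64.rpm | cpio -idmv
# cp etc/nsswitch.conf /etc/nsswitch.conf

The cpio options used here are -i (extract), -d (create leading directories), -m (preserve modification times), and -v (verbose).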
In this chapter, you learned how to install, query, and manage software on your Linux
server. You also learned how you can use the RPM tool to get extensive information
about the software installed on your server.
UNIT-4
Authorization in Linux is based on users and groups. Each user is associated with a
unique positive integer called the user ID (uid). The uid identifies the user running a
process and is called the process's real uid. Users refer to themselves and other users
through usernames, not numerical uid values. Usernames and their corresponding uids
are stored in /etc/passwd.
For example, during login the user provides a username and password to the login
program. If given a valid username and the correct password, the login program spawns
the user's login shell, which is also specified in /etc/passwd, and makes the shell's uid
equal to that of the user. Child processes inherit the uids of their parents.
The uid 0 is associated with a special user known as root. The root user has special
privileges, and can do almost anything on the system. For example, only the root user
can change a process‘ uid. Consequently, the login program runs as root.
Each user may belong to one or more groups, including a primary or login group, listed
in /etc/passwd, and possibly a number of supplemental groups, listed in /etc/group.
Each process is therefore also associated with a corresponding group ID (gid), and has
a real gid, an effective gid, a saved gid, and a file system gid. Processes are generally
associated with a user‘s login group, not any of the supplemental groups.
Certain security checks allow processes to perform certain operations only if they meet
specific criteria. Historically, UNIX has made this decision very black-and-white:
processes with uid 0 had access, while no others did. Recently, Linux has replaced this
security system with a more general capabilities system. Instead of a simple binary
check, capabilities allow the kernel to base access on much more fine-grained settings.
If you want to add users from the command line, useradd is the command to use. Some
other commands are available as well. Here are the most important commands for
managing the user environment:
useradd - This command is used for adding users to the local authentication system.
Using useradd is simple. In its easiest form, it just takes the name of a user as its
argument, so useradd parag will create a user called parag on your server. The useradd
command has a few options. If an option is not specified, useradd reads its configuration
file /etc/default/useradd, in which it finds some default values. These specify the groups
the user will become a member of, where to create the user's home directory, and more.
# useradd defaults file
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
You can set different properties to manage users. To set up an efficient server, it's
important to know the purpose of these settings. For every user, the group membership,
UID, and shell default properties are set.
Permissions
The standard file permission and security mechanism in Linux is the same as that of
UNIX. Each file is associated with an owning user, an owning group, and a set of
permission bits. The bits describe the ability of the owning user, the owning group, and
everybody else to read, write, and execute the file; there are three bits for each of the
three classes, making nine bits in total. The owners and the permissions are stored in
the file‘s inode.
For regular files, the permissions are rather obvious: they specify the ability to open a
file for reading, open a file for writing, or execute a file. Read and write permissions
are the same for special files as for regular files, although what exactly is read or
written is up to the special file in question. Execute permissions are ignored on special
files. For directories, read permission allows the contents of the directory to be listed,
write permission allows new links to be added inside the directory, and execute
permission allows the directory to be entered and used in a pathname. The following
table lists each of the nine permission bits, their octal values (a popular way of
representing the nine bits), their text values (as ls might show them), and their
corresponding meanings.
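A conventional rendering of these nine bits is as follows (ls shows them in the order user, group, other):

Octal value   Text value   Meaning
400           r--------    Owner may read
200           -w-------    Owner may write
100           --x------    Owner may execute
040           ---r-----    Group may read
020           ----w----    Group may write
010           -----x---    Group may execute
004           ------r--    Everyone else may read
002           -------w-    Everyone else may write
001           --------x    Everyone else may execute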
In addition to historic UNIX permissions, Linux also supports access control lists
(ACLs). ACLs allow for much more detailed and exacting permission and security
controls, at the cost of increased complexity and on-disk storage.
Managing Passwords
To access the system, a user needs a password. By default, login is denied for the users
you create, and passwords are not assigned automatically. Thus, your newly created
users can‘t do anything on the server. To enable these users, assign passwords using the
passwd command.
The passwd command is easy to use. A user can use it to change his password. If that
happens, the passwd command will first prompt for the old password and then for the
new one. Some complexity requirements, however, have to be met. This means, in
essence, that the password cannot be a word that is also in the dictionary. The root user
can change passwords as well. To set the password for a user, root can use passwd
followed by the name of the user whose password needs to be changed. For example,
passwd parag would change the password for user parag. The user root can use the
passwd command in three generic ways. First, you can use it for password
maintenance—to change a password, for example. Second, it can also be used to set
password expiry information, which dictates that a password will expire at a particular
date. Lastly, the passwd command can be used for account maintenance. For example,
an administrator can use passwd to lock an account so that login is disabled
temporarily.
In an environment where many users are using the same server, it is important to
perform some basic account maintenance tasks. These include locking accounts when
they are unneeded for a long time, unlocking an account, and reporting the password
status. An administrator can also force a user to change their password on first use.
To perform these tasks, the passwd command has some options available.
-l Enables an administrator to lock an account. For example, passwd -l rima will lock
the account for user rima.
-n min This rarely used option is applied to set the minimum number of days that a
user must use their password. If this option is not used, a user can change their
password at any time.
-x max This option is used to set the maximum number of days a user can use a
password without changing it.
-w warn When a password is about to expire, you can use this option to send a warning
to the user. The argument of this option specifies the number of days before expiry of
the password that the user will receive the warning.
-i inact Use this option to expire an account automatically when it hasn't been used for
a given period of time. The argument of this option specifies the exact duration of this
period. Apart from the passwd command, you can also use chage to manage account
expiry. Consult its man page for more details on its usage.
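A short example combining these options (the user rima is assumed to exist):

# passwd -n 30 -x 90 -w 3 -i 7 rima

This forces rima to keep a new password for at least 30 days, change it at least every 90 days, warns her 3 days before the password expires, and disables the account if the expired password is not changed within 7 days.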
If you already know how to create a user, modifying an existing user account is no big
deal. The usermod command is used for this purpose. It employs many of the same
options that are used with useradd. For example, use usermod -g 101 linda to set the
new primary group of user linda to a group with the unique ID 101. The usermod
command has many other options. For a complete overview, consult its man page.
Another command that you will occasionally need is userdel. Use this command to
delete accounts from your server. userdel is a very simple command: userdel linda
deletes user linda from your system, for example. Used this way, however, userdel
leaves the home directory of the user untouched. This may be necessary to ensure that
your company still has access to the work of that user.
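If you do want to remove the home directory (and the mail spool) along with the account, add the -r option:

# userdel -r linda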
Configuration Files:
It is the Configuration files (or config files), which are used for user applications,
server processes and operating system settings. For managing the user
environment , a configuration file is also used which sets the default settings. In a
operating system like Unix, many different configuration-file formats does exist.
System- software often uses configuration files stored in the folder /etc, while
user applications often use a "dot file" – a file or directory in the home directory
prefixed with a period. Unix hides such files or directory from casual listing.
/etc/passwd
It is the most important configuration file. /etc/passwd is the primary database where
user information is stored; that is, the most important user properties are kept in this
file.
User name: The user's login name is stored in the first field of /etc/passwd. In modern
Linux distributions, there is no practical limitation on the length of the login name.
Password: Passwords are stored in encrypted format, and on modern systems the
encrypted passwords are kept in the configuration file /etc/shadow.
User ID: Every user has a unique user ID. On Red Hat Enterprise Linux, local user IDs
start at 500, and the highest user ID to be used is 60000 (higher numbers are reserved
for special-purpose accounts).
Group ID: This field reflects the ID of the primary group of which the user is a
member. On Red Hat Enterprise Linux, every user is also a member of a private group
that has the name of the user.
User Information: This field is used to include some additional information
about the user. The field can contain any personal information, such as name of
user‘s department, her phone number, or anything else related to the user. This
makes identifying a user easier for an administrator. This is an optional field.
Home Directory: This field points to the location of the user's home directory.
Login Shell: This is last field in /etc/passwd and used to refer to the program
that starts automatically when a user logs in. Most often, this will be /bin/bash.
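A typical /etc/passwd line shows these seven fields in order (the user linda is an example):

linda:x:501:501:Linda Thomsen,Sales:/home/linda:/bin/bash

The x in the second field indicates that the encrypted password is kept in /etc/shadow.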
/etc/shadow
The file /etc/shadow is also organized into different fields, of which the first two are the
most important. The first field stores the name of the user, and the second field stores
the encrypted password. In the encrypted password field, an ! or an * can also appear.
If an ! is used, login is currently disabled. If an * is used, it is a system account that can
be used to start services but that is not allowed for interactive shell login.
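A sample /etc/shadow line (the hash and day counts are illustrative only):

linda:$6$r1gFs9...:15775:0:99999:7:::

The remaining fields hold, in order: the date of the last password change (in days since January 1, 1970), the minimum and maximum password age, the warning period, the inactivity period, and the account expiration date.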
/etc/login.defs
The configuration file that relates to the user environment is /etc/login.defs. This
file is used completely in the background. The generic settings are defined in this
configuration file which is responsible for all kinds of information relating to the
creation of users. The variables defined in login.defs will specify the default
values used at time of users creation.
login.defs contains variables that are used when users are created.
Creating Groups:
The system-config-users tool was developed to simplify managing users and groups. To
create a new user, click Add User. This opens the Add New User window in which you
can specify all of the properties you want when creating a new user. It is also easy to add
new groups. Just click Add Group, and you‘ll see a window prompting you for all of the
properties that are needed to add a new group.
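The same can be done from the command line with the groupadd command; for example (the group name and GID are illustrative):

# groupadd -g 601 sales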
2. External Authentication:
An external source of authentication can be an LDAP directory server or an Active
Directory service offered by Windows servers. To use these sources, you have to
configure the server with the system-config-authentication tool or authconfig. After
starting the system-config-authentication tool, you‘ll see two tabs. On the Identity &
Authentication tab, you can specify how authentication should happen. By default,
the tool is set to use local accounts only as the user account database. On the
Advanced Options tab, you can enable advanced authentication methods, such as the
use of a fingerprint reader.
Logging in Using an LDAP Directory Server:
Connecting to an Active Directory Server:
The Authentication Process:
Authentication is the process by which a server verifies a user's identity and grants
access to its information or services.
When a user authenticates to your server, in the default configuration the local user
database, as defined in the files /etc/passwd and /etc/shadow, is used.
passwd is the file where the user information (such as username, user ID, group ID,
location of the home directory, login shell, and so on) is stored when a new user is
created. For every user, the shadow file holds important information about the
password, such as the encrypted password itself, the password expiry date, whether or
not the password has to be changed, and the minimum and maximum time between
password changes.
To configure authentication against an external authentication server, the sssd service
is involved, and PAM and /etc/nsswitch.conf are used as well. Instead of using the
graphical tools, you can enable an external authentication server by typing a command
like the following, which enables secure LDAP authentication with Kerberos (all on
one command line!):
authconfig --enableldap --enableldapauth \
--ldapserver=ldap.example.com \
--ldapbasedn=dc=example,dc=com --enabletls \
--ldaploadcacert=http://ldap.example.com/certificate --enablekrb5 \
--krb5kdc=krb.example.com --krb5realm=EXAMPLE.COM --update
1. SSSD:
The sssd service caches authentication information and connects to external identity
stores. A sample [domain/default] section from /etc/sssd/sssd.conf is shown below; the
commented lines illustrate additional options, for instance for Active Directory
integration:
# ldap_user_object_class = user
# ldap_group_object_class = group
# ldap_user_home_directory = unixHomeDirectory
# ldap_user_principal = userPrincipalName
# ldap_account_expire_policy = ad
# ldap_force_upper_case_realm = true
# krb5_server = your.ad.example.com
# krb5_realm = EXAMPLE.COM
[domain/default]
ldap_id_use_start_tls = False
krb5_realm = EXAMPLE.COM
ldap_search_base = dc=example,dc=com
id_provider = ldap
auth_provider = krb5
chpass_provider = krb5
ldap_uri = ldap://127.0.0.1/
krb5_kpasswd = kerberos.example.com
krb5_kdcip = kerberos.example.com
cache_credentials = True
ldap_tls_cacertdir = /etc/openldap/cacerts
2. nsswitch
It stands for Name Service Switch. The /etc/nsswitch.conf file is used to determine
where different services on a computer look for configuration information; the different
sources of information are specified in this file.
Each entry in nsswitch.conf consists of two fields: the first contains a database name,
terminated by a colon, and the second contains the list of possible source mechanisms
for that database. A typical file might look like this:
Specifying sources of information in /etc/nsswitch.conf
passwd: files sssd
ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
services: files
netgroup: files
publickey: nisplus
automount: files
PAM (Pluggable Authentication Modules) makes authentication pluggable: every
modern service that needs to handle authentication passes through PAM. Every service
has its own configuration file in the directory /etc/pam.d. For instance, the login service
uses the configuration file /etc/pam.d/login.
[root@hnl ~]# cat /etc/pam.d/login
#%PAM-1.0
Setgid bit on a directory: When the setgid bit is set on a directory, all files created in
that directory belong to the group that owns the directory. Its location in the permission
string is the same as that of the 'x' for the group, and it is represented by 's' (group
execute permission is also set) or 'S' (group execute permission is not set). In octal
permissions, setgid is represented by '2'. You can set it with the chmod command
(symbolic representation), as in the example below:
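A short illustration (the directory /data/sales and the group sales are assumed to exist; the ls output is indicative):

# chmod g+s /data/sales
# ls -ld /data/sales
drwxrwsr-x. 2 root sales 4096 Jan 10 10:15 /data/sales

Note the s in the group triplet: files created in /data/sales will now belong to the group sales.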
Access ACL
The user and group access permissions for all kinds of file system objects (files
and directories) are determined by means of access ACLs.
Default ACL
Default ACLs can only be applied to directories. They determine the permissions
a file system object inherits from its parent directory when it is created.
ACL entry
Each ACL consists of a set of ACL entries. An ACL entry contains a type, a qualifier
for the user or group to which the entry refers, and a set of permissions. For
some entry types, the qualifier for the group or users is undefined.
Use of ACL :
From Linux man pages, ACLs are used to define more fine-grained discretionary
access rights for files and directories.
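On the command line, ACLs are managed with the setfacl and getfacl commands. A minimal sketch (the directory /data and the group sales are examples):

# setfacl -m g:sales:rx /data
# getfacl /data

The -m option modifies the ACL; here it grants the group sales read and execute permission on /data, in addition to the normal permission bits.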
When a user creates a file or directory under Linux or UNIX, the file gets a default set
of permissions. In most cases the system defaults are fairly open, for file sharing
purposes. For example, if a text file has 666 permissions, it grants read and write
permission to everyone, and a directory with 777 permissions grants read, write, and
execute permission to everyone. When we create a new file or directory, the shell
automatically assigns the default permissions to it.
These defaults are determined by the umask. For each octal digit of the umask, the
following table shows the permissions that new files and directories receive:

Octal value   Applied to files   Applied to directories
0             rw-                rwx
1             rw-                rw-
2             r--                r-x
3             r--                r--
4             -w-                -wx
5             -w-                -w-
6             ---                --x
7             ---                ---
a - append only: this attribute allows a file to be added to, but not to be
removed. It prevents accidental or malicious changes to files that record data,
such as log files.
c - Compressed: it causes the kernel to compress data written to the file
automatically and uncompress it when it’s read back.
i - Immutable: it makes a file immutable. It not only restricts the write access to
the file but also put few more restrictions like, the file can’t be deleted, links to it
can’t be created, and the file can’t be renamed.
j - Data journaling: it ensures that on an Ext3 file system the file is first written to
the journal and only after that to the data blocks on the hard disk.
s - Secure deletion: it makes sure that recovery of a file is not possible after it
has been deleted.
t - No tail-merging: Tail-merging is a process in which small data pieces at a file’s
end that don’t fill a complete block are merged with similar pieces of data from
other files.
u - Undeletable: When a file is deleted, its contents are saved which allows a
utility to be developed that works with that information to salvage deleted files.
A - No atime updates: Linux won’t update the access time stamp when you
access a file.
D - Synchronous directory updates: it makes sure that changes to files are
written to disk immediately and not to cache first.
S - Synchronous updates: the changes on a file are written synchronously on the
disk.
T - top of directory hierarchy: A directory will be deemed to be the top of
directory hierarchies for the purposes of the Orlov block allocator.
The 'chattr' command allows a user to set certain attributes on a file; the 'lsattr'
command displays the attributes of a file.
Among other things, the chattr command is useful for making files immutable, so that
password files and certain system files cannot be erased during software upgrades:
chattr +i test.txt
With '-d', lsattr lists the attributes of a directory itself instead of the files in that
directory. With '-R', it lists attributes recursively, including those of subdirectories.
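A brief session showing both commands (the exact lsattr output and error message may vary between versions):

# chattr +i test.txt
# lsattr test.txt
----i--------e- test.txt
# rm test.txt
rm: cannot remove `test.txt': Operation not permitted
# chattr -i test.txt

The immutable attribute must be removed again before the file can be deleted.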
Unit 5: TCP/IP Networking and Network File System
5.1 Learning Objectives
5.2 Introduction
5.3 TCP/IP Networking:
5.4 Understanding Network Classes
5.5 Setting Up a Network Interface Card (NIC),
5.6 Understanding Subnetting,
5.7 Working with Gateways and Routers,
5.8 Configuring Dynamic Host Configuration Protocol,
5.9 Configuring the Network Using the Network
5.10 The Network File System:
5.11 NFS Overview,
5.12 Planning an NFS Installation,
5.13 Configuring an NFS Server,
5.14 Configuring an NFS Client,
5.15 Using Automount Services,
5.16 Examining NFS Security
5.17 Self-Test (Multiple Choice Questions)
5.18 Summary
5.19 Exercise (short answer questions)
5.20 References
5.1 Learning Objectives
Analyze the requirements for a given organizational structure to select the most
appropriate class address.
Configure DHCP and NFS.
5.2 Introduction
TCP/IP stands for Transmission Control Protocol/Internet Protocol, and refers to a family of
protocols used for computer communications. TCP and IP are just two of the separate protocols
contained in the group of protocols developed by the Department of Defense.
Class A Address:
The first bit of the first octet is always set to 0 (zero). Thus the first octet ranges from 1 – 127, i.e.
Class A addresses only include IP starting from 1.x.x.x to 126.x.x.x only. The IP range 127.x.x.x
is reserved for loopback IP addresses.
The default subnet mask for Class A IP address is 255.0.0.0.
Class B Address
An IP address which belongs to class B has the first two bits of the first octet set to 10, i.e.
Class B IP addresses range from 128.0.x.x to 191.255.x.x. The default subnet mask for Class B is
255.255.0.0.
Class C Address
The first octet of Class C IP address has its first 3 bits set to 110, that is:
Class C IP addresses range from 192.0.0.x to 223.255.255.x. The default subnet mask for Class C
is 255.255.255.0.
Class D Address
Very first four bits of the first octet in Class D IP addresses are set to 1110, giving a range of:
Class D has IP address range from 224.0.0.0 to 239.255.255.255. Class D is reserved for
Multicasting. In multicasting data is not destined for a particular host, that is why there is no need
to extract host address from the IP address, and Class D does not have any subnet mask.
Class E Address
This IP class is reserved for experimental purposes only, for R&D or study. IP addresses in this
class range from 240.0.0.0 to 255.255.255.254. Like Class D, this class is not equipped with
any subnet mask.
There are a few ways to assign IP addresses to the devices depending on the purpose of the
network. If the network is internal, an intranet, not connected to an outside network, any class A,
B, or C network number can be used. The only requirement is choosing a class that allows for the
number of hosts to be connected. Although this is possible, in the real world this approach would
not allow for connecting to the Internet.
To configure a network card use the same command, ifconfig, but this time use the name ‗eth0‘
for an Ethernet device. You also need to know the IP address, the netmask, and the broadcast
addresses. These numbers vary depending on the type of network being built.
In this example, you configure an Ethernet interface for an internal network. You need to issue
the command:
ifconfig eth0 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255
Another way to configure the network is to use the GUI. In the settings you will find the network
configuration; Figure 2 shows the network configuration.
The use of a CIDR-notated address is the same as for a Class address. Class addresses can easily
be written in CIDR notation (Class A = /8, Class B = /16, and Class C = /24).
It is currently almost impossible for you, as an individual or company, to be allocated your own
IP address blocks. You will be told simply to get them from your ISP. The reason for this is the
ever-growing size of the Internet routing table. Just five years ago, there were less than 5,000
network routes in the entire Internet. Today, there are over 100,000. Using CIDR, the biggest
ISPs are allocated large chunks of address space, usually with a subnet mask of /19 or even
smaller. The ISP‘s customers, often other, smaller ISPs, are then allocated networks from the big
ISP‘s pool. That way, all the big ISP‘s customers, and their customers, are accessible via one
network route on the Internet.
CIDR will probably keep the Internet happily in IP addresses for the next few years at least. After
that, IPv6, with 128-bit addresses, will be needed. Under IPv6, even careless address allocation
would comfortably enable a billion unique IP addresses for every person on earth! The complete
details of CIDR are documented in RFC 1519, which was released in September of 1993.
cat /proc/sys/net/ipv4/ip_forward
If forwarding is not enabled, this returns the number 0; if it is enabled, it returns the number 1.
Type the following command to enable IP forwarding if it is not already enabled:
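# echo 1 > /proc/sys/net/ipv4/ip_forward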
Each computer on the subnet has to know the IP address of the interface that is its gateway to the
other network. The computers on the first subnet, the 192.168.1.0 network, would have the
gateway 192.168.1.1. (Remember that you used the first IP address on this network for the
gateway computer.) The computers on the second subnet, 192.168.1.128, would use 192.168.1.129
as the gateway address. You can add this information using the route command as follows:
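On a host in the first subnet, for example:

# route add default gw 192.168.1.1

Hosts on the second subnet would use 192.168.1.129 instead.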
default-lease-time 36000; (The amount of time in seconds that the host can keep the IP address.)
max-lease-time 100000; (The maximum time the host can keep the IP address.)
#domain name
option domain-name "tactechnology.com"; (The domain of the DHCP server.)
#nameserver
option domain-name-servers 192.168.1.1; (The IP address of the DNS servers.)
#gateway/routers, can pass more than one:
option routers 1.2.3.4,1.2.3.5;
option routers 192.168.1.1; (IP address of routers.)
#netmask
option subnet-mask 255.255.255.0; (The subnet mask of the network.)
#broadcast address
option broadcast-address 192.168.1.255; (The broadcast address of the network.)
#specify the subnet the addresses get assigned in
subnet 192.168.1.0 netmask 255.255.255.0 { (The subnet that uses the DHCP server.)
#define which addresses can be used/assigned
range 192.168.1.1 192.168.1.126; (The range of IP addresses that can be used.)
}
To start the server, run the command dhcpd. To ensure that the dhcpd program runs whenever the
system is booted, you should put the command in one of your init scripts.
First you need to check if the dhcp client is installed on your system. You can check for it by
issuing the following command:
which dhcpcd
If the client is on your system, you will see the location of the file. If the file is not installed, find
it on Red Hat Installation CD 1. Use the rpm command to install the client. After you install the
client software, start it by running the command dhcpcd. Each of your clients will now receive its
IP address, subnet mask, gateway, and broadcast address from your dhcp server. Since you want
this program to run every time the computer boots, you need to place it in the /etc/rc.local file.
Now whenever the system starts, this daemon will be loaded.
5.9 Configuring the Network Using the Network
Now that you know how to work with services in Red Hat Enterprise Linux, it‘s time to get
familiar with Network Manager. The easiest way to configure the network is by clicking the
Network Manager icon on the graphical desktop of your server. In this section, you‘ll learn how
to set network parameters using the graphical tool. You can find the Network Manager icon in the
upper-right corner of the graphical desktop. If you click it, it provides an overview of all currently
available network connections, including Wi-Fi networks to which your server is not connected.
This interface is convenient if you‘re using Linux on a laptop that roams from one Wi-Fi network
to another, but it‘s not as useful for servers. If you right-click the Network Manager icon, you can
select Edit Connections to set the properties for your server‘s network connections. You‘ll find all
of the wired network connections on the Wired tab. The name of the connection you‘re using
depends on the physical location of the device. Whereas in older versions of RHEL names like
eth0 and eth1 were used, Red Hat Enterprise Linux 6.2 and newer uses device-dependent names
like p6p1. On servers with many network cards, it can be hard to find the specific device you
need. However, if your server has only one network card installed, it is not that hard. Just select
the network card that is listed on the Wired tab (as shown in below figure).
When you click on the GNOME Shell network connection icon, you are presented with:
a list of categorized networks you are currently connected to (such as Wired and
Wi-Fi);
a list of all Available Networks that Network Manager has detected;
options for connecting to any configured Virtual Private Networks (VPNs); and,
an option for selecting the Network Settings menu entry.
If you are connected to a network, this is indicated by a black bullet on the left of the connection
name.
Click Network Settings. The Network settings tool appears.
5.10 The Network File System:
A Network File System (NFS) allows remote hosts to mount file systems over a network and
interact with those file systems as though they are mounted locally. This enables system
administrators to consolidate resources onto centralized servers on the network.
Depending on your firewall settings, you may need to configure the NFS daemon processes to use
specific networking ports. The NFS server settings allow you to specify the ports for each
process instead of using the random ports assigned by the portmapper. You can set the NFS
server settings by clicking the Server Settings button. The figure below illustrates the NFS
Server Settings window.
Figure: NFS Server Settings
Exporting or Sharing NFS File Systems:
Sharing or serving files from an NFS server is known as exporting the directories. The NFS
Server Configuration Tool can be used to configure a system as an NFS server.
To add an NFS share, click the Add button. The dialog box shown in Figure 18.3, ―Add Share‖
appears.
The Basic tab requires the following information:
Directory — Specify the directory to share, such as /tmp.
Host(s) — Specify the host(s) with which to share the directory. Refer to Section
18.6.3, ―Hostname Formats‖ for an explanation of possible formats.
Basic permissions — Specify whether the directory should have read-only or
read/write permissions.
An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab
file. At boot time the /etc/fstab file is referenced by the netfs service, so lines referencing NFS
shares have the same effect as manually typing the mount command during the boot process.
Each line in this file must state the hostname of the NFS server, the directory on the server being
exported, and the directory on the local machine where the NFS share is to be mounted. To
modify the /etc/fstab file you must be root.
The most commonly used and useful NFS-specific mount options are rsize=8192, wsize=8192,
hard, intr, and nolock. Increasing the default size of the NFS read and write buffers improves
NFS's performance. The suggested value is 8192 bytes, but you might find that you get better
performance with larger or smaller values. The nolock option can also improve performance
because it eliminates the overhead of file locking calls, but not all servers support file locking
over NFS.
The NFS client requires the portmap daemon to process and route RPC calls and returns from the
server to the appropriate port and programs. It is important that the portmapper is running on the
client system; start it using the portmap initialization script, /etc/rc.d/init.d/portmap. If you want
to use NFS file locking, the NFS server and any NFS clients need to run statd and lockd; for this,
use the initialization script /etc/rc.d/init.d/nfslock. After configuring the mount table and starting
the requisite daemons, the last step is to mount the file systems. To mount /home from the server
configured at the end of the previous section, execute the following command as root:
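Assuming the server is named luther, as in the example client below:

# mount luther:/home /home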
Apart from performance degradation, you might encounter other problems with NFS that require
resolution, such as a client attempting to access a file to which it does not have access, or
requests timing out. Let us look at these issues in detail. The NFS setattr call fails when an NFS
client attempts to access a file to which it does not have access. The resulting message is
harmless, but many such log entries might indicate a systematic attempt to compromise the
system. The most common message, the rpc.lockd startup failure message, occurs when older
NFS startup scripts try to start newer versions of rpc.lockd manually; to avoid this failure
message, edit the startup scripts and remove statements that attempt to start lockd manually.
If you transfer very large files via NFS, and NFS consumes all of the available CPU cycles,
causing the server to respond at a glacial pace, you are probably running an older version of the
kernel that has problems with the fsync call, which accumulates disk syncs before flushing the
buffers. This issue is reportedly fixed in the 2.4 kernel series, so upgrading the kernel may solve
the problem.
Example NFS client
The NFS server configured in the previous section exported /home and /usr/local, so I will
demonstrate configuring an NFS client that mounts those directories.
Clients that want to use both exports need to have the following entries in
/etc/fstab:
luther:/usr/local /usr/local nfs
rsize=8192,wsize=8192,hard,intr,nolock 0 0
luther:/home /home nfs
rsize=8192,wsize=8192,hard,intr,nolock 0 0
Start the portmapper using the following command:
# /etc/rc.d/init.d/portmap start
Starting portmapper: [ OK ]
Mount the exports using one of the following commands:
# mount -a -t nfs
or
# mount /home
# mount /usr/local
The first command mounts all (-a) directories of type nfs (-t nfs). The other two commands
mount only the file systems /home and /usr/local. Verify that the mounts completed successfully by
attempting to access files on each file system. If everything works as designed, you are ready to
go. Otherwise, read the section titled ―Troubleshooting NFS‖ for tips and suggestions for solving
common NFS problems.
5.15 Using Automount Services
In some cases, putting your NFS mounts in /etc/fstab works just fine. In other cases, it
doesn‘t work well, and you‘ll need a better way to mount NFS shares. An example of such a
scenario is a network where users are using OpenLDAP to authenticate, after which they get
access to their home directories. To make sure users can log in on different workstations and still
get access to their home directory, you can‘t just put an NFS share in /etc/fstab for each user.
Automount is a service that mounts NFS shares automatically. To configure it, you‘ll need to take
care of three different steps:
Start the autofs service.
Edit the /etc/auto.master file.
Create an indirect file to specify what you want Automount to do.
The central configuration file in Automount is /etc/auto.master.
Sample /etc/auto.master
[root@hnl ~]# cat /etc/auto.master
#
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5)
#
/misc /etc/auto.misc
#
# NOTE: mounts done from a hosts map will be mounted with the
# "nosuid" and "nodev" options unless the "suid" and "dev"
# options are explicitly given.
#
/net -hosts
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
[root@hnl ~]#
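As a sketch of the third step, an indirect map for home directories might look as follows; the line added to /etc/auto.master, the map file name /etc/auto.home, and the server name luther are assumptions based on the earlier examples:

In /etc/auto.master:
/home /etc/auto.home

In /etc/auto.home:
*    -rw    luther:/home/&

With this in place, autofs automatically mounts luther:/home/linda on /home/linda the first time user linda accesses it.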
5.18 Summary
In this chapter we discussed various topics related to TCP/IP networking, such as network
classes, subnetting, gateways, routers, and configuring the Dynamic Host Configuration Protocol.
In the second part of the chapter we introduced the Network File System: the installation process
and the configuration of the NFS server and client. In continuation with this, we examined NFS
security.
5.20 References
1. Red Hat® Linux® Networking and System Administration, Terry Collings and
Kurt Wall
2. Red Hat ® Enterprise Linux® 6 Administration, Sander van Vugt
3. https://access.redhat.com/documentation/en-us/
Unit 6: Configuring DNS and DHCP
6.1 Learning Objectives
Configure DNS
Configure DHCP
6.2 Introduction
DNS associates hostnames with their respective IP addresses, so that when users want to connect
to other machines on the network, they can refer to them by name, without having to remember
IP addresses.
Use of DNS and FQDNs also has advantages for system administrators, allowing the flexibility to
change the IP address for a host without affecting name-based queries to the machine.
Conversely, administrators can shuffle which machines handle a name-based query.
DNS is normally implemented using centralized servers that are authoritative for some domains
and refer to other DNS servers for other domains.
When a client host requests information from a nameserver, it usually connects to port 53. The
nameserver then attempts to resolve the FQDN based on its resolver library, which may contain
authoritative information about the host requested or cached data from an earlier query. If the
nameserver does not already have the answer in its resolver library, it queries other nameservers,
called root nameservers, to determine which nameservers are authoritative for the FQDN in
question. Then, with that information, it queries the authoritative nameservers to determine the IP
address of the requested host. If a reverse lookup is performed, the same procedure is used,
except that the query is made with an unknown IP address rather than a name.
Domain Names
The domain name represents an entity's position within the structure of the DNS hierarchy. A
domain name is simply a list of all domains in the path from the local domain to the root. Each
label in the domain name is delimited by a period. For example, the domain name for the
Providence domain within Company A is providence.companya.com, as shown in Domains and
Subdomains and the list below.
Note that the domain names in the figure end in a period, representing the root domain. Domain
names that end in a period for root are called fully qualified domain names (FQDNs). Each
computer that uses DNS is given a DNS hostname that represents the computer's position within
the DNS hierarchy. Therefore, the hostname for host1 in Figure 6.4.2 is
host1.washington.companya.com.
Path              Description
/etc/named.conf   The main configuration file.
/etc/named/       An auxiliary directory for configuration files that are included in the main configuration file.
Table 6.8.1: "The named Service Configuration Files".
The configuration file consists of a collection of statements with nested options surrounded by
opening and closing curly brackets ({ and }). Note that when editing the file, you have to be
careful not to make any syntax error, otherwise the named service will not start. A typical
/etc/named.conf file is organized as follows:
statement-1 ["statement-1-name"] [statement-1-class] {
option-1;
option-2;
option-N;
};
statement-2 ["statement-2-name"] [statement-2-class] {
option-1;
option-2;
option-N;
};
statement-N ["statement-N-name"] [statement-N-class] {
option-1;
option-2;
option-N;
};
If you have installed the bind-chroot package, the BIND service will run in the chroot
environment. In that case, the initialization script will mount the above configuration files using
the mount --bind command, so that you can manage the configuration outside this environment.
There is no need to copy anything into the /var/named/chroot/ directory because it is mounted
automatically. This simplifies maintenance since you do not need to take any special care of
BIND configuration files if it is run in a chroot environment. You can organize everything as you
would with BIND not running in a chroot environment.
The following directories are automatically mounted into the /var/named/chroot/ directory if the
corresponding mount point directories underneath /var/named/chroot/ are empty:
/etc/named
/etc/pki/dnssec-keys
/run/named
/var/named
/usr/lib64/bind or /usr/lib/bind (architecture dependent).
The following files are also mounted if the target file does not exist in /var/named/chroot/:
/etc/named.conf
/etc/rndc.conf
/etc/rndc.key
/etc/named.rfc1912.zones
/etc/named.dnssec.keys
/etc/named.iscdlv.key
/etc/named.root.key
Configuring a cache-only name server isn‘t difficult. You just need to install the BIND service
and make sure that it allows incoming traffic. For cache-only name servers, it also makes sense to
configure a forwarder.
1. Open a terminal, log in as root, and run yum -y install bind-chroot on the host
computer to install the bind package.
2. With an editor, open the configuration file /etc/named.conf. Listing 14.1 shows a
portion of this configuration file. You need to change some parameters in the
configuration file to have BIND offer its services to external hosts.
3. Change the file to include the following parameters: listen-on port 53 { any; }; and
allow-query { any; };. This opens your DNS server to accept queries on any
network interface from any client.
4. Still in /etc/named.conf, change the parameter dnssec-validation yes; to dnssec-
validation no;.
5. Finally, insert the line forwarders { x.x.x.x; }; in the options section of the same
configuration file, where x.x.x.x is the IP address of the DNS server you normally use
for your Internet connection. This ensures that the DNS server of your Internet
provider is used for DNS recursion and that requests are not sent directly to the name
servers of the root domain.
6. Use the service named restart command to restart the DNS server.
7. From the RHEL host, use dig redhat.com. You should get an answer, which is sent
by your DNS server. You can see this in the SERVER line in the dig response.
Congratulations, your cache-only name server is operational!
Example named.conf:
[root@rhev ~]# cat /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
listen-on port 53 { any; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
forwarders { 8.8.8.8; };
recursion yes;
dnssec-enable yes;
dnssec-validation no;
dnssec-lookaside auto;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
Once you have allowed updates on the primary server, you need to configure the slave. This
means that in the /etc/named.rfc1912.zones file on the Red Hat server, which you're going to use
as DNS slave, you also need to define the zone. The example configuration in Listing 6.9.2 will
do that for you.
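A minimal sketch of such a slave zone definition (the zone name example.com and the master server address 192.168.1.70 are placeholder values, not values from Listing 6.9.2):
zone "example.com" IN {
    type slave;                      // this server holds a secondary copy of the zone
    masters { 192.168.1.70; };       // the primary server to transfer the zone from
    file "slaves/example.com.zone";  // local copy of the zone data
};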
DHCP can be implemented on networks ranging in size from home networks to large campus
networks and regional Internet service provider networks. A router or a residential gateway can
be enabled to act as a DHCP server.
The DHCP operates based on the client–server model. When a computer or other device connects
to a network, the DHCP client software sends a DHCP broadcast query requesting the necessary
information. Any DHCP server on the network may service the request. The DHCP server
manages a pool of IP addresses and information about client configuration parameters such as
default gateway, domain name, the name servers, and time servers. On receiving a DHCP request,
the DHCP server may respond with specific information for each client, as previously configured
by an administrator, or with a specific address and any other information valid for the entire
network and for the time period for which the allocation (lease) is valid. A DHCP client typically
queries for this information immediately after booting, and periodically thereafter before the
expiration of the information. When a DHCP client refreshes an assignment, it initially requests
the same parameter values, but the DHCP server may assign a new address based on the
assignment policies set by administrators.
DHCP clients obtain a DHCP lease for an IP address, a subnet mask, and various DHCP options
from DHCP servers in a four-step process:
DHCPDISCOVER:
The client broadcasts a request for a DHCP server.
DHCPOFFER:
DHCP servers on the network offer an address to the client.
DHCPREQUEST:
The client broadcasts a request to lease an address from one of the offering DHCP servers.
DHCPACK:
The DHCP server that the client responds to acknowledges the client, assigns it any configured
DHCP options, and updates its DHCP database. The client then initializes and binds its TCP/IP
protocol stack and can begin network communication.
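If you want to watch this four-step exchange on the wire, tcpdump can capture the two DHCP ports (this example is illustrative and not part of the original text; eth0 is a placeholder interface):
# tcpdump -n -i eth0 port 67 or port 68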
Here are the most relevant parameters from the dhcpd.conf file and a short explanation of each:
option domain-name: Use this to set the DNS domain name for the DHCP clients.
option domain-name-servers: This specifies the DNS name servers that should be used.
default-lease-time: This is the default time in seconds that a client can use the IP address that it
has received from the DHCP server.
max-lease-time: This is the maximum time that a client can keep on using its assigned IP address.
If within the max-lease-time timeout it hasn't been able to contact the DHCP server for renewal,
the IP address will expire, and the client can't use it anymore.
log-facility: This specifies which syslog facility the DHCP server uses.
subnet: This is the essence of the work of a DHCP server. The subnet definition specifies the
network on which the DHCP server should assign IP addresses. A DHCP server can serve
multiple subnets, but it is common for the DHCP server to be directly connected to the subnet it
serves.
range: This is the range of IP addresses within the subnet that the DHCP server can assign to
clients.
option routers: This is the router that should be set as the default gateway.
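Put together, a minimal dhcpd.conf fragment using these parameters might look like the following sketch (every address, name, and lease time is an illustrative placeholder):
option domain-name "mydomain.com";        # DNS domain handed to clients
option domain-name-servers 192.168.1.1;   # DNS servers handed to clients
default-lease-time 600;                   # default lease, in seconds
max-lease-time 7200;                      # upper limit on a lease
log-facility local7;                      # syslog facility to use

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;    # addresses the server may assign
    option routers 192.168.1.1;           # default gateway for clients
}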
As you see from the sample DHCP configuration file, there are many options that an
administrator can use to specify different kinds of information that should be handed out. Some
options can be set globally and also in the subnet, while other options are set in specific subnets.
As an administrator, you need to determine where you want to set specific options.
Apart from the subnet declarations that you make on the DHCP server, you can also define the
configuration for specific hosts. In the example file in Listing 6.13.1, you can see this in the host
declarations for host passacaglia and host fantasia. Host declarations will work based on the
specification of the hardware Ethernet address of the host; this is the MAC address of the network
card where the DHCP request comes in.
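Such a host declaration has this general shape (the MAC address and fixed address below are hypothetical values, not those of Listing 6.13.1):
host fantasia {
    hardware ethernet 08:00:07:26:c0:a5;   # the client's MAC address
    fixed-address 192.168.1.50;            # always hand this client the same IP
}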
At the end of the example configuration file, you can also see that a class is defined, as well as a
shared network in which different subnets and pools are used. The idea is that you can use the
class to identify a specific host. This works on the basis of the vendor class identifier, which is
capable of identifying the type of host that sends a DHCP request. Once a specific kind of host is
identified, you can match it to a class and, based on class membership, assign specific
configuration that makes sense for that class type only.
At the end of the example dhcpd.conf configuration file, you can see that, on a shared network,
two different subnets are declared, where all members of the class are assigned to one of the
subnets and all other hosts are assigned to the other subnet.
To configure the host itself as a DHCP client, the /etc/sysconfig/network file should contain the
following line:
NETWORKING=yes
The NETWORKING variable must be set to yes if you want networking to start at boot time.
The /etc/sysconfig/network-scripts/ifcfg-eth0 file should contain the following lines:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
6.17 References
1. Red Hat® Linux® Networking and System Administration, Terry Collings and Kurt
Wall
2. Red Hat ® Enterprise Linux® 6 Administration, Sander van Vugt
3. https://access.redhat.com/documentation/en-us/
Unit 7
Connecting to Microsoft Networks and Setting up a Mail
Server
Samba
Samba is the standard Windows interoperability suite of programs for Linux and Unix.
Samba is Free Software licensed under the GNU General Public License, the Samba
project is a member of the Software Freedom Conservancy.
Since 1992, Samba has provided secure, stable and fast file and print services for all
clients using the SMB/CIFS protocol, such as all versions of DOS and Windows, OS/2,
Linux and many others.
Samba is an important component to seamlessly integrate Linux/Unix Servers and
Desktops into Active Directory environments. It can function either as a domain controller
or as a regular domain member.
Samba is a software package that gives network administrators flexibility and freedom in
terms of setup, configuration, and choice of systems and equipment. Because of all that it
offers, Samba has grown in popularity, and continues to do so, every year since its release
in 1992.
A lot of emphasis has been placed on peaceful coexistence between Unix and Windows.
The Usenix Association has even created an annual conference (LISA/NT--July 14-17,
1999) around this theme. Unfortunately, the two systems come from very different
cultures and they have difficulty getting along without mediation. ...and that, of course, is
Samba's job. Samba runs on Unix platforms, but speaks to Windows clients like a native.
It allows a Unix system to move into a Windows "Network Neighborhood" without
causing a stir. Windows users can happily access file and print services without knowing
or caring that those services are being offered by a Unix host.
All of this is managed through a protocol suite which is currently known as the "Common
Internet File System", or CIFS. This name was introduced by Microsoft, and provides
some insight into their hopes for the future. At the heart of CIFS is the latest incarnation
of the Server Message Block (SMB) protocol, which has a long and tedious history.
Samba is an open source CIFS implementation, and is available for free from the
http://samba.org/ mirror sites.
Samba and Windows are not the only ones to provide CIFS networking. OS/2 supports
SMB file and print sharing, and there are commercial CIFS products for Macintosh and
other platforms (including several others for Unix). Samba has been ported to a variety of
non-Unix operating systems, including VMS, AmigaOS, & NetWare. CIFS is also
supported on dedicated file server platforms from a variety of vendors. In other words,
this stuff is all over the place.
History
It started a long time ago, in the early days of the PC, when IBM and Sytec co-developed
a simple networking system designed for building small LANs. The system included
something called NetBIOS, or Network Basic Input Output System. NetBIOS was a
chunk of software that was loaded into memory to provide an interface between programs
and the network hardware. It included an addressing scheme that used 16-byte names to
identify workstations and network-enabled applications. Next, Microsoft added features
to DOS that allowed disk I/O to be redirected to the NetBIOS interface, which made disk
space sharable over the LAN. The file-sharing protocol that they used eventually became
known as SMB, and now CIFS.
Lots of other software was also written to use the NetBIOS API (Application
Programmer's Interface), which meant that it would never, ever, ever go away. Instead,
the workings beneath the API were cleverly gutted and replaced. NetBEUI (NetBIOS
Enhanced User Interface), introduced by IBM, provided a mechanism for passing
NetBIOS packets over Token Ring and Ethernet. Others developed NetBIOS LAN
emulation over higher-level protocols including DECnet, IPX/SPX and, of course,
TCP/IP.
NetBIOS and TCP/IP made an interesting team. The latter could be routed between
interconnected networks (internetworks), but NetBIOS was designed for isolated LANs.
The trick was to map the 16-byte NetBIOS names to IP addresses so that messages could
actually find their way through a routed IP network. A mechanism for doing just that was
described in the Internet RFC1001 and RFC1002 documents. As Windows evolved,
Microsoft added two additional pieces to the SMB package. These were service
announcement, which is called "browsing", and a central authentication and authorization
service known as Windows NT Domain Control.
More Systems
Andrew Tridgell, who is Australian, had a bit of a problem. He needed to mount disk
space from a Unix server on his DOS PC. Actually, this wasn't the problem at all because
he had an NFS (Network File System) client for DOS and it worked just fine.
Unfortunately, he also had an application that required the NetBIOS interface. Anyone
who has ever tried to run multiple protocols under DOS knows that it can be...er...quirky.
So Andrew chose the obvious solution. He wrote a packet sniffer, reverse engineered the
SMB protocol, and implemented it on the Unix box. Thus, he made the Unix system
appear to be a PC file server, which allowed him to mount shared filesystems from the
Unix server while concurrently running NetBIOS applications. Andrew published his
code in early 1992. There was a quick, but short succession of bug-fix releases, and then
he put the project aside. Occasionally he would get E'mail about it, but he otherwise
ignored it. Then one day, almost two years later, he decided to link his wife's Windows
PC with his own Linux system. Lacking any better options, he used his own server code.
He was actually surprised when it worked.
Through his E'mail contacts, Andrew discovered that NetBIOS and SMB were actually
(though nominally) documented. With this new information at his fingertips he set to
work again, but soon ran into another problem. He was contacted by a company claiming
trademark on the name that he had chosen for his server software. Rather than cause a
fuss, Andrew did a quick scan against a spell-checker dictionary, looking for words
containing the letters "smb". "Samba" was in the list. Curiously, that same word is not in
the dictionary file that he uses today. (Perhaps they know it's been taken.)
The Samba project has grown mightily since then. Andrew now has a whole team of
programmers, scattered around the world, to help with Samba development. When a new
release is announced, thousands of copies are downloaded within days. Commercial
systems vendors, including Silicon Graphics, bundle Samba with their products. There
are even Samba T-shirts available. Perhaps one of the best measures of the success of
Samba is that it was listed in the "Halloween Documents", a pair of internal Microsoft
memos that were leaked to the Open Source community. These memos list Open Source
products which Microsoft considers to be competitive threats. The absolutely best
measure of success, though, is that Andrew can still share the printer with his wife.
Samba consists of two key programs, plus a bunch of other stuff that we'll get to later.
The two key programs are smbd and nmbd. Their job is to implement the four basic
modern-day CIFS services, which are:
File & print services
Authentication and Authorization
Name resolution
Service announcement (browsing)
File and print services are, of course, the cornerstone of the CIFS suite. These are
provided by smbd, the SMB Daemon. Smbd also handles "share mode" and "user mode"
authentication and authorization. That is, you can protect shared file and print services by
requiring passwords. In share mode, the simplest and least recommended scheme, a
password can be assigned to a shared directory or printer (simply called a "share"). This
single password is then given to everyone who is allowed to use the share. With user
mode authentication, each user has their own username and password and the System
Administrator can grant or deny access on an individual basis.
The NT Domain system deserves special mention because, until the release of Samba
version 2, only Microsoft owned code to implement the NT Domain authentication
protocols. With version 2, Samba introduced the first non-Microsoft-derived NT Domain
authentication code. The eventual goal, of course, is to completely mimic a Windows NT
Domain Controller.
The other two CIFS pieces, name resolution and browsing, are handled by nmbd. These
two services basically involve the management and distribution of lists of NetBIOS
names.
Name resolution takes two forms: broadcast and point-to-point. A machine may use either
or both of these methods, depending upon its configuration. Broadcast resolution is the
closest to the original NetBIOS mechanism. Basically, a client looking for a service
named Trillian will call out "Yo! Trillian! Where are you?", and wait for the machine with
that name to answer with an IP address. This can generate a bit of broadcast traffic (a lot
of shouting in the streets), but it is restricted to the local LAN so it doesn't cause too
much trouble.
The other type of name resolution involves the use of an NBNS (NetBIOS Name Service)
server. (Microsoft called their NBNS implementation WINS, for Windows Internet Name
Service, and that acronym is more commonly used today.) The NBNS works something
like the wall of an old fashioned telephone booth. (Remember those?) Machines can
leave their name and number (IP address) for others to see.
It works like this: The clients send their NetBIOS names & IP addresses to the NBNS
server, which keeps the information in a simple database. When a client wants to talk to
another client, it sends the other client's name to the NBNS server. If the name is on the
list, the NBNS hands back an IP address. You've got the name, look up the number.
Clients on different subnets can all share the same NBNS server so, unlike broadcast, the
point-to-point mechanism is not limited to the local LAN. In many ways the NBNS is
similar to the DNS, but the NBNS name list is almost completely dynamic and there are
few controls to ensure that only authorized clients can register names. Conflicts can, and
do, occur fairly easily.
Finally, there's browsing. This is a whole 'nother kettle of worms, but Samba's nmbd
handles it anyway. This is not the web browsing we know and love, but a browsable list
of services (file and print shares) offered by the computers on a network.
On a LAN, the participating computers hold an election to decide which of them will
become the Local Master Browser (LMB). The "winner" then identifies itself by claiming
a special NetBIOS name (in addition to any other names it may have). The LMB's job is
to keep a list of available services, and it is this list that appears when you click on the
Windows "Network Neighborhood" icon.
In addition to LMBs, there are Domain Master Browsers (DMBs). DMBs coordinate
browse lists across NT Domains, even on routed networks. Using the NBNS, an LMB
will locate its DMB to exchange and combine browse lists. Thus, the browse list is
propagated to all hosts in the NT Domain. Unfortunately, the synchronization times are
spread apart a bit. It can take more than an hour for a change on a remote subnet to
appear in the Network Neighborhood.
Relative to Samba, there are several options for handling username and password
issues in heterogeneous environments. Some of these are:
The Linux Pluggable Authentication Modules (PAM) - Allows you to
authenticate users against a PDC. This means you still have two user lists—one
local and one on the PDC—but your users need only keep track of their
passwords on the Windows system.
Samba as a PDC - Allows you to keep all your logins and passwords on
the Linux system, while all your Windows boxes authenticate with Samba.
When Samba is used with a Lightweight Directory Access Protocol (LDAP)
back-end for this, you will have a scalable and extensible solution.
Roll your own solution using Perl - Allows you to use your own custom script.
For sites with a well-established system for maintaining logins and passwords, it
isn‘t unreasonable to come up with a custom script. This can be done
using WinPerl and Perl modules that allow changes to the Security Access
Manager (SAM) to update the PDC‘s password list. A Perl script on the
Linux side can communicate with the WinPerl script to keep accounts
synchronized.
In the worst-case situation, you can always maintain the username and password
databases of the different platforms by hand (which some early system admins did
indeed have to do!), but this method is error-prone and not much fun to manage.
Encrypted Passwords
Samba Daemons
Precompiled binaries for Samba exist for most Linux distributions. This section will
show how to install Samba via Red Hat Package Manager (RPM) on a Fedora
distribution. To provide the server-side services of Samba, three packages are needed on
Fedora and RedHat Enterprise Linux (RHEL)–type systems. They are
samba*.rpm - This package provides an SMB server that can be used to provide
network services to SMB/CIFS clients.
samba-common*.rpm -This package provides files necessary for both the
server and client packages of Samba—files such as configuration files, log files,
man pages, PAM modules, and other libraries.
samba-client*.rpm - It provides the SMB client utilities that allow access to
SMB shares and printing services on Linux and non-Linux-type systems. The
package is used on Fedora, OpenSuSE, and other RHEL-type systems.
Assuming you have a working connection to the Internet, installing Samba can be as
simple as issuing this command:
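On a Fedora-type system, and using the package names just listed, that command would presumably be:
[root@serverA ~]# yum -y install samba samba-common samba-client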
The essential components of the Samba software on Debian-like distros, such as Ubuntu,
are split into samba*.deb and samba-common*.deb packages. Getting the client and
server components of Samba installed in Ubuntu is as easy as running the following
apt-get command:
yyang@ubuntu-serverA:~$ sudo apt-get -y install samba
As with installing most other services under Ubuntu, the installer will automatically start
the Samba daemons after installation.
Samba comes prepackaged in binary format on most Linux distributions. Since its
inception, Samba has had users across many different UNIX/Linux platforms and so has
been designed to be compatible with the many variants. There is rarely a problem during
the compilation process.
As of this writing, the latest version of Samba was 3.2.0. You should therefore remember
to change all references to the version number (3.2.0) in the following steps to suit the
version you are using.
Begin by downloading the Samba source code from www.samba.org into the directory
where you want to compile it. For this example, we‘ll assume this directory is
/usr/local/src. You can download the latest version directly from
http://us4.samba.org/samba/ftp/samba-latest.tar.gz.
SAMBA ADMINISTRATION
This section describes some typical Samba administrative functions. We‘ll see how to
start and stop Samba, how to do common administrative tasks with SWAT, and how to
use smbclient. Finally, we‘ll examine the process of using encrypted passwords.
Most distributions of Linux have scripts and programs that will start and stop
Samba without your needing to do anything special. They take care of startup at
boot time and stopping at shutdown. On our sample system running Fedora with Samba
installed via RPM, the service command and the chkconfig utility can be used to
manage Samba‘s startup and shutdown.
For example, to start the smbd daemon, you can execute this command:
[root@serverA ~]# service smb start
And to stop the service, type
[root@serverA ~]# service smb stop
After making any configuration changes to Samba, you can restart it with this command
to make the changes go into effect:
[root@serverA ~]# service smb restart
The smb service on Fedora will not automatically start up with the next system reboot.
You can configure it to start up automatically using the chkconfig utility, like so:
[root@serverA ~]# chkconfig smb on
Starting the Samba that we installed from source earlier can be done from the command
line with this command:
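Assuming the default source-install prefix of /usr/local/samba (an assumption, since the exact path is not shown here), the two daemons can be started directly with their -D (run as a daemon) option:
[root@serverA ~]# /usr/local/samba/sbin/smbd -D
[root@serverA ~]# /usr/local/samba/sbin/nmbd -D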
USING SWAT
As mentioned, SWAT is the Samba Web Administration Tool, with which you can
manage Samba through a browser interface. It‘s an excellent alternative to editing the
Samba configuration files (smb.conf and the like) by hand.
Prior to version 2.0 of Samba, the official way to configure it was by editing the
smb.conf file. Though verbose in nature and easy to understand, this file was rather
cumbersome to deal with because of its numerous options and directives. Having
to edit text files by hand also meant that setting up shares under Microsoft Windows was
still easier than setting up shares with Samba. Some individuals developed graphical
front-ends to the editing process. Many of these tools are still being maintained and
enhanced. As of version 2.0, however, the source for Samba ships with SWAT.
The SWAT software is packaged separately on Fedora and RHEL systems. The binary
RPM that provides SWAT is named samba-swat. In this section, we‘ll install the RPM
for SWAT using the Yum program.
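Using Yum, the installation presumably boils down to:
[root@serverA ~]# yum -y install samba-swat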
Setting Up SWAT
What makes SWAT a little different from other browser-based administration tools
is that it does not rely on a separate web server (like Apache). Instead, SWAT performs
all the needed web server functions without implementing a full web server.
Setting up SWAT is pretty straightforward. Here are the steps:
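In outline, on a stock Fedora system, the steps would presumably be the following (SWAT conventionally runs from the xinetd super-daemon on port 901; treat the details as a sketch):
1. Install the samba-swat package (the yum command shown earlier).
2. Edit /etc/xinetd.d/swat and change the line disable = yes to disable = no.
3. Restart xinetd: [root@serverA ~]# service xinetd restart
4. Point a web browser on the server at http://localhost:901/.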
Upon entering this URL, you will be prompted for a username and password with which
to log into SWAT. Type root as the username and type root‘s password. Upon
successfully logging in, you will be presented with a web page similar to the one
in Figure 7.1.
And that is pretty much all there is to installing and enabling SWAT on a Fedora
system.
When you connect to SWAT and log in as root, you‘ll see the main menu shown
in Figure 7.1. From here, you can find almost all the documentation you‘ll need
for Samba‘s configuration files, daemons, and related programs. None of the links point
to external web sites, so you can read them at your leisure without connecting to the
Net. At the top of SWAT‘s main page are buttons for the following menu choices:
Home: The main menu page
Globals: Configuration options that affect all operational aspects of Samba
Shares: For setting up disk shares and their respective options
Printers: For setting up printers
Wizard: This will initiate a Samba configuration wizard that will walk you through setting up
the Samba server
Status: The status of the smbd and nmbd processes, including a list of all clients connected to
these processes and what they are doing (the same information that's listed in the smbstatus
command-line program)
View: The resulting smb.conf file
Password: Password settings
Globals
The Globals page lists the settings that affect all aspects of Samba‘s operation. These
settings are divided into five groups: base, security, logging, browse, and WINS. To the
left of each option is a link to the relevant documentation for the setting and its values.
Shares
In Microsoft Windows, setting up a share can be as simple as selecting a folder (or
creating a new one), right-clicking it, and allowing it to be shared. Additional controls
can be established by right-clicking the folder and selecting Properties. Using SWAT,
these same actions are accomplished by creating a new share. You can then select the
share and click Choose Share. This brings up all the configurable parameters for the
share.
Printers
The Printers page for SWAT lets you configure Samba-related settings for printers that are
currently available on the system. Through a series of menus, you can add printer shares,
delete them, modify them, etc. The one thing you cannot do here is add printers to the
main system—you must do that by some other means.
Status
The Status page shows the current status of the smbd and nmbd daemons. This
information includes what clients are connected and their actions. The page
automatically updates every 30 seconds by default, but you can change this rate if
you like (it‘s an option on the page itself). Along with status information, you can turn
Samba on and off or ask it to reload its configuration file. This is necessary if you make
any changes to the configuration.
View
As you change your Samba configuration, SWAT keeps track of the changes and figures
out what information it needs to put into the smb.conf file. Open the View page, and you
can see the file SWAT is putting together for you.
Password
Use the Password page if you intend to support encrypted passwords. You‘ll want
to give your users a way to modify their own passwords without having to log
into the Linux server. This page allows users to do just that.
CREATING A SHARE
We will walk through the process of creating a share under the /tmp directory to
be shared on the Samba server. We‘ll first create the directory to be shared and
then edit Samba‘s configuration file (/etc/samba/smb.conf) to create an entry for the
share.
Of course, this can be done easily using SWAT‘s web interface, which was installed
earlier, but we will not use SWAT here. SWAT is easy and intuitive to use. But it is
probably useful to understand how to configure Samba in its rawest form, and this will
also make it easier to understand what SWAT does in its back-end so that you
can tweak things to your liking. Besides, one never knows when one might be
stranded in the Amazon jungle without any nice graphical user interface (GUI)
configuration tools available. So let‘s get on with it:
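Reconstructing the entry from the testparm output shown in step 6, the smb.conf addition being discussed would read roughly as follows, one directive per line; line 5 (the writable directive) is an assumption consistent with the description below:
[samba-share]
comment = This folder contains shared documents
path = /tmp/testshare
guest ok = yes
writable = no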
Line 1 is the name of the share (or "service" in Samba parlance). This is the name
that SMB clients will see when they try to browse the shares stored on the Samba
server.
Line 2 is just a descriptive/comment text that users will see next to a share when
browsing.
Line 3 is important. It specifies the location on the file system that stores the
actual content to be shared.
Line 4 specifies that no password is required to access the share (this is called
"connecting to the service" in Samba-speak). The privileges on the share
will be translated to the permissions of the guest account. If the value were set to
"no" instead, the share would not be accessible by the general public, but only by
authenticated and permitted users.
Line 5, with the value of the directive set to "no," means that users of this service
may not create or modify the files stored therein.
5. Save your changes to the /etc/samba/smb.conf file, and exit the editor. You should
note that we have accepted all the other default values in the file. You may want
to go back and personalize some of the settings to suit your environment.
One setting you may want to change quickly is the workgroup directive, which defines
the workgroup. This controls what workgroup your server will appear to be in when
queried by clients or when viewed in the Windows Network Neighborhood.
Also note that the default configuration may contain other share definitions. You should
comment (or delete) those entries if it is not your intention to have them.
6. Use the testparm utility to check the smb.conf file for internal correctness (i.e.,
absence of syntax errors). Type
[root@serverA ~]# testparm -s | less
...<OUTPUT TRUNCATED>...
[samba-share]
comment = This folder contains shared documents
path = /tmp/testshare
guest ok = Yes
Study the output for any serious errors, and try to fix them by going back to correct them
in the smb.conf file.
Note that because you piped the output of testparm to the less command, you may have
to press q on your keyboard to quit the command.
7. Now restart (or start) Samba to make the software acknowledge your changes. Type
[root@serverA ~]# service smb restart
We are done creating our test share. In the next section, we will attempt to
access the share.
Using smbclient
The smbclient program is a command-line tool that allows your Linux-based system to
act as a Windows client. You can use this utility to connect to other Samba servers or
even to actual Microsoft Windows servers. smbclient is a flexible program and can be
used to browse other servers, send and retrieve files from them, or even print to them. As
you can imagine, this is also a great debugging tool, since you can quickly and easily
check whether a new Samba installation works correctly without having to find a
Windows client to test it.
In this section, we‘ll show you how to do basic browsing, remote file access, and
remote printer access with smbclient. However, remember that smbclient is a flexible
program, limited only by your imagination.
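For example, listing the shares that a server offers can be done like this (serverA and yyang are the sample names used in this chapter; the exact invocation is an illustration):
[root@serverA ~]# smbclient -L //serverA -U yyang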
When configured to do so, Samba will honor requests from users that are stored in user
databases that are, in turn, stored in various back-ends—e.g., a trivial database (tdbsam),
LDAP (ldapsam), XML (xmlsam), or MySQL (mysqlsam).
Here, we will add a sample user that already exists in the local /etc/passwd file
to Samba‘s user database. We will accept and use Samba‘s native/default user
database back-end (tdbsam) for demonstration purposes, as the other possibilities are
beyond the scope of this chapter.
Let‘s create a Samba entry for the user yyang. We will also set the user‘s Samba
password. Use the smbpasswd command to create a Samba entry for the user yyang.
Choose a good password when prompted to do so. Type
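The standard form of that command, with the -a option to add the user, is presumably:
[root@serverA ~]# smbpasswd -a yyang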
The new user will be created in Samba‘s default user database, tdbsam. With a Samba
user now created, you can make the shares available to only authenticated users, such as
the one we just created for the user yyang.
If the user yyang now wants to access a resource on the Samba server that has been
configured strictly for her use (a protected share or nonpublic share), the user can use the
smbclient command shown here; for example,
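For instance (the share name protected-share and the client host prompt are placeholders, not values from the original text):
[yyang@clientB ~]$ smbclient //serverA/protected-share -U yyang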
It is, of course, also possible to access a protected Samba share from a native Microsoft
Windows box. One only needs to supply the proper Samba username and corresponding
password when prompted on the Microsoft Windows system.
If you need to allow users to have no passwords (which is a bad idea, by the way, but for
which there might be legitimate reasons), you can do so by using the smbpasswd
program with the -n option, like so:
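For the user yyang, that would presumably be:
[root@serverA ~]# smbpasswd -n yyang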
Users who prefer the command line over the web interface can use the smbpasswd
command to change their Samba passwords. This program works just like the regular
passwd program, except this program does not update the /etc/passwd file by default.
Because smbpasswd uses the standard protocol for communicating with the server
regarding password changes, you can also use this to change your password on a remote
Windows machine.
For example, to change the user yyang‘s Samba password, issue this command:
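Run as the user herself, smbpasswd without arguments prompts for the old and new passwords, just like passwd (the interactive prompts are omitted here):
[yyang@serverA ~]$ smbpasswd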
Samba can be configured to allow regular users to run the smbpasswd command
themselves to manage their own passwords; the only caveat is that they must know their
previous/old password.
Thus far, we‘ve been talking about using Samba in the Samba/Linux world. Or, to put it
literarily, we‘ve been using Samba in its native environment, where it is lord and master
of its domain (no pun intended). What this means is that our Samba server, in
combination with the Linux-based server, has been responsible for managing all user
authentication and authorization issues.
The simple Samba setup that we created earlier in the chapter had its own user database,
which mapped the Samba users to real Linux/UNIX users. This allowed any files and
directories created by Samba users to have the proper ownership contexts. But what if we
wanted to deploy a Samba server in an environment with existing Windows servers that
are being used to manage all users in the domain? And we don‘t want to have to manage
a separate user database in Samba? Enter ...the winbindd daemon.
The winbindd daemon is used for resolving user accounts (users and groups) information
from native Windows servers. It can also be used to resolve other kinds of system
information. It is able to do this through its use of pam_winbind (a PAM module that
interacts with the winbindd daemon to help authenticate users using Windows NTLM
authentication), the ntlm_auth tool (a tool used to allow external access to winbind‘s
NTLM authentication function), and libnss_winbind (winbind‘s Name Service Switch
library) facility.
The steps to set up a Linux machine to consult a Windows server for its user
authentication issues are straightforward. They can be summarized in this way:
1. Configure Samba‘s configuration file (smb.conf) with the proper directives.
2. Add winbind to the Linux system‘s name service switch facility (/etc/nsswitch.conf).
3. Join the Linux/Samba server to the Windows domain.
4. Test things out.
Here we present a sample scenario where a Linux server named serverA wishes to use a
Windows server for its user authentication issues. The Samba server is going to act as a
Windows domain member server. The Windows server we assume here is running the
Windows 200x Server operating system, and it is a domain controller (as well as the
WINS server). Its IP address is 192.168.1.100. The domain controller is operating in
mixed mode. (Mixed mode operation provides backward compatibility with Windows
NT–type domains, as well as Windows 200x–type domains.) The Windows domain name
is "WINDOWS-DOMAIN." We have commented out any share definitions in our Samba
configuration, so you‘ll have to create or specify your own (see the earlier parts of the
chapter for how to do this). Let‘s break down the process in better detail:
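Steps 1 and 2 themselves are not spelled out here; under the assumptions of this scenario, they would look roughly like the following sketch. In smb.conf, the [global] section would contain directives such as:
[global]
    workgroup = WINDOWS-DOMAIN
    security = domain
    password server = 192.168.1.100
    # The idmap ranges map Windows SIDs to local UIDs/GIDs; the ranges are arbitrary choices.
    idmap uid = 10000-20000
    idmap gid = 10000-20000
And in /etc/nsswitch.conf, winbind is appended to the passwd and group lines:
passwd: files winbind
group: files winbind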
3. On Fedora, RHEL, and Centos distributions, start the winbindd daemon using the
service command. Type
[root@serverA ~]# service winbind start
Starting Winbind services: [ OK ]
4. Join the Samba server to the Windows domain using the net command. Assuming the
Windows Administrator account password, type
[root@serverA ~]# net rpc join -U root%windows_administrator_password
Joined domain WINDOWS-DOMAIN
where the password for the account in the Microsoft Windows domain with permission to
join systems to the domain is windows_administrator_password.
5. Use the wbinfo utility to list all users available in the Windows domain to make sure
that things are working properly. Type
[root@serverA ~]# wbinfo -u
TROUBLESHOOTING SAMBA
The following are a few typical solutions to simple problems one might encounter with
Samba:
Restart Samba: This may be necessary because either Samba has entered an
undefined state or (more likely) you've made major changes to the configuration
but forgot to reload Samba so that the changes take effect.
Make sure the configuration options are correct: Errors in the smb.conf
file are typically in directory names, usernames, network numbers, and
hostnames. A common mistake is when a new client is added to a group that has
special access to the server, but Samba isn't told the name of the new client being
added. Don't forget that for syntax-type errors, the testparm utility is your ally.
Monitor encrypted passwords: These may be mismatched—the server is
configured to use them and the clients aren't, or (more likely) the clients are using
encrypted passwords and Samba hasn't been configured to use them. If you're
under the gun to get a client working, you may just want to disable client-side
encryption using the regedit scripts that come with Samba's source code (see the
docs subdirectory).
Setting Up And Configuring A Linux Mail Server
Setting up a Linux mail server with SMTP (Simple Mail Transfer Protocol) is essential if
you want to use email, so we're going to look at how we can install and configure a mail
server along with some other email-related protocols, like the Post Office Protocol (POP3)
and the Internet Message Access Protocol (IMAP).
SMTP (Simple Mail Transfer Protocol) is used for transmitting electronic mail. It's
platform-independent, so long as the server can send ASCII text and can connect to port 25
(the standard SMTP port).
Sendmail and Postfix are two of the most common SMTP implementations and are usually
included in most Linux distributions.
Sendmail is a free and popular mail server, but it‘s not all that secure and doesn‘t seem to
have been designed for ease of use, which is to say that it‘s a bit tricky to get to grips
with. Postfix is better in both these regards, however.
In order to configure a Linux mail server, you‘ll first need to check if Postfix is already
installed. It‘s the default mail server on the lion‘s share of Linux distributions these days,
which is good because server admins like it a lot.
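One quick way to check on an RPM-based system (an illustrative check using standard tools; it is not part of the original text):
$ rpm -q postfix
$ postconf mail_version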
For distributions based on Debian, like Ubuntu, you'd install it like this:
$ apt-get -y install postfix
As you configure Linux mail server you will receive a prompt to choose how you want to
configure your Postfix mail server.
You‘ll be presented with these choices:
No configuration
Internet site
Internet with smarthost
Satellite system
Local only
Let‘s go with the No configuration option for our Linux email server.
After installing the Postfix mail server, you will need to set it up, and most of the files
you‘ll need for this can be found inside the /etc/postfix/ directory.
You can find the main configuration for Postfix Linux mail server in the
/etc/postfix/main.cf file.
This file contains numerous options like:
myhostname
Use this one to specify the hostname of the mail server, which is where postfix will
obtain its emails. The hostnames will look something like mail.mydomain.com,
smtp.mydomain.com.
You incorporate the hostname this way:
myhostname = mail.mydomain.com
mydomain
This option is the mail domain that you will be servicing, like mydomain.com.
The syntax looks like this:
mydomain = mydomain.com
myorigin
All emails sent from this mail server will look as though they came from the domain that you
specify in this option. You can set it to $mydomain:
myorigin = $mydomain
To reference the value of another parameter, put a dollar sign in front of its name, as in
$mydomain.
mydestination
This option lists the domains for which your Postfix server will accept and locally deliver
incoming email on your Linux email server. You can assign values like this:
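A typical value, mirroring the Postfix defaults (treat this exact list as an illustration):
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain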
mynetworks
This option lets you define which servers can relay through your Postfix server.
It should only contain local addresses, like local mail scripts on your server.
If this isn't the case, then spammers can piggyback on your Linux mail server. That
means your lovely shiny server will be doing the heavy lifting for some bad guys, and it
will also end up getting banned.
Here’s the syntax for this option:
mynetworks = 127.0.0.0/8, 192.168.1.0/24
smtpd_banner
This one determines what message is sent after the client connects successfully.
Consider changing the banner so it doesn‘t give away any potentially compromising
information about your server.
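For instance, a terse banner that reveals only the hostname and protocol might look like this (the exact text is an arbitrary choice):
smtpd_banner = $myhostname ESMTP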
inet_protocols
This option designates which IP protocol version is used for server connections.
inet_protocols = ipv4
When you change any of the files used to configure the Postfix Linux mail server, you must
reload the service with this command:
$ systemctl reload postfix
Of course, we all get distracted and typing things in can often result in mistakes, but you
can track down any misspellings that might compromise your Linux mail server using
this command:
$ postfix check
Things like network failure (and many other reasons) can mean that the mail queue on
your Linux email server can end up getting full, but you can check the Postfix mail queue
with this command:
$ mailq
If that reveals that it's full, then you can flush the queue using this command:
$ postfix flush
Look at it again and you should see that your Linux email server queue is clear.
The first thing to do is test with a local mail user agent such as mailx, or mail, which is a
symlink to mailx.
Send your first test to someone on the Linux mail server and if that works then send the
next one to somewhere external.
$ echo "This is the body of the message" | mailx -s "Here we have a Subject" -r
"sender <example@mydomain.com>" -a /path/to/attachment
someone@mydomain.com
Then check if your Linux email server can pick up external mail.
If you run into any snags, have a peek at the logs. The Red Hat log file can be found in
/var/log/maillog and for Debian versions in /var/log/mail.log, or wherever else the
rsyslogd configuration specifies.
I would suggest you review the Linux syslog server for an in-depth clarification on logs
and how to set up rsyslogd.
If you run into any more difficulties, take a look at your DNS settings and use Linux
network commands to check your MX records.
Fight Spam with SpamAssassin
Nobody likes spam, and SpamAssassin is probably the best free, open source spam
fighting ninja that you could hope to have in your corner.
Installing it is as simple as doing this:
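On a Red Hat-type system (matching the /etc/mail/spamassassin path used below), that is presumably:
# yum -y install spamassassin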
Once you‘ve done that, you can see how it‘s configured in the
/etc/mail/spamassassin/local.cf file.
SpamAssassin runs a number of scripts to test how spammy an email is. The higher the
score that the scripts deliver, the more chances there are that it‘s spam.
In the configuration file, if the parameter required_hits is 6, this tells you that
SpamAssassin will consider an email to be spam if it scores 6 or more.
The report_safe parameter takes values of 0, 1, or 2. A 0 tells you that email marked
as spam is sent without modification, and only the headers will label it as spam.
A 1 or a 2 means that a new report message will be created by SpamAssassin and
delivered to the recipient.
A value of 1 indicates that the spam message is encoded as message/rfc822 content, and
a 2 means that the message has been encoded as text/plain content.
Text/plain is less dangerous because some mail clients execute message/rfc822 content,
which is not good if it contains any kind of malware.
The next thing to do is integrate it into Postfix, and the easiest way to do that is with
procmail. We'll make a file called /etc/procmailrc, and add this to it:
:0 hbfw
| /usr/bin/spamc
Then we‘ll edit the Postfix configuration file /etc/postfix/main.cf and alter the
mailbox_command, thus:
mailbox_command = /usr/bin/procmail
Unfortunately, SpamAssassin can‘t catch everything, and spam messages can still sneak
through to fill up the mailboxes on your Linux email server.
But never fear because you can filter messages before they even get to the Postfix server
with Realtime Blackhole Lists (RBLs).
Open the Postfix server configuration at /etc/postfix/main.cf and change
smtpd_recipient_restrictions option by adding the following options like this:
strict_rfc821_envelopes = yes
relay_domains_reject_code = 554
unknown_address_reject_code = 554
unknown_client_reject_code = 554
unknown_hostname_reject_code = 554
unknown_local_recipient_reject_code = 554
unknown_relay_recipient_reject_code = 554
unverified_recipient_reject_code = 554
smtpd_recipient_restrictions =
reject_invalid_hostname,
reject_unknown_recipient_domain,
reject_unauth_pipelining,
permit_mynetworks,
permit_sasl_authenticated,
reject_unauth_destination,
reject_rbl_client dsn.rfc-ignorant.org,
reject_rbl_client dul.dnsbl.sorbs.net,
reject_rbl_client list.dsbl.org,
reject_rbl_client sbl-xbl.spamhaus.org,
reject_rbl_client bl.spamcop.net,
reject_rbl_client dnsbl.sorbs.net,
permit
Now, restart your postfix Linux mail server:
$ systemctl restart postfix
The above RBLs are the most common ones found, but there are plenty more on the web
for you to track down and try.
POP3 and IMAP Protocol Basics
We now know how an SMTP Linux mail server sends and receives emails, but what about
other user needs, like when users want local copies of their emails to view offline?
Locally delivered mail is stored in the mbox file format, which is used by many mail user
agents such as mailx and mutt. Due to security concerns, however, some mail servers restrict
remote access to the shared mail spool directories. Another class of protocols—called mail
access protocols—was introduced to deal with such situations.
The commonest ones are POP and IMAP – Post Office Protocol and Internet Message
Access Protocol. POP‘s underlying methodology is very simple: a central Linux mail
server is online 24/7 for reception and storage of all user emails.
When an email is sent, the email client relays it through the central Linux mail server
using SMTP. Be aware that the SMTP server and POP server can easily be on the same
system, and that this is a common thing to do.
IMAP was developed because POP didn't let you keep a master copy of a user's email on
the server.
With IMAP, your Linux email server supports three kinds of access:
online mode is like having direct access to the Linux email server file system.
offline mode feels like POP, where the client only connects to the network to get
their mail, and the server won‘t keep a copy.
disconnected mode lets users keep cached copies of their emails and the server
keeps one too.
There are a few different implementations for IMAP and POP, with the most prevalent
being dovecot server, which offers both.
POP3, POP3S, IMAP, and IMAPS listen on ports 110, 995, 143, and 993 respectively.
Dovecot Installation
Dovecot comes prepackaged on the majority of Linux distributions, and there's no problem
installing it on Red Hat too:
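On Red Hat, that is presumably:
# yum -y install dovecot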
For Debian, a pair of packages provide the IMAP and POP3 functionality. Here‘s how to
install them:
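The pair of packages referred to are presumably dovecot-imapd and dovecot-pop3d, installed like this:
$ sudo apt-get -y install dovecot-imapd dovecot-pop3d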
You will be prompted to create self-signed certificates for using IMAP and POP3 over
SSL/TLS. Select yes and type in the hostname of your system when asked to do so.
Then you can start the service and enable it at start-up like this:
$ systemctl start dovecot
$ systemctl enable dovecot
Configure Dovecot
The main configuration file for Dovecot is /etc/dovecot/dovecot.conf file.
Some varieties of Linux keep the configuration in the /etc/dovecot/conf.d/ directory and
use an include directive in dovecot.conf to pull in the settings from those files.
Here are a few of the parameters used to configure dovecot:
protocols: the ones you want to support.
protocols = imap pop3 lmtp
lmtp stands for local mail transfer protocol.
listen: IP addresses to listen on.
listen = *, ::
The asterisk means all ipv4 interfaces and :: means all ipv6 interfaces
userdb: the user database used to look up user account information.
userdb { driver = passwd }
passdb: the password database used to authenticate users.
passdb { driver = pam }
mail_location: this entry is in the /etc/dovecot/conf.d/10-mail.conf file, and it‘s written
like this:
mail_location = mbox:~/mail:INBOX=/var/mail/%u
Secure Dovecot
Dovecot ships with generic SSL certificate and key files, which are referenced in
/etc/dovecot/conf.d/10-ssl.conf:
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
If you try to connect to a dovecot server whose certificates haven't been signed, you'll
get a warning; if needed, you can buy a signed certificate from a certificate authority, so
no worries there.
Alternatively, you can point to them using Let‘s Encrypt certificates:
ssl_cert = </etc/letsencrypt/live/yourdomain.com/fullchain.pem
ssl_key = </etc/letsencrypt/live/yourdomain.com/privkey.pem
You‘ll need to open dovecot server ports in your iptables firewall by adding iptables rules
for ports 110, 995, 143, 993, 25.
Do that and save the rules.
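One compact way to do that is with the standard multiport match (an illustrative rule, not part of the original text):
# iptables -A INPUT -p tcp -m multiport --dports 25,110,143,993,995 -j ACCEPT
# service iptables save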
Or, if you are using firewalld instead, do this:
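A sketch of the firewalld equivalent (firewall-cmd accepts repeated --add-port options):
# firewall-cmd --permanent --add-port=25/tcp --add-port=110/tcp --add-port=143/tcp --add-port=993/tcp --add-port=995/tcp
# firewall-cmd --reload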
References:
Using Samba, A File & Print Server for Linux, Unix & Mac OS X, Gerald
Carter, Jay Ts, Robert Eckstein, ISBN-10:978-0-596-00769-0
Linux System Administration Recipes 1st Edition, by Kemp Juliet, Publisher: Springer-
Verlag Berlin and Heidelberg GmbH & Co. KG
Linux: The Complete Reference, Sixth Edition, by Richard Petersen, Tata McGraw Hill
Company Limited.
www.samba.org
www.redhat.com
www.web.mit.edu
wiki.dovecot.org
www.plesk.com
UNIT -8
What is a Firewall?
A firewall is a security device that monitors network traffic. It protects the internal
network by filtering incoming and outgoing traffic based on a set of established rules.
Setting up a firewall is the simplest way of adding a security layer between a system and
malicious attacks.
How Does a Firewall Work?
Circuit-Level Gateways
Circuit-level gateways are a type of firewall that work at the session layer of the OSI
model, observing TCP (Transmission Control Protocol) connections and sessions. Their
primary function is to ensure the established connections are safe.
In most cases, circuit-level firewalls are built into some type of software or an already
existing firewall.
Like packet-filtering firewalls, they don't inspect the actual data but rather the
information about the transaction. Additionally, circuit-level gateways are practical,
simple to set up, and don‘t require a separate proxy server.
Stateful Inspection Firewalls
A stateful inspection firewall keeps track of the state of a connection by monitoring the
TCP 3-way handshake. This allows it to keep track of the entire connection – from start
to end – permitting only expected return traffic inbound.
When starting a connection and requesting data, the stateful inspection builds a database
(state table) and stores the connection information. In the state table, it notes the source
IP, source port, destination IP, and destination port for each connection. Using the
stateful inspection method, it dynamically creates firewall rules to allow anticipated
traffic.
This type of firewall is used as additional security. It enforces more checks and is safer
compared to stateless filters. However, unlike stateless/packet filtering, stateful firewalls
inspect the actual data transmitted across multiple packets instead of just the headers.
Because of this, they also require more system resources.
Proxy Firewalls:
A proxy firewall serves as an intermediate device between internal and external systems
communicating over the Internet. It protects a network by forwarding requests from the
original client and masking it as its own. Proxy means to serve as a substitute and,
accordingly, that is the role it plays. It substitutes for the client that is sending the request.
When a client sends a request to access a web page, the message is intercepted by the
proxy server. The proxy forwards the message to the web server, pretending to be the
client. Doing so hides the client‘s identification and geolocation, protecting it from any
restrictions and potential attacks. The web server then responds and gives the proxy the
requested information, which is passed on to the client.
Next-Generation Firewalls
The next-generation firewall is a security device that combines a number of functions of
other firewalls. It incorporates packet, stateful, and deep packet inspection. Simply put,
NGFW checks the actual payload of the packet instead of focusing solely on header
information.
Unlike traditional firewalls, the next-gen firewall inspects the entire transaction of data,
including the TCP handshakes, surface-level, and deep packet inspection.
Using NGFW is adequate protection from malware attacks, external threats, and
intrusion. These devices are quite flexible, and there is no clear-cut definition of the
functionalities they offer. Therefore, make sure to explore what each specific option
provides.
Cloud Firewalls:
A cloud firewall, or firewall-as-a-service (FWaaS), is a cloud solution for network protection.
Like other cloud solutions, it is maintained and run on the Internet by third-party vendors.
Clients often utilize cloud firewalls as proxy servers, but the configuration can vary
according to the demand. Their main advantage is scalability. They are independent of
physical resources, which allows scaling the firewall capacity according to the traffic
load.
Businesses use this solution to protect an internal network or other cloud infrastructures
(IaaS/PaaS).
CLOUD FIREWALLS
Advantages:
– Availability.
– Scalability that offers increased bandwidth and new site protection.
– No hardware required.
– Cost-efficient in terms of managing and maintaining equipment.
Disadvantages:
– A wide range of prices depending on the services offered.
– The risk of losing control over security assets.
– Possible compatibility difficulties if migrating to a new cloud provider.
Protection Level:
– Provide good protection in terms of high availability and having a professional staff taking
care of the setup.
Who is it for:
– A solution suitable for larger businesses that do not have an in-house security staff to
maintain and manage the on-site security devices.
Once started, the toolbar provides buttons to allow the firewall to be enabled/disabled.
You can also configure basic trusted services, such as SSH, FTP and HTTP, by putting a
tick in the appropriate checkbox and clicking the "Apply" button on the toolbar.
Fig. 8.4 Firewall configuration in Linux
The "Other Ports" section allows you to open ports that are not covered in the "Trusted
Services" section.
Fig. 8.5 Other ports
Setting Up a Firewall with iptables:
Most installations will include the firewall functionality. If you need to manually install
it, the following commands will install the IPv4 and IPv6 firewall functionality. In this
section we will only consider the IPv4 settings.
# yum install iptables
# yum install iptables-ipv6
System-config-firewall-tui:
The TUI utility is similar to the GUI utility shown above, but it feels incredibly clumsy in
comparison. If it is not already present, it can be installed using the following command.
# yum install system-config-firewall-tui
Running the system-config-firewall-tui command from the command line produces the
top-level screen, allowing you to enable/disable the firewall. Use the space bar to toggle
the setting, the tab key to navigate between buttons and the return key to click them.
Fig. 8.7 Firewall system configuration
To alter the Trusted Services, tab to the "Customize" button and press the return key.
Amend the list using the arrow and space keys.
Iptables:
In addition to the GUI and TUI interfaces, the firewall rules can be amended directly
using the iptables command.
Fig. 8.9 The iptables routing
The firewall consists of chains of rules that determine what action should be taken for
packets processed by the system. By default, there are three chains defined:
INPUT : Used to check all packets coming into the system.
OUTPUT : Used to check all packets leaving the system.
FORWARD : Used to check all packets being routed by the system. Unless you are
using your server as a router, this chain is unnecessary.
Each chain can contain multiple explicit rules that are checked in order. If a rule matches,
the associated action (ACCEPT and DROP being the most common) is taken. If no
specific rule is found, the default policy is used to determine the action to take.
Since the default policy is a catch-all, one of two basic methods can be chosen for each
chain.
Set the default policy to ACCEPT and explicitly DROP things you don't want.
Set the default policy to DROP and explicitly ACCEPT things you do want.
The safest option is to set the default policy to DROP for the INPUT and FORWARD
chains, so it is perhaps a little surprising that the GUI and TUI tools set the default
policies to ACCEPT, then use an explicit REJECT as the last rule in these chains.
Fig. 8.10 The iptables options
The default policy for a chain is set using the "-P" flag. In the following example,
assuming no specific rules were present, all communication to and from the server would
be prevented.
# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT DROP
Warning: If you are administering the firewall via SSH, having a default INPUT policy
of DROP will cut your session off if you get rid of the explicit rules that accept SSH
access. As a result, it makes sense to start any administration by setting the default
policies to ACCEPT and only switch them back to DROP once the chains have been built
to your satisfaction. The following example temporarily sets the default policies to
ACCEPT.
# iptables -P INPUT ACCEPT
# iptables -P FORWARD ACCEPT
# iptables -P OUTPUT ACCEPT
The next thing we want to do is flush any existing rules, leaving just the default policies.
This is done using the "-F" flag.
# iptables -F
Now we need to define specific rules for the type of access we want the server to have.
Focusing on the INPUT chain, we can grant access to packets in a number of ways.
# Accept packets from specific interfaces.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i eth0 -j ACCEPT
Rule and policy definitions take effect immediately. To make sure they persist beyond a
reboot, the current configuration must be saved to the "/etc/sysconfig/iptables" file using
the following command. If you are using Fedora, you may need to use a different
command, as shown below.
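The save command itself is not reproduced in the text; on RHEL/CentOS it is the same
save command used later in this section:
# service iptables save
On Fedora, a hedged alternative is to write the iptables-save output directly to the file
(an assumption based on the generic iptables tools, not a command from the original text):
# iptables-save > /etc/sysconfig/iptables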
As you can imagine, even in a simple configuration this process can get a bit long-
winded, so it makes sense to combine all the elements of the firewall definition into a
single file so it can be amended and run repeatedly. Create a file called "/root/firewall.sh"
with the following contents. Think of this as your starting point for each server.
Fig. 8.12 The firewall.sh file
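Figure 8.12 is not reproduced here, so the following is a minimal sketch of what such a
/root/firewall.sh file might contain, assembled from the commands discussed in this
section (the SSH rule and the ESTABLISHED,RELATED rule are assumptions for a usable
starting point, not contents taken from the original figure):
#!/bin/bash
# Temporarily open the default policies so an SSH session is not cut off.
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
# Flush any existing rules, leaving just the default policies.
iptables -F
# Accept packets from the loopback interface and established sessions.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Accept new SSH connections so remote administration keeps working.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Switch the default policies back to DROP now the chains are built.
iptables -P INPUT DROP
iptables -P FORWARD DROP
# Persist the configuration to /etc/sysconfig/iptables.
service iptables save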
The iptables command also allows you to insert (-I), delete (-D) and replace (-R) rules,
but if you work using a file as described above, you never need to use these variations.
If you are using the server as an Oracle database server, you will probably want to make
sure the SSH and Oracle listener ports are accessible. You could lock these down to
specific source IP addresses, but for a quick setup, you could just do the following, where
"1521" is the port used for the listener.
# service iptables start
# chkconfig iptables on
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -A INPUT -p tcp --dport 1521 -j ACCEPT
# service iptables save
# service iptables status
Packets from A to B will pass the router in an apparently transparent LAN. Considering
there is no link with the Internet, and all clients are 'trusted' desktop PCs, we barely
need the firewall functionality in this topology. The router configuration is very
simple.
First, we set the default action for everything that is forwarded by the router to DROP.
This disables all traffic between the two networks that is not explicitly allowed.
Second, we ACCEPT traffic from Network A to Network B if the destination port is port
80 (HTTP) and the protocol is TCP. We allow all NEW and ESTABLISHED connections.
Third, we ACCEPT the responses, which of course start at Network B and travel to the
client in Network A. This is an already established connection, and we know the source
port (the socket which the server uses to respond to the client) is port 80 (HTTP).
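The commands for this first example are not reproduced in the text; the following is a
hedged sketch of the three steps just described, assuming Network A is 192.168.1.0/24
and Network B is 192.168.2.0/24 (the addressing used in the example that follows):
# iptables --policy FORWARD DROP
# iptables --append FORWARD --source 192.168.1.0/24 --destination 192.168.2.0/24 \
  --protocol tcp --destination-port 80 --match state --state NEW,ESTABLISHED --jump ACCEPT
# iptables --append FORWARD --source 192.168.2.0/24 --destination 192.168.1.0/24 \
  --protocol tcp --source-port 80 --match state --state ESTABLISHED --jump ACCEPT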
Here's another example, to show you how clients in Network A can also use the mail,
web, and POP/IMAP servers in Network B.
# iptables --policy FORWARD DROP
# iptables --append FORWARD --source 192.168.1.0/24 --destination 192.168.2.0/24 \
  --match state --state NEW,ESTABLISHED --protocol tcp \
  --match multiport --destination-ports 25,80,110,143 -j ACCEPT
# iptables --append FORWARD --source 192.168.2.0/24 --destination 192.168.1.0/24 \
  --match state --state ESTABLISHED --protocol tcp \
  --match multiport --source-ports 25,80,110,143 -j ACCEPT
You can of course extend these commands to match your requirements. Notice the above
examples only limit the network traffic between the two networks, and not traffic to, or
from, the firewall itself.
The above network topology requires the router to use one public IP address for packets
from the SOHO Network to the Internet. Also, the router should accept inbound packets
that are related to connections initiated from the SOHO Network (responses etc.). Notice
that there is no modem and no provider supplied router between our router and the
Internet. For the very basic router setup using iptables, you would use:
# iptables --policy INPUT DROP
# iptables --policy FORWARD DROP
# iptables --policy OUTPUT DROP
# iptables --append INPUT --in-interface eth1 --source 192.168.1.0/24 \
  --match state --state NEW,ESTABLISHED --jump ACCEPT
# iptables --append OUTPUT --out-interface eth1 --destination 192.168.1.0/24 \
  --match state --state NEW,ESTABLISHED --jump ACCEPT
# iptables --append FORWARD --in-interface eth1 --source 192.168.1.0/24 \
  --destination 0.0.0.0/0 --match state --state NEW,ESTABLISHED --jump ACCEPT
# iptables --append FORWARD --in-interface eth0 --destination 192.168.1.0/24 \
  --match state --state ESTABLISHED --jump ACCEPT
# iptables --table nat --append POSTROUTING --out-interface eth0 --jump MASQUERADE
The router now forwards packets between the two networks, masquerades the outgoing
packets from the SOHO Network (so responses at least come back to the router again),
and enables management from the SOHO Network as well. Notice that we allow NEW
connections from the LAN to the Internet, but not the other way around. Also, because
we use MASQUERADE, our router/firewall only has to FORWARD traffic that comes
back from the Internet as a response. As you may remember from the Packet Processing
Overview earlier in this document, when a reply comes back from the Internet, our
router uses the nat table to match the existing connection, applies PREROUTING
destination NAT as the packet comes in, and hits the FORWARD chain with a new
destination IP address.
A SOHO Network with a Separate Router and Firewall:
The previous scenario suggests there is one single network device, apart from switches,
hubs and modems, between the SOHO Network and the Internet. In most topologies, this
is not the case. Many SOHO Networks have a router from the Internet provider to
connect to the Internet. If that is the case, the network topology changes:
eth1 eth0
SOHO Network -------------[ ROUTER B ]-------------[ ROUTER A ]----------- Internet
192.168.2.0/24 192.168.2.1 192.168.1.2 192.168.1.1 public-ip 0.0.0.0/0
Where:
* ROUTER A is the router of the Internet provider.
* ROUTER B is your router
ROUTER A will now NAT all incoming packets to 192.168.1.2, and MASQUERADE all
outgoing packets with the public IP. The incoming packets will end up with ROUTER B,
which is also the firewall.
Suppose there is a webserver in the SOHO Network, which must be available to the
public (the Internet) as well as the clients on the LAN. This means you need to forward
all requests to ROUTER B, port 80 to the webserver (suppose this webserver is at
192.168.2.20):
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.2.20
The same goes for other services that are hosted on the LAN.
Configuring a Web Server:
Introducing Apache:
The Apache HTTP server is the most widely-used web server in the world. It provides
many powerful features, including dynamically loadable modules, robust media support,
and extensive integration with other popular software.
Configuring Apache:
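The listing below is ufw output, so the command that produces it is presumably ufw app
list (an assumption; the original command is not reproduced in the text):
# ufw app list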
Output
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH
Let's enable the most restrictive profile that will still allow the traffic you've configured,
permitting traffic on port 80 (normal, unencrypted web traffic):
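The command is not shown in the text; the ufw profile that matches this description is the
plain "Apache" profile, and the status listing below is then produced by ufw status (both
hedged):
# ufw allow 'Apache'
# ufw status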
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Apache ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Apache (v6) ALLOW Anywhere (v6)
Check with the systemd init system to make sure the service is running by typing:
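A hedged example, assuming the Debian-style service name apache2 used later in this
section:
# systemctl status apache2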
Access the default Apache landing page to confirm that the software is running properly
through your IP address:
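For example, assuming your_server_ip stands in for the machine's public IP address:
http://your_server_ip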
When using the Apache web server, you can use virtual hosts (similar to server blocks in
Nginx) to encapsulate configuration details and host more than one domain from a single
server. We will set up a domain called your_domain, but you should replace this with
your own domain name.
The permissions of your web roots should be correct if you haven't modified your
umask value, but you can make sure by typing:
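A hedged example, assuming the web root created for the domain is /var/www/your_domain:
# chmod -R 755 /var/www/your_domain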
Paste in the following configuration block, updated for our new directory and domain
name:
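The original block is not reproduced in the text; the following is a minimal sketch of a
typical Apache virtual host, assuming a Debian-style layout with the web root at
/var/www/your_domain:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName your_domain
    ServerAlias www.your_domain
    DocumentRoot /var/www/your_domain
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Enable the site and test for configuration errors (hedged commands, assuming the file is
saved as /etc/apache2/sites-available/your_domain.conf):
# a2ensite your_domain.conf
# apache2ctl configtest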
Output
Syntax OK
Apache should now be serving your domain name. You can test this by navigating to
http://your_domain, where you should see something like this:
Implementing SSI:
What is SSI?
SSI stands for Server Side Includes. As the name suggests, they are simple server side
scripts that are typically used as directives inside html comments.
Where to use SSI? There are several ways to use SSI. The two most common reasons to
use SSI are to serve dynamic content on your web page and to reuse a code snippet, as
shown below.
For example, to display the current time on your html page, you can use server side
includes. You don't need to use any other special server side scripting language for it.
The following html code snippet shows this example; the SSI script is the directive
inside the html comment.
Fig. 8.17 The SSI Script
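Figure 8.17 is not reproduced here; the following is a minimal sketch of such a page,
assuming the standard SSI echo directive for the local date and time:
<html>
<body>
<p>The current time is: <!--#echo var="DATE_LOCAL" --></p>
</body>
</html>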
You can also use SSI to reuse a html snippet on multiple pages. This is very helpful to
reuse header and footer information of a site on different pages.
This is the index.html, which includes both header and footer using server side includes.
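The file itself is not reproduced in the text; a hedged sketch, assuming the header and
footer snippets live alongside it as header.html and footer.html:
<html>
<body>
<!--#include file="header.html" -->
<p>Page content goes here.</p>
<!--#include file="footer.html" -->
</body>
</html>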
Similar to including a html page using SSI, you can also include the output of a CGI
script in the html using the following line:
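The line is not shown in the text; the standard SSI directive for this looks like the
following (the script path is a hypothetical example):
<!--#include virtual="/cgi-bin/example.cgi" -->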
We can instruct the webserver to interpret Server Side Includes either by using .htaccess
or by modifying the web-server config file directly.
Create .htaccess file in your web root and add the following lines of code:
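The lines are not reproduced in the text; a hedged sketch of a typical .htaccess for this
purpose (assuming the server's AllowOverride setting permits Options):
Options +Includes
AddOutputFilter INCLUDES .html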
The above lines instruct the web server to parse the .html extension for the server side
includes present in it.
We can also instruct the server to parse files with custom extensions as well. For
example, we can use the following lines for parsing the ".shtml" file extension.
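For example (hedged):
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml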
Similarly, for parsing CGI scripts we can add the following lines:
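A hedged example:
Options +ExecCGI
AddHandler cgi-script .cgi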
On the Apache web server, the following directive lines should be present in the
httpd.conf file for SSI:
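The directive lines themselves are not reproduced in the text; a typical set looks like
this (hedged):
Options +Includes
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml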
The first line tells Apache to allow the file to be parsed for SSI. The other lines tell
Apache which file extensions to parse.
To enable CGI in your Apache server, you need to load the module file mod_cgi.so or
mod_cgid.so in your Apache configuration file.
On CentOS, Red Hat, Fedora, and other RPM-based distributions, edit the
/etc/httpd/conf.modules.d/XX-cgi.conf configuration file and make sure the lines shown
below are not commented out.
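The file content is not reproduced in the text; the relevant lines typically look like this
(hedged; the exact content varies by release):
<IfModule mpm_prefork_module>
    LoadModule cgi_module modules/mod_cgi.so
</IfModule>
<IfModule !mpm_prefork_module>
    LoadModule cgid_module modules/mod_cgid.so
</IfModule>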
On Ubuntu, Debian, Linux Mint, and other Debian derivatives, use the following
command to enable the CGI module. This command creates a soft link from the module
configuration file to the /etc/apache2/mods-enabled/ directory.
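The command is presumably a2enmod (an assumption; it is the standard Debian tool for
enabling Apache modules):
# a2enmod cgi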
After enabling CGI modules in the Apache configuration, you need to restart the Apache
service on your system for the changes to take effect.
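Hedged examples; use the service name that matches your distribution (httpd on
CentOS/Red Hat/Fedora, apache2 on Ubuntu/Debian):
# systemctl restart httpd
# systemctl restart apache2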
Installing PHP
PHP is the component of your setup that will process code to display dynamic content. It
can run scripts, connect to your MySQL databases to get information, and hand the
processed content over to your web server to display.
Once again, leverage the apt system to install PHP. In addition, include some helper
packages this time so that PHP code can run under the Apache server and talk to your
MySQL database:
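The command itself is not shown in the text; a hedged example using the Ubuntu/Debian
package names the description implies:
# apt install php libapache2-mod-php php-mysql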
This should install PHP without any problems. We‘ll test this in a moment.
In most cases, you will want to modify the way that Apache serves files when a directory
is requested. Currently, if a user requests a directory from the server, Apache will first
look for a file called index.html. We want to tell the web server to prefer PHP files over
others, so make Apache look for an index.php file first.
To do this, type this command to open the dir.conf file in a text editor with root
privileges:
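A hedged example, assuming the Debian-style location of dir.conf:
# nano /etc/apache2/mods-enabled/dir.conf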
Move the PHP index file (index.php) to the first position after the DirectoryIndex
specification, like this:
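The file content is not reproduced in the text; after the change the directive typically
looks like this (hedged, Debian-style dir.conf):
<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>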
When you are finished, save and close the file by pressing CTRL+X. Confirm the save by
typing Y and then hit ENTER to verify the file save location.
After this, restart the Apache web server in order for your changes to be recognized. Do
this by typing:
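A hedged example, assuming the apache2 service name used in the next step:
# systemctl restart apache2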
You can also check on the status of the apache2 service using systemctl:
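For example:
# systemctl status apache2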
The SSL protocol is a standard security technology used to establish an encrypted link
between a web server and a web client. SSL facilitates secure network communication by
identifying and authenticating the server as well as ensuring the privacy and integrity of
all transmitted data. Since SSL prevents eavesdropping on or tampering with information
sent over the network, it should be used with any login or authentication mechanism and
on any network where communication contains confidential or proprietary information.
The use of SSL ensures that names, passwords, and other sensitive information cannot be
deciphered as they are sent between the Web Adaptor and the server. When you use SSL,
you connect to your web pages and resources using the HTTPS protocol instead of
HTTP.
In order to use SSL, you need to obtain an SSL certificate and bind it to the website that
hosts the Web Adaptor. Each web server has its own procedure for loading a certificate
and binding it to a website.
To be able to create an SSL connection between the Web Adaptor and your server, the
web server requires an SSL certificate. An SSL certificate is a digital file that contains
information about the identity of the web server. It also contains the encryption technique
to use when establishing a secure channel between the web server and ArcGIS Server. An
SSL certificate must be created by the owner of the website and digitally signed. There
are three types of certificates, CA-signed, domain, and self-signed, which are explained
below.
CA-signed certificates
Certificate authority (CA) signed certificates should be used for production systems,
particularly if your deployment of ArcGIS Server is going to be accessed from users
outside your organization. For example, if your server is not behind your firewall and
accessible over the Internet, using a CA-signed certificate assures clients from outside
your organization that the identity of the website has been verified.
In addition to being signed by the owner of the website, an SSL certificate may be signed
by an independent CA. A CA is usually a trusted third party that can attest to the
authenticity of a website. If a website is trustworthy, the CA adds its own digital
signature to that website's self-signed SSL certificate. This assures web clients that the
website's identity has been verified.
Fig. 8.22
Create domain certificate
In the Distinguished Name Properties dialog box, enter the required information
for the certificate:
For the Common name, you must enter the fully qualified domain name of the machine,
for example, gisserver.domain.com.
For the other properties, enter the information specific for your organization and
location.
Click Next.
In the Online Certification Authority dialog box, click Select and choose the
certification authority within your domain that will sign the certificate. If this option is
unavailable, enter your domain certification authority in the Specify Online Certification
Authority field, for example, City Of Redlands Enterprise
Root\REDCASRV.empty.local. If you need help with this step, consult your system
administrator.
Fig. 8.23 Online Certification Authority
Enter a user-friendly name for the domain certificate and click Finish.
The final step is for you to bind the domain certificate to SSL port 443.
Self-signed certificates
An SSL certificate signed only by the owner of the website is called a self-signed
certificate. Self-signed certificates are commonly used on websites that are only available
to users on the organization's internal (LAN) network. If you communicate with a
website outside your own network that uses a self-signed certificate, you have no way to
verify that the site issuing the certificate really represents the party it claims to represent.
You could actually be communicating with a malicious party, putting your information at
risk.
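On a Linux web server, the equivalent self-signed certificate can be generated from the
command line; a minimal sketch using openssl (the key and certificate file locations are
assumptions):
# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/server.key -out /etc/ssl/certs/server.crt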
In the Connections pane, select your server in the tree view and double-click Server
Certificates.
In the Actions pane, click Create Self-Signed Certificate.
Enter a user-friendly name for the new certificate and click OK.
The final step is for you to bind the self-signed certificate to SSL port 443.
Once you've created an SSL certificate, you'll need to bind it to the website hosting the
Web Adaptor. Binding refers to the process of configuring the SSL certificate to use port
443 on the website. The instructions for binding a certificate with the website vary
depending on the platform and version of your web server. For instructions, consult your
system administrator or your web server's documentation. For example, the steps for
binding a certificate in IIS are below.
Select your site in the tree view and in the Actions pane, click Bindings.
If port 443 is not available in the Bindings list, click Add. From the Type drop-down
list, select https. Leave the port at 443.
From the SSL certificate drop-down list, select your certificate name and click OK.
References:
UNIX and Linux System Administration Handbook, 4th Edition, by Evi Nemeth et al.,
Pearson Education.
Linux System Administration Recipes, 1st Edition, by Juliet Kemp, Springer-Verlag
Berlin and Heidelberg GmbH & Co. KG.
Linux: The Complete Reference, Sixth Edition, by Richard Petersen, Tata McGraw Hill
Company Limited.
www.linuxhomenetworking.com
www.opensource.com
www.linux.com
www.phoenixnap.com
www.booleanworld.com
www.linuxtoday.com