
CCS340 – CYBER SECURITY

LABORATORY

Prepared By
Mr. Murugesh Pandian
Assistant Professor/ IT

St. PETER'S
COLLEGE OF ENGINEERING AND TECHNOLOGY
Affiliated to Anna University | Approved by AICTE
Avadi, Chennai – 600 054
www.spcet.ac.in

INSTRUCTIONS TO STUDENTS

 Before entering the lab the student should carry the following things
(MANDATORY)
1. Identity card issued by the college.
2. Class notes
3. Lab observation book
4. Lab Manual
5. Lab Record
 Students must sign in and sign out in the register provided when attending the lab
session, without fail.
 Come to the laboratory on time. Students who are late by more than 15 minutes will
not be allowed to attend the lab.
 Students need to maintain 100% attendance in the lab, failing which strict action will be
taken.
 All students must follow a dress code while in the laboratory.
 Food and drinks are NOT allowed.
 All bags must be left at the indicated place.
 Refer to the lab staff if you need any help in using the lab.
 Respect the laboratory and its other users.
 The workspace must be kept clean and tidy after the experiment is completed.
 Read the manual carefully before coming to the laboratory and be sure about
what you are supposed to do.
 Do the experiments as per the instructions given in the manual.
 Copy all the programs taught in class to the observation book before attending the
lab session.
 Students are not supposed to use floppy disks or pen drives without permission of the
lab in-charge.
 Lab records need to be submitted on or before the date of submission.

INSTITUTION VISION
To emerge as an Institution of Excellence by providing High Quality Education in Engineering,
Technology and Management to contribute for the economic as well as societal growth of our
Nation.

INSTITUTION MISSION
 To impart strong fundamental and Value-Based Academic knowledge in various Engineering,
Technology and Management disciplines to nurture creativity.
 To promote innovative Research and Development activities by collaborating with Industries,
R&D organizations and other statutory bodies.
 To provide a conducive learning environment and training so as to empower the students with
dynamic skill development for employability.
 To foster an Entrepreneurial spirit amongst the students for making a positive impact on
community development.

DEPARTMENT OF INFORMATION TECHNOLOGY

VISION
 To emerge as a center of academic excellence to meet the industrial needs of the
competitive world with IT technocrats and researchers for the social and economic
growth of the country in the area of Information Technology

MISSION
 To provide quality education to the students to attain new heights in IT industry and
research

 To create employable students at national/international level by training them with
adequate skills.
 To produce good citizens with high personal and professional ethics to serve both the IT
industry and society.

PROGRAM EDUCATIONAL OBJECTIVES


Graduates will be able to

 PEO1: Demonstrate technical competence with analytical and critical thinking to
understand and meet the diversified requirements of industry, academia and research.

 PEO2: Exhibit technical leadership, team skills and entrepreneurship skills to
provide business solutions to real world problems.

 PEO3: Work in multi-disciplinary industries with social and environmental
responsibility, work ethics and adaptability to address complex engineering and
social problems.

 PEO4: Pursue lifelong learning, use cutting edge technologies and involve in
applied research to design optimal solutions.

PROGRAM OUTCOME
Program Outcomes describe the knowledge, skills and attitudes the students should acquire at the
end of a four-year engineering program.

PO-1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.

PO-2. Problem analysis: Identify, formulate, review research literature and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.

PO-3. Design/ development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.

PO-4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of the
information to provide valid conclusions.

PO-5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.

PO-6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.

PO-7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for sustainable development.
PO-8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.

PO-9. Individual and team work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.

PO-10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive clear
instructions.

PO-11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and leader
in a team, to manage projects and in multidisciplinary environments.

PO-12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

PROGRAM SPECIFIC OUTCOMES

To ensure graduates

PSO-1: Have proficiency in programming skills to design, develop and apply appropriate
techniques, to solve complex engineering problems.

PSO-2: Have knowledge to build, automate and manage business solutions using cutting edge
technologies.

PSO-3: Have excitement towards research in applied computer technologies.


CCS340 – CYBER SECURITY LABORATORY

COURSE OUTCOMES:

CO1: Explain the basics of cyber security, cyber crime and cyber law (K2)
CO2: Classify various types of attacks and learn the tools to launch the attacks (K2)
CO3: Apply various tools to perform information gathering (K3)
CO4: Apply intrusion techniques to detect intrusion (K3)
CO5: Apply intrusion prevention techniques to prevent intrusion (K3)

CO - PO MAPPING

COs     PO-1  PO-2  PO-3  PO-4  PO-5  PO-6  PO-7  PO-8  PO-9  PO-10  PO-11  PO-12  PSO-1  PSO-2  PSO-3

CO-1     1     1     1     1     -     1     -     -     -     -      1      -      2      2      2

CO-2     1     3     1     3     2     1     -     -     -     -      -      -      2      2      1

CO-3     2     1     1     1     -     1     -     -     -     -      1      -      2      2      2

CO-4     3     3     2     2     2     1     -     -     -     -      -      -      2      2      3

CO-5     3     2     1     1     1     1     -     1     -     -      1      -      2      2      2

Avg      2     2     1.2   1.6   1     1     0     0.2   0     0      0.6    0      2      2      2

1 - low, 2 - medium, 3 - high, “-”- no correlation


CCS340 – CYBER SECURITY LABORATORY

COURSE OBJECTIVES:
 To learn cybercrime and cyber law.
 To understand the cyber-attacks and tools for mitigating them.
 To understand information gathering.
 To learn how to detect a cyber-attack.

LIST OF EXPERIMENTS:

1. Install Kali Linux on Virtual box


2. Explore Kali Linux and bash scripting
3. Perform open source intelligence gathering using Netcraft, Whois Lookups, DNS
Reconnaissance, Harvester and Maltego
4. Understand the nmap command and scan a target using nmap
5. Install metasploitable2 on the virtual box and search for unpatched vulnerabilities
6. Use Metasploit to exploit an unpatched vulnerability
7. Install a Linux server on the virtual box and install ssh
8. Use Fail2ban to scan log files and ban IPs that show malicious signs
9. Launch brute-force attacks on the Linux server using Hydra.
10. Perform real-time network traffic analysis and data packet logging using Snort
INDEX SHEET

Exp. No.    Date of Plan    Name of the Experiment    Date of Conduction    Marks    Sign

1. Install Kali Linux on Virtual box


2. Explore Kali Linux and bash scripting
Perform open source intelligence gathering
3.
using Netcraft, Whois Lookups, DNS
Reconnaissance, Harvester and Maltego
4. Understand the nmap command and scan
a target using nmap
5. Install metasploitable2 on the virtual box
and search for unpatched vulnerabilities
6. Use Metasploit to exploit an unpatched
vulnerability
7. Install a Linux server on the virtual box and
install ssh
8. Use Fail2ban to scan log files and ban IPs
that show malicious signs
9. Launch brute-force attacks on the Linux
server using Hydra.
10. Perform real-time network traffic analysis
and data packet logging using Snort

Internal Marks Awarded: _______________

Signature of the Faculty Member Head of the Department


1 . INSTALL KALI LINUX ON VIRTUAL BOX

AIM:

To install Kali Linux on Virtual Box

INTRODUCTION:

Kali Linux is a Debian-derived Linux distribution designed for penetration testing. With over 600
preinstalled penetration-testing programs, it has earned a reputation as one of the best operating systems
for security testing. As a security-testing platform, it is best to install Kali as a VM on VirtualBox. Kali
has a rolling release model, ensuring up-to-date tools on your system. Also, there is an active community
of users providing ongoing support.

This step by step tutorial shows you how to install Kali Linux on VirtualBox.

PREREQUISITES

 At least 20 GB of disk space


 At least 1 GB of RAM (preferably 2) for i386 and amd64 architectures
 VirtualBox (or alternative virtualization software)
STEPS FOR INSTALLING KALI LINUX ON VIRTUALBOX

Since these instructions take you through the installation process in a virtual environment,
you need to ensure you have one set up on your system. In this article, we are using VirtualBox,
as it is a simple to use, open-source virtualization solution.

In case you do not have VirtualBox installed, use this step-by-step VirtualBox installation guide.

STEP 1: Download Kali Linux ISO Image

On the official Kali Linux website downloads section, you can find Kali Linux .iso images.
These images are uploaded every few months, providing the latest official releases.

Navigate to the Kali Linux Downloads page and find the packages available for download.
Depending on the system you have, download the 64-Bit or 32-Bit version.
STEP 2: Create Kali Linux VirtualBox Container

After downloading the .iso image, create a new virtual machine and import Kali as its OS.

1. Launch VirtualBox Manager and click the New icon.

2. Name and operating system. A pop-up window for creating a new VM appears. Specify
a name and a destination folder. The Type and Version change automatically, based on the name
you provide. Make sure the information matches the package you downloaded and click Next.

3. Memory size. Choose how much memory to allocate to the virtual machine and click Next.
The default setting for Linux is 1024 MB. However, this varies depending on your individual
needs.

4. Hard disk. The default option is to create a virtual hard disk for the new VM. Click Create to
continue. Alternatively, you can use an existing virtual hard disk file or decide not to add one at
all.
5. Hard disk file type. Stick to the default file type for the new virtual hard
disk, VDI (VirtualBox Disk Image). Click Next to continue.

6. Storage on a physical hard disk. Decide between Dynamically allocated and Fixed size. The
first choice allows the new hard disk to grow and fill up space dedicated to it. The second, fixed
size, uses the maximum capacity from the start. Click Next.

7. File location and size. Specify the name and where you want to store the virtual hard disk.
Choose the amount of file data the VM is allowed to store on the hard disk. We advise giving it
at least 8 gigabytes. Click Create to finish.

Now you have created a new VM. It appears in the list in the VirtualBox Manager.
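As an optional command-line alternative to the GUI steps above, the same VM can be created with the VBoxManage tool. This is only a minimal sketch: the VM name "Kali", memory size, and file paths are assumptions you should adapt to your own setup.

# Create and register the VM entry
VBoxManage createvm --name "Kali" --ostype Debian_64 --register

# Allocate 2 GB of RAM and 2 CPUs
VBoxManage modifyvm "Kali" --memory 2048 --cpus 2

# Create a 20 GB dynamically allocated virtual disk (path is an example)
VBoxManage createmedium disk --filename ~/vms/kali.vdi --size 20480

# Attach the disk and the downloaded installer ISO
VBoxManage storagectl "Kali" --name "SATA" --add sata --controller IntelAhci
VBoxManage storageattach "Kali" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/vms/kali.vdi
VBoxManage storagectl "Kali" --name "IDE" --add ide
VBoxManage storageattach "Kali" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium ~/Downloads/kali-linux.iso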
STEP 3: Configure Virtual Machine Settings

The next step is adjusting the default virtual machine settings.

1. Select a virtual machine and click the Settings icon. Make sure you marked the correct VM
and that the right-hand side is displaying details for Kali Linux.
2 In the Kali Linux – Settings window, navigate to General > Advanced tab. Change
the Shared Clipboard and Drag‘n‘Drop settings to Bidirectional. This feature allows you
to copy and paste between the host and guest machine.

3 Go to System > Motherboard. Set the boot order to start from Optical, followed by Hard
Disk. Uncheck Floppy as it is unnecessary.

4 Next, move to the Processor tab in the same window. Increase the number of processors
to two (2) to enhance performance.
5 Finally, navigate to Storage settings. Add the downloaded Kali image to a storage
device under Controller: IDE. Click the disk icon to search for the image. Once
finished, close the Settings window.

6 Click the Start icon to begin installing Kali.


STEP 4: Installing and Setting Up Kali Linux

After you booted the installation menu by clicking Start, a new VM VirtualBox window appears
with the Kali welcome screen.

Select the Graphical install option and go through the following installation steps for setting up
Kali Linux in VirtualBox.

1. Select a language. Choose the default language for the system (which will also be the
language used during the installation process).

2. Select your location. Find and select your country from the list (or choose "other").

3. Configure the keyboard. Decide which keymap to use. In most cases, the best option is to
select American English.

4. Configure the network. First, enter a hostname for the system and click Continue.
5. Next, create a domain name (the part of your internet address after your hostname). Domain
names usually end in .com, .net, .edu, etc. Make sure you use the same domain name on all your
machines.

6. Set up users and passwords. Create a strong root password for the system administrator
account.

7. Configure the clock. Select your time zone from the available options.

8. Partition disks. Select how you would like to partition the hard disk. Unless you have a good
reason to do it manually, go for the Guided –use entire disk option.
9. Then, select which disk you want to use for partitioning. As you created a single virtual hard
disk in Step 3: Adjust VM Settings, you do not have to worry about data loss. Select the only
available option – SCSI3 (0,0,0) (sda) – 68.7 GB ATA VBOX HARDDISK (the details after the
dash vary depending on your virtualization software).

10. Next, select the scheme for partitioning. If you are a new user, go for All files in one
partition.

11. The wizard gives you an overview of the configured partitions. Continue by navigating
to Finish partitioning and write changes to disk. Click Continue and confirm with Yes.

12. The wizard starts installing Kali. While the installation bar loads, additional configuration
settings appear.
13. Configure the package manager. Select whether you want to use a network mirror and
click Continue. Enter the HTTP proxy information if you are using one. Otherwise, leave the
field blank and click Continue again.

14. Install the GRUB boot loader on a hard disk. Select Yes and Continue. Then, select a boot
loader device to ensure the newly installed system is bootable.

15. Once you receive the message Installation is complete, click Continue to reboot your VM.

With this, you have successfully installed Kali Linux on VirtualBox. After rebooting, the Kali
login screen appears. Type in a username (root) and password you entered in the previous steps.

Finally, the interface of Kali Linux appears on your screen.


2. BASICS OF BASH SCRIPTING ON KALI LINUX

AIM:
To explore Kali Linux and Bash scripting.

INTRODUCTION:

When we are talking about Linux and the terminal, we cannot leave out Bash scripting. Bash scripting
is very helpful for becoming a cybersecurity expert; we can automate payloads and other tasks.
In this article we are going to talk about Bash scripting and how to write accurate scripts on
Linux.

The GNU Bourne-Again Shell (Bash) is a powerful tool and scripting engine. We can
automate many tasks on the command line. In this guide we learn Bash scripting and
see some practical use cases. Here we assume that we know about Linux files, which were
discussed in a previous article.

Introduction to Bash Scripting

A Bash script is a plain-text file that contains a series of commands that are executed as if they
had been typed in a terminal window. In general, Bash scripts have an optional extension
of .sh for identification (but a script can be run without the extension), begin with #!/bin/bash and
must have the executable permission set before the script can be executed. Let's write a simple
"Hello World" Bash script in a new file using any text editor: name it hello-world.sh and write
the following contents inside it:

#!/bin/bash

# Hello World on Bash Script.

echo "Hello World!"

Then save and close it. In the above script we used some components which we need to explain:

 Line 1: #! is known as the shebang, and it is ignored by the Bash interpreter. The second
part, /bin/bash, is the absolute path to the interpreter, which is used to run the script. From this we can
identify that this is a "Bash script". There are various types of shell scripts, like "zsh" and "C Shell"
scripts.
 Line 2: # is used to add a comment. Hashed (#) text will be ignored by the interpreter. These
comments help us take special notes for the scripts.
 Line 3: echo "Hello World!" uses the echo Linux command utility to print a given string to the
terminal, which in this case is "Hello World!".

Now we need to make this script executable by running the following command:

chmod +x hello-world.sh

In the following screenshot we can see the output of the above command:

Now we can run the script by using the following command:


bash hello-world.sh

We can see that our script shows output of "Hello World!" on our terminal as we can see in the
following screenshot:

The chmod command with the +x flag is used to make the Bash script executable, and with bash
followed by scriptname.sh we can run it. We can also use ./scriptname.sh to run the script. This was our
first Bash script. Let's explore Bash in a bit more detail.

Variables

Variables are used to temporarily store data. We can declare a variable to assign a value to
it, or read a variable, which will "expand" or "resolve" it to its stored value.

We can declare variable values in various ways. The easiest method is to set the value directly
with a simple name=value declaration. We should remember that there must be no spaces before or
after the "=" sign.

On our terminal we can run the following command:

name=Kali
Then we again run another command:

surname=Linux

Declaring a variable is pointless unless we can use/reference it. To do this, we precede the variable
name with the $ character. Whenever Bash sees this ($) syntax in a command, it replaces the variable name
with its value before executing the command. For example, we can echo both of these variables by
using the following command:

echo $name $surname

In the following screenshot we can see that the output shows the values of the variables:

Variable names may be uppercase, lowercase or a mixture of both. Bash is case sensitive, so
we must be consistent when declaring and expanding variables. It is good practice to use
descriptive variable names, which make our script much easier for others to understand and
maintain. Bash interprets certain characters in specific ways. For example, the following
declaration demonstrates an improper multi-value variable declaration:

hello=Hello World

In the following screenshot, we can see the output.

This was not necessarily what we expected. To fix this type of error we can use single quotes (')
or double quotes (") to enclose our text. Here we need to know that Bash treats single quotes and
double quotes differently. When Bash meets single quotes, it interprets every enclosed
character literally. When text is enclosed in double quotes, all characters are viewed literally except "$"
and "\", meaning variables will be expanded in an initial substitution pass on the enclosed text.

In the above scenario, the following will help to clarify:

hello='Hello World'
Now we can print this variable using echo, shown in following screenshot:

In the above example, we used single quotes (') to set the variable. But when we combine the
hello variable with other text, we need to use double quotes ("); see the following for better
understanding:

hello2="Hi, $hello"

Now we can see the print (echo) of new $hello2 variable on the following screenshot:

We can also set the value of the variable to the result of a command or script. This is also known
as command substitution, which allows us to take the output of a command (what would
normally be printed to the screen) and have it saved as the value of a variable.

To do this, place the command in parentheses "()", preceded by a "$" character:

user=$(whoami)

echo $user

Here we assigned the output of the whoami command to the user variable. We then displayed its
value with echo. In the following screenshot we can see the output of the above command:
An alternative syntax for command substitution uses backticks (`), as we can see in the following
commands:

user2=`whoami`

echo $user2

This backtick method is older and typically discouraged, as there are differences in how the two
methods of command substitution behave. It is also important to note that command substitution
happens in a subshell, and changes to variables in the subshell will not alter variables in the
parent process.

Arguments

Not all Bash scripts require arguments. However, it is extremely important to understand how
they are interpreted by bash and how to use them. We have already executed Linux commands
with arguments. For example, when we run command ls -l /var/log, both -l and /var/log are
arguments to the ls command.

Bash scripts are no different: we can supply command-line arguments and use them in our
scripts. For an example, see the sketch below:
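A minimal sketch of such a script (the file name arg-demo.sh and the sample arguments are only illustrative, standing in for the screenshot in the original article):

#!/bin/bash

# arg-demo.sh - print the first two arguments and the argument count
echo "First argument: $1"
echo "Second argument: $2"
echo "Number of arguments: $#"

We make it executable and run it with two arguments:

chmod +x arg-demo.sh
./arg-demo.sh hello world

which prints:

First argument: hello
Second argument: world
Number of arguments: 2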
In the script above, we created a simple Bash script, set executable permissions on it,
and then ran it with two arguments. The $1 and $2 variables represent the first and second
arguments passed to the script. Let's explore a few special Bash variables:

Variable Name Description

$0 The name of the Bash script

$1 - $9 The first 9 arguments to the Bash script

$# Number of arguments passed to the Bash script

$@ All arguments passed to the Bash script

$? The exit status of the most recently run process

$$ The process id of the current script

$USER The username of the user running the script

$HOSTNAME The hostname of the machine


$RANDOM A random number

$LINENO The current line number in the script

Some of these special variables can be useful when debugging a script. For example, we can
obtain the exit status of a command to determine whether it was executed successfully or
not, as in the short sketch below.
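A minimal sketch (ping and the address 8.8.8.8 are only examples):

#!/bin/bash

# Run a command quietly, then test the exit status of the last command ($?)
ping -c 1 8.8.8.8 > /dev/null 2>&1

if [ $? -eq 0 ]; then
    echo "The last command succeeded"
else
    echo "The last command failed"
fi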

Reading User Input

Command-line arguments are a form of user input, but we can also capture interactive user input
while a script is running with the read command. We use read to capture user input
and assign it to a variable.

We can alter the behavior of the read command with various command-line options. Two of the
most commonly used flags are -p, which allows us to specify a prompt, and -s, which makes the
user input silent/invisible (helpful for credentials). An example is sketched below:
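A minimal sketch of both flags (the prompts and variable names are only illustrative):

#!/bin/bash

# -p prints a prompt before reading; -s hides what the user types
read -p "Username: " username
read -sp "Password: " password
echo
echo "Logging in as $username"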
If, Else, Elif

If, else and elif are the most common conditional statements, which allow us to perform
different actions based on different conditions.

The if statement is quite simple. It checks whether a condition is true, but it requires a very
specific syntax. We need to pay careful attention to this syntax, especially the use of the required
spaces.

In the general syntax, if "some statement" is true the script will "do some action"; these actions
can be any commands between then and fi. Let's look at an actual example.
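A minimal sketch of that example (the prompt and warning text are only illustrative):

#!/bin/bash

# Ask for the user's age and warn if it is below 12
read -p "What is your age: " age

if [ $age -lt 12 ]; then
    echo "Sorry, you are too young to ride the rollercoaster."
fi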
In the above example, we used an if statement to check the age entered by the user. If the user's
age is less than (-lt) 12, the script outputs a warning message.

The square brackets ([ ]) in the if statement above are actually a reference to the test
command. This simply means we can use all of the operators that are allowed by the test
command. Some of the widely used operators include:

 -n VAR - True if the length of VAR is greater than zero.


 -z VAR - True if the VAR is empty.
 STRING1 = STRING2 - True if STRING1 and STRING2 are equal.
 STRING1 != STRING2 - True if STRING1 and STRING2 are not equal.
 INTEGER1 -eq INTEGER2 - True if INTEGER1 and INTEGER2 are equal.
 INTEGER1 -gt INTEGER2 - True if INTEGER1 is greater than INTEGER2.
 INTEGER1 -lt INTEGER2 - True if INTEGER1 is less than INTEGER2.
 INTEGER1 -ge INTEGER2 - True if INTEGER1 is equal or greater than INTEGER2.
 INTEGER1 -le INTEGER2 - True if INTEGER1 is equal or less than INTEGER2.
 -h FILE - True if the FILE exists and is a symbolic link.
 -r FILE - True if the FILE exists and is readable.
 -w FILE - True if the FILE exists and is writable.
 -x FILE - True if the FILE exists and is executable.
 -d FILE - True if the FILE exists and is a directory.
 -e FILE - True if the FILE exists and is a file, regardless of type (node, directory, socket, etc.).
 -f FILE - True if the FILE exists and is a regular file (not a directory or device).

We could apply these operators to the above if statement and remove the square brackets by
calling the test command directly, but we think the square brackets make the code more readable.

We can also perform one set of actions if a statement is true and a different set if it is false.
To do this, we can use the else statement, which adds an else block before the closing fi.

As an example, we expand our previous age script to include an else statement, as sketched
below:
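A sketch of the expanded example (again, the exact messages are only illustrative):

#!/bin/bash

read -p "What is your age: " age

if [ $age -lt 12 ]; then
    echo "Sorry, you are too young to ride the rollercoaster."
else
    echo "Enjoy the ride!"
fi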
We can easily notice that the else branch is executed when the entered age is not less
than 12.

We can add more conditions to the statement with the help of the elif statement.

Let's extend our age example with an elif branch, as sketched below:
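A sketch of the extended example (the "Salute ..." message for ages above 60 follows the original description; the other messages are assumptions):

#!/bin/bash

read -p "What is your age: " age

if [ $age -lt 12 ]; then
    echo "Sorry, you are too young to ride the rollercoaster."
elif [ $age -gt 60 ]; then
    echo "Salute! You are never too old to have fun."
else
    echo "Enjoy the ride!"
fi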
In the above example we can see that the code is a little more complex compared to plain if and else. Here,
when the user inputs an age greater than 60, the elif branch is executed and outputs the "Salute
..." message.

These are the basic uses of Bash, and we have learned some simple Bash scripts here. There are many
more topics to cover, but we do not want to make this section longer.

In this section we learned the basics of Bash scripting on Kali Linux. This tutorial will work not only on
Kali Linux but on any Debian-based Linux distro such as Ubuntu or Linux Mint.
(OR)

BASH SCRIPTING – INTRODUCTION TO BASH AND BASH SCRIPTING


Bash is a command-line interpreter, or Unix shell, widely used in GNU/Linux
operating systems. It was written by Brian Jhan Fox and is used as the default login shell for most
Linux distributions. Scripting is used to automate the execution of tasks so that humans do
not need to perform them individually. Bash scripting is a great way to automate different
types of tasks in a system, and developers can avoid doing repetitive tasks using it.

Bash scripting supports variables, conditional statements, and loops just like programming
languages. Below are some of the applications of Bash scripts –

APPLICATIONS OF BASH SCRIPTS:


 Manipulating files
 Executing routine tasks like Backup operation
 Automation

ADVANTAGES OF BASH SCRIPTS:


 It is simple.
 It helps to avoid doing repetitive tasks
 Easy to use
 Frequently performed tasks can be automated
 A sequence of commands can be run as a single command.

DISADVANTAGES OF BASH SCRIPTS:


 Any mistake while writing can be costly.
 A new process is launched for almost every shell command executed
 Slow execution speed
 Compatibility problems between different platforms
HOW TO WRITE BASH SCRIPTS?
To write a Bash Script we will follow the steps –

 First, we will create a file with the .sh extension.


 Next, we will write down the bash scripts within it
 After that, we will provide execution permission to it.
To create and write a file with the .sh extension we can use gedit text editor. The command for it
will be –
gedit scriptname.sh

The first line of our script file will be –

#!/bin/bash

This tells the system to use Bash for execution. Then we can write our own scripts.

Let‘s write down just a simple script that will print some lines in the terminal. The code for it
will be –

#!/bin/bash

echo "Hello, GeeksforGeeks"

Now we will save the file and provide the execution permission to it. To do so use the following
command –

chmod +x scriptname.sh

Next to execute the following script we will use the following command –

./scriptname.sh

Here is the terminal shell pictorial depiction after executing the above commands as follows:
Here the script file name is gfg.sh.

Now we can also write more complicated commands using Bash scripts. Here is an
example below in which we are using a conditional statement –

EXAMPLE SCRIPT:
#!/bin/bash

Age=17

if [ "$Age" -ge 18 ]; then

echo "You can vote"

else

echo "You cannot vote"

fi
OUTPUT:
You cannot vote

Here is the terminal shell pictorial depiction after executing the above script as follows:

In the above way, we can execute multiple Bash commands all at once.

Now Let‘s look into the other important concepts related to Bash Scripting.
File Names and Permissions
In the above example, we saved the file with the name gfg.sh and also provided execute
permission using the chmod command. Now, let's understand why we have done that.
While writing bash scripts we should save our file with the .sh extension, so that the Linux
system can execute it. When we first create a file with the .sh extension, it doesn‘t have any
execute permission and without the execute permission the script will not work. So, we should
provide execute permission to it using the chmod command.
The filename of a bash script can be anything but by convention, it is recommended to use
snake case ( my_bash_script.sh ) or hyphens ( my-bash-script.sh ) for naming a script file.

VARIABLES
We can use variables in bash scripting. Below is a sample program to understand the usage of
variables in Bash scripts.

EXAMPLE SCRIPT:
Name="SATYAJIT GHOSH"

Age=20

echo "The name is $Name and Age is $Age"


OUTPUT OF VARIABLES:
The name is SATYAJIT GHOSH and Age is 20

So, here we have declared two variables, Name and Age. These variables are
accessible using $Name and $Age. That means we can declare a variable in a Bash script
using VariableName=Value and access it using $VariableName. Here is the terminal shell
pictorial depiction after executing the above script as follows:

There are two types of variables present within Bash scripting. Conventionally, if a variable is
declared inside a function it is generally a local variable, and if it is declared outside then it
is a global variable. In the case of a Bash script this concept is a little different: here any
variable, whether it is written inside a function or outside one, is by default a global
variable. If we want to make a local variable then we need to use the keyword "local".
Note: It is best practice to always use local variables inside a function to avoid any unnecessary
confusion.
An example of the same is given below –

EXAMPLE SCRIPT:
#!/bin/bash
var1="Apple" #global variable
myfun(){
local var2="Banana" #local variable
var3="Cherry" #global variable
echo "The name of first fruit is $var1"
echo "The name of second fruit is $var2"
}
myfun #calling function
echo "The name of first fruit is $var1"
#trying to access local variable
echo "The name of second fruit is $var2"
echo "The name of third fruit is $var3"
OUTPUT OF LOCAL AND GLOBAL VARIABLES:
The name of first fruit is Apple
The name of second fruit is Banana
The name of first fruit is Apple
The name of second fruit is
The name of third fruit is Cherry
Here in the above example, var2 is a local variable, so when we access it from inside the
function it works fine, but when we try to access it outside the function, it gives us an
empty result in the output.
On the other hand, unlike most programming languages, even though var3 is defined inside a
function it still acts as a global variable and can be accessed outside the function. Below is
the terminal shell depiction after executing the script –
INPUT AND OUTPUT
Input & output are fundamental concepts for shell scripting. A script can take one or
more inputs and can also produce zero or many outputs. It may even produce some errors. Let‘s
understand this with an example –

EXAMPLE SCRIPT:
echo "Enter filename"
read filename
if [ -e $filename ]
then
echo "$filename is exits on the directory"
cat $filename
else
cat > $filename
echo "File created"
fi

OUTPUT OF INPUT & OUTPUT:


First time:

Enter filename

geeks.txt

Hello Geek

File created

Second time:

Enter filename

geeks.txt

geeks.txt exists in the directory

Hello Geek
So, in the above example, the first time the script could not find any file with that file
name, and the else block was executed: it created the file and put some data into it. When
we run it a second time with the same file name, it finds the file, so the if block is
executed and the contents of the file are displayed. Reading the file contents is input, and
putting data into the file on the first run is output. Here we have used > for storing the
content in a file. The > notation is used to redirect stdout to a file. On the other hand, we can
use the 2> notation to redirect stderr, and &> to redirect both stdout and stderr.
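For instance, a minimal sketch of the three redirections (the file names out.txt, err.txt and all.txt are only examples):

ls /etc > out.txt                  # stdout goes to out.txt
ls /nonexistent 2> err.txt         # stderr goes to err.txt
ls /etc /nonexistent &> all.txt    # both stdout and stderr go to all.txt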
Below is the terminal shell pictorial depiction after executing the following script –
FUNCTIONS
In programming, A function is a block of code that performs some tasks and it can be
called multiple times for performing tasks. The simplest example of the use of function in Bash
scripting can be given as –

EXAMPLE SCRIPT:
#!/bin/bash

#It is a function

myFunction () {

echo Hello World from GeeksforGeeks

}

#function call

myFunction

OUTPUT OF FUNCTIONS:
Hello World from GeeksforGeeks

The above example shows a function that prints something when called.

So, the basic syntax for writing functions within a Bash Script will be –

SYNTAX OF FUNCTIONS:
#for defining

function_name(){

commands

.....

}

function_name # for calling

Besides this, we can also have functions that take arguments and return values; a small sketch follows.
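A minimal sketch of a function that takes two arguments (the function name add and the sample values are only illustrative; note that in Bash, return only passes back an exit status, so results are usually printed with echo):

#!/bin/bash

# add() prints the sum of the two numbers passed as arguments
add () {
    local sum=$(( $1 + $2 ))
    echo "Sum of $1 and $2 is $sum"
    return 0    # return carries only an exit status (0-255)
}

add 4 6                  # function call with arguments
echo "Exit status: $?"

Running this prints "Sum of 4 and 6 is 10" followed by "Exit status: 0".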
DECISION MAKING
In programming, decision making is one of the important concepts. The programmer
provides one or more conditions for the execution of a block of code; only if the conditions are
satisfied does that block of code get executed.

Two types of decision-making statements are used within shell scripting. They are –

1. IF-ELSE STATEMENT:

The if-else statement is a conditional statement. It can be used to execute two different blocks of code
based on whether the given condition is satisfied or not.

There are a couple of varieties present within the if-else statement. They are –

 if-fi
 if-else-fi
 if-elif-else-fi
 nested if-else
The syntax for the simplest one will be –

SYNTAX OF IF-ELSE STATEMENT:


if [ expression ]; then

statements

fi

EXAMPLE SCRIPT:
Name="Satyajit"

if [ "$Name" = "Satyajit" ]; then

echo "His name is Satyajit. It is true."

fi
OUTPUT OF IF-ELSE STATEMENT:
His name is Satyajit. It is true.

In the above example, during the condition check the name matches and the condition
becomes true. Hence, the block of code present within the if block gets executed. In case the
name doesn't match, there will be no output. Below is the terminal shell pictorial depiction
after executing the following script –

2. CASE-ESAC STATEMENT:
The case-esac statement basically works the same as a switch statement in programming. Sometimes, if
we have to check multiple conditions, it may get complicated
using if statements. At those moments we can use a case-esac statement. The syntax will be –

SYNTAX OF CASE-ESAC STATEMENT:


case $var in

Pattern 1) Statement 1;;

Pattern n) Statement n;;

esac

EXAMPLE SCRIPT:
Name="Satyajit"

case "$Name" in

#case 1

"Rajib") echo "Profession : Software Engineer" ;;

#case 2
"Vikas") echo "Profession : Web Developer" ;;

#case 3

"Satyajit") echo "Profession : Technical Content Writer" ;;

esac

OUTPUT OF CASE-ESAC STATEMENT:


Profession : Technical Content Writer

In the above example, the case-esac statement executed the statement which is part of the
matched pattern, here the 'Name'. Below is the terminal shell pictorial depiction after executing
the following script –

STRING AND NUMERIC COMPARISONS


String comparison means that in shell scripting we can make decisions by comparing
strings as well. Here is a descriptive table with all the operators –

Operator Description

== Returns true if the strings are equal

!= Returns true if the strings are not equal

-n Returns true if the string to be tested is not null

-z Returns true if the string to be tested is null

Arithmetic operators are used for checking the arithmetic-based conditions. Like less than,
greater than, equals to, etc. Here is a descriptive table with all the operators –

Operator Description

-eq Equal

-ge Greater Than or Equal

-gt Greater Than

-le Less Than or Equal

-lt Less Than

-ne Not Equal

Below is a simple example of the same –


EXAMPLE SCRIPT:
if [ 10 -eq 10 ];then

echo "Equal"

fi

if [ 'Geeks' == 'Geeks' ];

then

echo "same" #output

else

echo "not same"

fi

OUTPUT OF STRING AND NUMERIC COMPARISONS:


Equal

same

In this example the first one (-eq) is a numeric comparison that checks for equality. The second one
(==) also checks for equality, but on strings. Below is the terminal shell pictorial depiction
after executing the following script –
3: PERFORM OPEN SOURCE INTELLIGENCE GATHERING USING
NETCRAFT, WHOIS LOOKUPS, DNS RECONNAISSANCE,
HARVESTER AND MALTEGO

AIM:

To perform open source intelligence gathering using Netcraft, Whois lookups, DNS
reconnaissance, Harvester and Maltego.

Introduction:

 Open-Source Intelligence (OSINT) Meaning


 History of OSINT
 How Attackers and Defenders Use OSINT
 OSINT Gathering Techniques
 Artificial Intelligence: The Future of OSINT?
 OSINT Tools
 Open Source Investigation Best Practices
 Imperva Application Protection Powered by Threat Intelligence

Open-Source Intelligence (OSINT)


Open Source Intelligence (OSINT) is a method of gathering information from public or
other open sources, which can be used by security experts, national intelligence agencies, or
cybercriminals. When used by cyber defenders, the goal is to discover publicly available
information related to their organization that could be used by attackers, and take steps to prevent
those future attacks.
OSINT leverages advanced technology to discover and analyze massive amounts of data,
obtained by scanning public networks, from publicly available sources like social media
networks, and from the deep web—content that is not crawled by search engines, but is still
publicly accessible.
OSINT tools may be open source or proprietary: the distinction should be made between
open source code and open source content. Even if the tool itself is not open source, as an
OSINT tool, it provides access to openly available content, known as open source intelligence.
History of OSINT
The term OSINT was originally used by the military and intelligence community, to
denote intelligence activities that gather strategically important, publicly available information
on national security issues.
In the cold war era, espionage focused on obtaining information via human sources
(HUMINT) or electronic signals (SIGINT), and in the 1980s OSINT gained prominence as an
additional method of gathering intelligence.
With the advent of the Internet, social media, and digital services, open source
intelligence grants access to numerous resources to gather intelligence about every aspect of an
organization‘s IT infrastructure and employees. Security organizations are realizing that they
must collect this publicly available information, to stay one step ahead of attackers.
A CISO‘s primary goal is to find information that could pose a risk to the organization.
This allows CISOs to reduce risk before an attacker exploits a threat. OSINT should be used in
combination with regular penetration testing, in which information discovered via OSINT is used
to simulate a breach of organizational systems.
How Attackers and Defenders Use OSINT
There are three common uses of OSINT: by cybercriminals, by cyber defenders, and by
those seeking to monitor and shape public opinion.
HOW SECURITY TEAMS USE OSINT
For penetration testers and security teams, OSINT aims to reveal public information
about internal assets and other information accessible outside the
organization. Metadata accidentally published by your organization may contain sensitive
information.
For example, useful information that can be revealed through OSINT includes open
ports; unpatched software with known vulnerabilities; publicly available IT information such as
device names, IP addresses and configurations; and other leaked information belonging to the
organization.
Websites outside of your organization, especially social media, contain huge amounts of
relevant information, especially information about employees. Vendors and partners may also be
sharing specific details about an organization‘s IT environment. When a company acquires other
companies, their publicly available information becomes relevant as well.
HOW THREAT ACTORS USE OSINT
A common use of OSINT by attackers is to retrieve personal and professional
information about employees on social media. This can be used to craft spear-
phishing campaigns, targeted at individuals who have privileged access to company resources.
LinkedIn is a great resource for this type of open source intelligence, because it reveals
job titles and organizational structure. Other social networking sites are also highly valuable for
attackers, because they disclose information such as dates of birth, names of family members and
pets, all of which can be used in phishing and to guess passwords.
Another common tactic is to use cloud resources to scan public networks for unpatched
assets, open ports, and misconfigured cloud datastores. If an attacker knows what they are
looking for, they can also retrieve credentials and other leaked information from sites like
GitHub. Developers who are not security conscious can embed passwords and encryption keys in
their code, and attackers can identify these secrets through specialized searches.
OTHER USES OF OSINT
In addition to cybersecurity, OSINT is also frequently used by organizations or
governments seeking to monitor and influence public opinion. OSINT can be used for marketing,
political campaigns, and disaster management.

OSINT Gathering Techniques


Here are three methods commonly used to gain open intelligence data.
PASSIVE COLLECTION
This is the most commonly used way to gather OSINT intelligence. It involves scraping
publicly available websites, retrieving data from open APIs such as the Twitter API, or pulling
data from deep web information sources. The data is then parsed and organized for consumption.
SEMI-PASSIVE
This type of collection requires more expertise. It directs traffic to a target server to
obtain information about the server. Scanner traffic must be similar to normal Internet traffic to
avoid detection.
ACTIVE COLLECTION
This type of information collection interacts directly with a system to gather information
about it. Active collection systems use advanced technologies to access open ports, and scan
servers or web applications for vulnerabilities.
This type of data collection can be detected by the target and reveals the reconnaissance
process. It leaves a trail in the target‘s firewall, Intrusion Detection System (IDS), or Intrusion
Prevention System (IPS). Social engineering attacks on targets are also considered a form of
active intelligence gathering.
Artificial Intelligence: The Future of OSINT?
OSINT technology is advancing, and many are proposing the use of artificial intelligence
and machine learning (AI/ML) to assist OSINT research.
According to public reports, government agencies and intelligence agencies are already
using artificial intelligence to gather and analyze data from social media. Military organizations
are using AI/ML to identify and combat terrorism, organized cybercrime, false propaganda, and
other national security concerns on social media channels.
As AI/ML techniques become available to the private sector, they can help with:

 Improving the data collection phase—filtering out noise and prioritizing data
 Improving the data analysis phase—correlating relevant information and identifying useful
structures
 Improving actionable insights—AI/ML analysis can be used to review far more raw data than
human analysts can, deriving more actionable insights from the available data.
OSINT Tools
Here are some of the most popular OSINT tools.
MALTEGO
Maltego is part of the Kali Linux operating system, commonly used by network
penetration testers and hackers. It is open source, but requires registration with Paterva, the
solution vendor. Users can run a "machine", a type of scripting mechanism, against a target,
configuring it according to the information they want to collect.

MAIN FEATURES INCLUDE:

 Built-in data transformations.


 Ability to write custom transformations.
 Built-in footprints that can collect information from sources and create a visualization of data
about a target.
SPIDERFOOT
Spiderfoot is a free OSINT tool available on Github. It integrates with multiple data sources, and
can be used to gather information about an organization including network addresses, contact
details, and credentials.
MAIN FEATURES INCLUDE:

 Gathers and analyzes network data including IP addresses, classless inter-domain routing (CIDR)
ranges, domains and subdomains.
 Gathers email addresses, phone numbers, and other contact details.
 Collects usernames for accounts operated by an organization.
 Collects Bitcoin addresses.
SPYSE
Spyse is an "Internet assets search engine", designed for security professionals. It collects data
from publicly available sources, analyzes them, and identifies security risks.
MAIN FEATURES INCLUDE:

 Collects data from websites, website owners, and the infrastructure they are running on
 Collects data from publicly exposed IoT devices
 Identifies connections between entities
 Reports on publicly exposed data that represents a security risk
INTELLIGENCE X
Intelligence X is an archival service that preserves historical versions of web pages that
were removed for legal reasons or due to content censorship. It preserves any type of content, no
matter how dark or controversial. This includes not only data censored from the public Internet
but also data from the dark web, wikileaks, government sites of nations known to engage
in cyber attacks, and many other data leaks.
MAIN FEATURES INCLUDE:

 Search on email addresses or other contact details.


 Advanced search on domains and URLs.
 Search for IPs and CIDR ranges, with support for IPv4 and IPv6.
 Search for MAC addresses and IPFS Hashes.
 Search for financial data such as account numbers and credit card numbers
 Search for personally identifiable information
 Darknet: Tor and I2P
 Wikileaks & Cryptome
 Government sites of North Korea and Russia
 Public and Private Data Leaks
 Whois Data
 Dumpster: Everything else
 Public Web
BUILTWITH
BuiltWith maintains a large database of websites, which includes information on the
technology stacks used by each site. You can combine BuiltWith with security scanners to
identify specific vulnerabilities affecting a website.
MAIN FEATURES INCLUDE:

 Reporting on the content management system (CMS) in use by a website, its version, and plugins
currently in use.
 Reporting on other infrastructure components used by a website, such as a CDN.
 Providing a list of JavaScript and CSS libraries used by the website.
 Providing information about the web server running the website.
 Providing details of analytics and tracking tools deployed by a website.
SHODAN
Shodan is a security monitoring solution that makes it possible to search the deep web
and IoT networks. It makes it possible to discover any type of device connected to a network,
including servers, smart electronics devices, and webcams.
MAIN FEATURES INCLUDE:

 Easy to use search engine interface.


 Provides information on devices operating on protocols like HTTP, SSH, FTP, SNMP, Telnet,
RTSP, and IMAP.
 Results can be filtered and ordered by protocol, network ports, region, and operating system.
 Access to a huge range of connected devices, including home appliances and public utilities such
as traffic lights and water control systems.
HAVEIBEENPWNED
HaveIbeenPwned is a service that can be used directly by consumers who were impacted
by data breaches. It was developed by security researcher Troy Hunt.
MAIN FEATURES INCLUDE:

 Identifying if an individual email address was compromised in any historical breach.


 Checking accounts on popular services like LastFM, Kickstarter, WordPress.com, and LinkedIn
for exposure to past data breaches.
GOOGLE DORKING
Google dorking is not exactly a tool – it is a technique commonly used by security
professionals and hackers to identify exposed private data or security vulnerabilities via the
Google search engine.
Google has the world‘s largest database of Internet content, and it provides a range of
advanced search operators. Using these search operators it is possible to identify content that can
be useful to attackers.
Here are operators commonly used to perform Google Dorking (example queries follow the list):

 Filetype – enables finding exposed files with a file type that can be exploited
 Ext – similarly, finds exposed files with specific extensions that can be useful in attack (for
example .log)
 Intitle/inurl – looks for sensitive information in a document title or URL. For example, any
URL containing the term "admin" could be useful to an attacker.
 Quotes – the quote operator enables searching for a specific string. Attackers can search for a
variety of strings that indicate common server issues or other vulnerabilities.
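As an illustration, these operators can be combined into queries like the following (example.com is only a placeholder domain and the patterns are merely typical examples):

site:example.com filetype:log
intitle:"index of" "backup"
inurl:admin site:example.com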
Open Source Investigation Best Practices
Here are best practices that can help you use OSINT more effectively for cyber defense.
DISTINGUISH BETWEEN DATA AND INTELLIGENCE
Open source data (OSD) is raw, unfiltered information available from public sources.
This is the input of OSINT, but in itself, it is not useful. Open source intelligence (OSINT) is a
structured, packaged form of OSD which can be used for security activity.
To successfully practice OSINT, you should not focus on collecting as much data as
possible. Focus on identifying the data needed for a specific investigation, and refine your search
to retrieve only the relevant information. This will let you derive useful insights at lower cost and
with less effort.
CONSIDER COMPLIANCE REQUIREMENTS
Most organizations are covered by the General Data Protection Regulation (GDPR) or
other privacy regulations. OSINT very commonly collects personal data, which can be defined
as personally identifiable information (PII). Collecting, storing, and processing this data can
create a compliance risk for your organization.
In addition, if you discover criminal intent in an OSINT investigation, there may be
specific legal requirements for exposing this data. For example, in the UK, exposing information
that can tip off an individual under investigation for money laundering can lead to unlimited
fines and prison time.
BE ETHICAL
OSINT relies on publicly accessible data, but the use of this data can impact people, both
in your organization and outside it. When you collect data, do not only consider your
investigative needs, but also the ethical and regulatory impact of the data. Limit data collection
to a minimum that can help you meet your goals without violating the rights of employees or
others.
Letting technology collect data or scan systems "on autopilot" will often result in
unethical or illegal data collection. A key part of ethical OSINT is to ensure data collection is
controlled by humans, with effective collaboration between all stakeholders. Everyone involved
in the OSINT project should understand ethical and legal constraints, and should work together
to avoid privacy issues and other ethical concerns.
Imperva Application Protection Powered by Threat Intelligence
Imperva provides comprehensive protection for applications, APIs, and microservices,
which builds on multiple threat intelligence sources including OSINT:
Web Application Firewall – Prevent attacks with world-class analysis of web traffic to your
applications.
Runtime Application Self-Protection (RASP) – Real-time attack detection and prevention
from your application runtime environment goes wherever your applications go. Stop external
attacks and injections and reduce your vulnerability backlog.
API Security – Automated API protection ensures your API endpoints are protected as they are
published, shielding your applications from exploitation.
Advanced Bot Protection – Prevent business logic attacks from all access points – websites,
mobile apps and APIs. Gain seamless visibility and control over bot traffic to stop online fraud
through account takeover or competitive price scraping.
DDoS Protection – Block attack traffic at the edge to ensure business continuity with guaranteed
uptime and no performance impact. Secure your on premises or cloud-based assets – whether
you‘re hosted in AWS, Microsoft Azure, or Google Public Cloud.
Attack Analytics – Ensures complete visibility with machine learning and domain expertise
across the application security stack to reveal patterns in the noise and detect application attacks,
enabling you to isolate and prevent attack campaigns.
Client-Side Protection – Gain visibility and control over third-party JavaScript code to reduce
the risk of supply chain fraud, prevent data breaches, and client-side attacks.

NETCRAFT TOOL
Netcraft is a UK-based company that monitors almost every website and offers a number of
security services. It reports the technologies used by any website, on both the server and client
side, and is a very important tool for the vulnerability assessment of any website.
STEP 1
Open Google Chrome and type netcraft.com.
A number of security services are provided, including:

What's that site running?


STEP 2
Using results from Netcraft's internet data mining, find out the technologies and infrastructure used by
any site: paste the URL of the website to see its underlying technologies.
STEP 3
We are going to use another website.
Copy the URL and paste it into the netcraft.com tool.
Type: https://www.spcet.ac.in
Collect information about the underlying technologies.
This is the information gathering process;
here all the information about the particular site is shown.
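The other OSINT sources named in the aim can be queried from a Kali terminal in the same spirit. The commands below are only a minimal sketch: example.com is a placeholder domain, and the exact theHarvester command name and available data sources depend on the installed version. Maltego is covered under the OSINT tools section above.

whois example.com                     # WHOIS lookup: registrant, registrar and name server details
dig example.com ANY                   # DNS reconnaissance: query the domain's DNS records
dnsenum example.com                   # enumerate name servers and MX records, attempt zone transfers
theHarvester -d example.com -b all    # harvest e-mail addresses and hostnames from public sources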
4. UNDERSTAND THE NMAP COMMAND AND SCAN A TARGET
USING NMAP

AIM:
To understand the nmap command and scan a target using nmap.

STEPS:

STEP 1

To Install nmap

Open Terminal

$ sudo apt-get install nmap

STEP 2

$ sudo nmap --help

This command will show all the flags available in the nmap tool and the nmap commands.
STEP 3

How to scan network
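(The manual shows no command for this step; as a sketch, a whole network can be scanned by giving its range in CIDR notation, assuming the lab network is 192.168.1.0/24.)

$ sudo nmap 192.168.1.0/24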

STEP 4

To scan ip address
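(As a sketch, a single host is scanned by giving its IP address; 192.168.1.1 below is only an example.)

$ sudo nmap 192.168.1.1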

STEP5

To scan ip address with range

$ sudo nmap -F 192.168.1.1-30


---Continue

STEP 6

To scan a list of IP addresses, first create a new file:

$ nano lists.txt

and enter the addresses:

192.168.1.1

192.168.1.2

192.168.1.3

Ctrl+x

Press Y

$ sudo nmap -iL lists.txt

Press enter to start the scan.

Step7:

How to scan particular port


To scan particular port

$ sudo nmap -p 22 www.google.com

$ sudo nmap -p 22-25 www.google.com

Next

If you want to search by port (service) name:

$ sudo nmap -p http www.google.com


Next

To scan all the ports of a particular domain:

$ sudo nmap -p- www.google.com

All the ports available on the particular domain will be shown.

Step8:

Aggressive scan

$ sudo nmap -A www.google.com

It will show a lot of results and information.


Step 9:

Traceroute

$ sudo nmap --traceroute <domain name>

$ sudo nmap --traceroute facebook.com


5. INSTALL METASPLOITABLE2 ON THE VIRTUAL BOX AND
SEARCH FOR UNPATCHED VULNERABILITIES

AIM:
To install Metasploitable2 on VirtualBox and search for unpatched vulnerabilities.

INTRODUCTION:
Metasploitable2 VM is an ideal virtual machine for computer security training, but it is
not recommended as a base system. Metasploitable2 offers the researcher several opportunities
to use the Metasploit framework to practice penetration testing.

It is an intentionally vulnerable Linux machine for security training and penetration testing.

STEPS:
Step1: Go to the browser, type "metasploitable 2" and open the first link.

Step2: Download the Metasploitable2 setup.

Step3: Extract the downloaded Metasploitable2 file, then open VirtualBox and create a new
machine.

Step4: Select Debian (64-bit) and install.


Step6: Go and select the software.

Step7: Select the file and click Choose …


Step8: Then click on Create.

Since Metasploitable2 is a vulnerable Linux machine, ensure that it is not accessible
from outside the host by connecting it to a host-only adapter.

Step9: Go to settings and network


Step10: Now we will click on Start.
Step11: Log in at the console with the default credentials; the Metasploitable2 installation is done.

metasploitable login: msfadmin

password: msfadmin
Step12: Let us see the IP address of the machine.

Step13: We have successfully installed Metasploitable2 on the virtual machine.


Step14: The IP address is given here. Kali Linux is also installed on VirtualBox; let us check whether we can
access the Metasploitable2 machine from Kali Linux.

Step15: Then go to the browser in Kali Linux.

Step16: Enter the IP address of the Metasploitable2 machine.


Step17: We are able to access Metasploitable2 from Kali Linux, so the installation is fine; several vulnerable web
applications are available on Metasploitable2.

Step18: These are the vulnerable web applications.

Step19: The application will be used for SQL injection; go to OWASP.


Step20: A particular form will be used for the SQL injection attack.
Step21: Click on view account details

Step22: The web application is not working fine.

Step23: To fix this problem we have to perform certain steps; go to the Metasploitable2 window.
Step24: Run the following commands.

Step25: Enter the ls command to list the files.


Step26: Change the database name.
Step27:

Exit.

After performing the changes, restart Apache by entering this command:

sudo /etc/init.d/apache2 restart

Step28: Done with the process; go to Kali Linux and check the particular error again.
Step29: It now properly says "Authentication Error".

Step30: This error shows the application is vulnerable, so we can perform a SQL injection attack.

Result:
Thus Metasploitable2 was installed on VirtualBox and the search for unpatched
vulnerabilities was executed successfully.
