Module 2.1

09 Vulnerability Scanning Methods and Concepts
In this chapter you will learn:

■ The importance of asset identification and discovery

■ The industry frameworks that address vulnerability management

■ The types of nontraditional IT that make up critical infrastructure

■ The various considerations in vulnerability identification and scanning
9.1 Asset Discovery
• You cannot protect what you don’t know you have.
• Inventorying assets is a critical aspect of managing vulnerabilities in your information
systems.
• An asset is anything of value to an organization, regardless of whether that value is
expressed in quantifiable terms or qualitative terms.
• Apart from hardware and software, assets include people, partners, equipment,
facilities, reputation, and information.
Asset Mapping Scans and Fingerprinting
• You cannot protect what you don’t know you have.
• Identifying assets is foundational to most security programs, feeding processes such
as asset management, vulnerability identification, and risk management.
• One of the most efficient ways to discover and identify assets is through scanning
them on the network.
• Not every host will respond to every type of network traffic sent to it.
• Fingerprinting is the process of gathering detailed profiles of systems in order to
identify potential threats and weaknesses.
• Discovery scans are also a good way to detect rogue or unauthorized devices
connected to the network.
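As a small illustration of fingerprinting, the sketch below grabs whatever banner a service announces on connect. Many services (SSH, SMTP, FTP) identify themselves in their first bytes; hosts and ports here are placeholders, and real discovery tools combine many such probes with an asset inventory.

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 2.0):
    """Connect to a service and return the banner it announces, if any.

    Reading a service's self-identification string is one simple form
    of fingerprinting.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None  # closed port, filtered traffic, or no banner in time
```

A real asset-discovery run would sweep a whole address range and record each banner against an inventory entry.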
9.2 Industry Frameworks
• Vulnerability Management is also often required by governance, such as laws,
regulations, industry and professional standards, and internal organizational security
policy.
• To assist you in meeting governance requirements with regard to vulnerability
management, several industry and professional frameworks, control sets, and so on
are available to provide more detailed guidance in this area.
Payment Card Industry Data Security
Standard
• The PCI DSS applies to any organization involved in processing credit card payments
using cards branded by the five major issuers: Visa, MasterCard, American Express,
Discover, and JCB.
• PCI DSS is a set of security standards intended to ensure that all companies that
process, store, transmit, or impact the security of cardholder data maintain a secure
environment.
• The PCI DSS requirements are updated periodically; the standard is at version 4.0
as of March 2022.
• Requirement 11 of the PCI DSS deals with the obligation to “test security of systems
and networks regularly.”
Center for Internet Security Controls
• The Center for Internet Security (CIS) is a nonprofit organization that, among other
things, maintains a list of 18 critical security controls designed to mitigate the threat
of the majority of common cyberattacks.
• Under each of these 18 control areas are supporting subordinate controls, called
safeguards, that are more granular and provide details on individual activities relevant
to that group.
• Of special interest in this chapter is Control 7 of the CIS (Continuous Vulnerability
Management).
• Safeguards are assigned to three Implementation Groups (IGs), which describe
organizations based on size and resources: IG1, IG2, and IG3.
Open Web Application Security Project
• The Open Web Application Security Project (OWASP) is an organization that deals
specifically with web security issues.
• It is probably best known for its list of top ten web application security risks
(known as the OWASP Top Ten).
ISO/IEC 27000 Series
• The International Organization for Standardization (ISO) and the International
Electrotechnical Commission (IEC) 27000 series serves as industry best practices for
the management of security controls in a holistic manner within organizations around
the world.
• One of these ISO/IEC standards that is particularly relevant to our discussion is 27001.
ISO/IEC 27001 applies to any organization, regardless of size, that wants to formalize
its security activities through the creation of an information security management
system (ISMS).
• ISO/IEC 27001:2013 has 114 controls organized into 14 domains; the 2022 revision reorganizes them into 93 controls across 4 themes.
9.3 Critical Infrastructure
• Critical infrastructure is the broad name given to the set of technologies used
throughout an entire country that run its core services, such as power, heating, water
treatment, refineries, transportation systems, medical systems, and so on.
• Nontraditional technologies are those that historically were electromechanical in
nature and did not include IT components in them.
• In modern times, the technologies used to connect to and control the electrical
components and machinery that run critical services have been merged with
traditional IT: they now have CPUs, memory, hard drives, and operating systems, and
are even connected to each other via IP-based networks or the larger Internet.
• These systems are now popularly referred to as cyber-physical systems.
Industrial Control Systems and Operational Technology
• Industrial control systems (ICSs) are cyber-physical systems that enable specialized
software to control their physical behaviors.
Operational Technology
• The three lower levels of the architecture we described earlier are known as the
operational technology (OT) network. The OT network was traditionally isolated from
the IT network that inhabits levels 3 and 4 of the architecture.
• Much of the software that runs an ICS is burned into the firmware of devices such as
programmable logic controllers (PLCs). A popular attack against PLCs happened in
2010 with the Stuxnet worm.
• This is a source of vulnerabilities because updating the ICS software cannot normally
be done automatically or even centrally.
• Another common vulnerability with ICSs is the lack of authentication methods in older
devices.
Supervisory Control and Data Acquisition Systems
• A SCADA system is a specific type of ICS characterized by its ability to monitor and
control devices throughout large geographic regions.
• SCADA is most commonly associated with energy (for example, petroleum or power)
and utility (for example, water or sewer) applications.
9.4 Vulnerability Identification and
Scanning
• Vulnerability scanning is the practice of automating security checks against your
systems.
• Many vulnerability scanners do a tremendous job of identifying weaknesses, but they
are often single-purpose tools.
• Additionally, scanning tools will usually not be able to perform any type of advanced
correlation on their own, and you’ll likely require additional tools and processes to
determine the overall risk of operating the network.
Passive vs. Active Scanning
• Scanning is a method used to get more details about the target network or device by
poking around and taking note of the target’s responses.
• Scanners generally come in three flavors: network mappers, host (or port) scanners,
and web app vulnerability scanners.
• Passive scanning refers to the technique of scanning a host without triggering any
intrusion detection alerts.
• Active scanning refers to intentionally sending specifically crafted traffic to a host to
elicit a particular response.
Mapping/Enumeration
• The goal of network mapping is to understand the topology of the network, including
perimeter networks, demilitarized zones, and key network devices.
• The process used during network mapping is referred to as topology discovery.
• A popular tool for this is Network Mapper, more commonly referred to as nmap.
Port Scanning
• Port scanners are programs designed to probe a host or server to determine what
ports are open.
• This method of enumerating the various services a host offers is one means of service
discovery.
• It enables an attacker to add details to the broad strokes by getting insight into what
services are running on a target.
• OS fingerprinting is not an exact science. You should not conclude that a host is
running a given OS simply because the scanner identified it as such.
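A minimal TCP connect scanner conveys the core idea; real tools such as nmap add SYN scanning, timing control, and service probes. This is only a sketch, and note that it is an active technique: every probe is visible to intrusion detection systems.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful handshake instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Each open port found this way is a lead for service discovery, not a conclusion; what is actually listening still has to be verified.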
Scanning Parameters and Criteria
• When configuring scanning tools, you have a host of different considerations:
Schedule
Scope
Framework Requirements
Vulnerability Feeds
Technical Constraints
Workflow
Tool Updates and Plug-Ins
SCAP
Types of Vulnerability Scans
• Noncredentialed and Credentialed Scans
Noncredentialed scans look at systems from the perspective of the attacker but
are not as thorough as credentialed scans because there is a great deal of information
that can only be obtained from a system using the appropriate credentials.
• Agent-Based vs. Agentless Scanning
While both agent-based and agentless scans are suitable for determining patch
levels and missing updates, agent-based vulnerability scans are typically better for
scanning mobile devices.
• Internal vs. External Scanning
• Web App Vulnerability Scanning
A web app vulnerability scanner is an automated tool that specifically scans web
applications to determine security vulnerabilities.
Special Considerations for Vulnerability Scans
• Some special considerations apply to the entire infrastructure and should be carefully
planned when you're implementing a vulnerability scanning and management
strategy:
Asset Criticality
Operations
Ongoing Scanning and Continuous Monitoring
Performance
Sensitivity Levels
Network and Security Device Considerations
Regulatory Requirements (Regulatory environment)
Security Baseline Scanning
9.5 Risks Associated with Scanning
Activities
• Risks introduced by vulnerability scanning include network latency due to high
bandwidth usage, the possibility of interrupting business operations by using up
bandwidth, and scans temporarily disabling critical devices, causing slowdowns or
reboots.
• Additionally, there is a risk of not properly configuring vulnerability scans to capture all
the relevant vulnerability information necessary to make risk-based decisions on
vulnerability mitigation.
• Every security decision—including how, when, and where to conduct vulnerability
assessments—must consider the risk implications of activities on the core business of
the organization.
Generating Vulnerability Management
Reports
• Report generation is an important part of the incident response process and is
particularly critical for vulnerability management.
• Getting the pertinent information to the right people in a timely fashion is the key to
capitalizing successfully on vulnerability scans.
• It is important that the right permissions be set on reports.
Types of Data Included in Reports
• Vulnerability data is very sensitive and should only be reviewed by authorized
technical and management personnel. Ensure that your vulnerability scan reports are
only accessed by people who have a valid need for that information.
• Automated vs. Manual Report Distribution
• Creating reporting templates enables you to rapidly prepare customized reports based
on vulnerability scan results, which can then be forwarded to the necessary points of
contact.
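A minimal sketch of template-driven reporting using only the standard library; the template fields here are invented for illustration, and a real program would tailor them per audience and feed them from scanner output.

```python
from string import Template

# Hypothetical summary template; real reports carry far more detail.
SUMMARY_TEMPLATE = Template(
    "Vulnerability Scan Report: $scan_name\n"
    "Host scanned: $host\n"
    "Critical findings: $critical\n"
    "Distribution: authorized personnel only\n"
)

def render_summary(scan_name: str, host: str, critical: int) -> str:
    """Fill the template so reports stay consistent across scans."""
    return SUMMARY_TEMPLATE.substitute(
        scan_name=scan_name, host=host, critical=critical)
```

Keeping the template separate from the data is what makes it fast to produce a customized report for each point of contact.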
9.6 Software Vulnerability Assessment Tools and Techniques
• Software vulnerabilities are part of the overall attack surface, and attackers waste no
time discovering what flaws exist.
• Software vulnerability detection methods usually fall into the categories of static
analysis, dynamic analysis, reverse engineering, and fuzzing.
Static Analysis
• Static code analysis is a technique meant to help identify software defects or security
policy violations. It is carried out by examining the code without executing the
program.
• The term static analysis is generally reserved for automated tools that assist analysts
and developers, whereas manual inspection by humans is generally referred to as
code review.
• Static analysis enables developers and security staff to scan their source code quickly
for programming flaws and vulnerabilities.
• Static code analysis cannot usually reveal logic errors or vulnerabilities (that is,
behaviors that are evident only at runtime) and therefore should be used in
conjunction with manual code review to ensure a more thorough evaluation.
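A toy static check built on Python's `ast` module shows the principle: the source is parsed and inspected without ever being executed. Real analyzers track data flow and match far more patterns than this single hypothetical rule.

```python
import ast

# Illustrative rule: calls to these builtins are flagged for review.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str):
    """Walk the parsed AST and report (line, name) for risky calls.

    The program is never run; only its structure is examined.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings
```

Because nothing executes, the check is fast and safe to run on untrusted code, but it cannot see behavior that only emerges at runtime.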
Dynamic Analysis
• Dynamic analysis doesn’t really care about what the binary is but rather what the
binary does.
• This method often requires a sandbox in which to execute the software.
• The main advantage of dynamic analysis is that it tends to be significantly faster and
requires less expertise than alternatives. It can be particularly helpful for code that has
been heavily obfuscated or is difficult to interpret.
• Note that this method is also used to examine suspected malware in a protected
environment.
• Static analysis only involves automated and manual reviews of the code itself, without
executing it. Dynamic analysis requires code execution and is more concerned with the
behavior of the code while it is running.
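The essence of dynamic analysis is observing behavior at runtime in a contained environment. The sketch below only isolates execution in a child interpreter process, which is not a real sandbox; actual malware analysis uses VMs, containers, or similar confinement.

```python
import subprocess
import sys

def observe(code: str, timeout: float = 5.0) -> dict:
    """Execute a snippet in a separate interpreter and record its behavior.

    NOTE: a child process is NOT a sandbox; this merely keeps the
    observed code out of the analyst's own interpreter.
    """
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return {"returncode": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr}
```

The analyst cares only about the recorded outputs and exit status, what the code *did*, not what its source or binary *is*.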
Reverse Engineering
• Reverse engineering is the detailed examination of a product to learn what it does and
how it works.
• A highly skilled analyst will either disassemble or decompile the binary code to
translate its 1’s and 0’s into assembly language or whichever higher level language
the code was created in.
• This enables a reverse engineer to see all possible functions of the software, not just
those exhibited during a limited run in a sandbox.
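Python's own `dis` module offers a small analog of this: disassembling a function's bytecode exposes every operation it can perform, including embedded constants, whether or not a given run exercises them. The function below is a toy stand-in for a binary under analysis.

```python
import dis

def check_license(key: str) -> bool:
    """Toy function whose internals we inspect without running it."""
    return key == "SECRET-1234"

# Disassembly lists every bytecode instruction, much as a disassembler
# reveals the logic inside a native binary.
instructions = list(dis.get_instructions(check_license))
opnames = [ins.opname for ins in instructions]
constants = [ins.argval for ins in instructions if ins.opname == "LOAD_CONST"]
```

Note that the comparison constant surfaces directly in the disassembly; this is exactly how reverse engineering exposes paths a sandboxed run might never reach.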
Fuzzing
• Fuzzing is a technique used to discover flaws and vulnerabilities in software by sending
large amounts of malformed, unexpected, or random data to the target program in
order to trigger failures.
• Fuzzing tools are commonly successful at identifying buffer overflows, denial-of-service
(DoS) vulnerabilities, injection weaknesses, validation flaws, and other activities that
can cause software to freeze, crash, or throw unexpected errors.
• Both reverse engineering and fuzzing are highly specialized techniques and typically
require advanced skill sets and software tools.
• Common examples of fuzzing tools include untidy, Peach Fuzzer, and the Microsoft
SDL fuzzers (MiniFuzz File Fuzzer and the Regex Fuzzer).
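The core loop of a fuzzer fits in a few lines: generate malformed input, feed it to the target, and record crashes. The parser here is a deliberately buggy stand-in; real fuzzers add input mutation and coverage feedback.

```python
import random

def naive_parser(data: bytes) -> str:
    """Toy parser with a planted flaw: it indexes without a length check."""
    if data[0] == 0xFF:          # IndexError on empty input
        return "header"
    return "body"

def fuzz(target, iterations: int = 200, seed: int = 42):
    """Feed short random byte strings to `target`, collecting crashes."""
    rng = random.Random(seed)    # fixed seed keeps runs reproducible
    crashes = []
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(4)))
        try:
            target(blob)
        except Exception as exc:
            crashes.append((blob, type(exc).__name__))
    return crashes
```

Even this naive loop finds the missing-length-check flaw quickly, which is why fuzzing is so effective against input-handling code.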
