Attachment 1

A SIEM tool allows an organization to centralize security logs and events from across its IT infrastructure for real-time monitoring, report generation, and forensic investigation. It collects data from applications, operating systems, firewalls, and other systems to provide visibility into security threats and issues. A SIEM correlates events to detect threats and vulnerabilities, helping organizations comply with security standards and regulations.

Introduction to SIEM: logs and events

SIEMs are widely deployed tools in large organizations, allowing them to monitor all the information produced by their devices. It is important to understand the following concepts:

IP address
An IP address is a numerical label that logically and hierarchically identifies a physical interface of a device within a network that uses the Internet Protocol (IP), which corresponds to the network layer of the OSI model. It should not be confused with the MAC address, a 48-bit identifier that uniquely identifies the network card and does not depend on the connection protocol or the network.
Web server
A web server, or HTTP server, is a program that runs the server side of an application, maintaining bidirectional connections with the client and generating a response for the client side. The code received by the client is usually interpreted and executed by a web browser. The TCP protocol is normally used to transport this data, and the HTTP protocol, which belongs to the application layer of the OSI model, is used to interpret it.
Syslog
Syslog is used to send log messages over an IP network. The name refers both to the network protocol and to the application or library that sends the log messages. A log message usually contains information about the security of the system, although it may contain any information. The date and time of sending are included with each message.
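As a minimal sketch of how an application emits such messages, Python's standard library includes a syslog handler. The collector address (127.0.0.1:514) and the logger name are placeholder assumptions, not taken from the text:

```python
import logging
import logging.handlers

# Minimal sketch: an application forwarding its log messages to a
# syslog collector over UDP. The address 127.0.0.1:514 is a
# placeholder assumption for the collector.
logger = logging.getLogger("demo-app")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
logger.addHandler(handler)

# Each call becomes one syslog message; the handler prepends the
# syslog priority field to the text.
logger.warning("failed login for user alice from 198.51.100.7")
```

In a real deployment, the address would point at the SIEM collector and the transport could be TCP or TLS instead of plain UDP.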
III. Why implement a SIEM?
The volume of data generated by the systems of today's organizations is considerable. This data serves as a source of information for monitoring any event that occurs in the organization. Adequate control of this data helps guarantee legal and compliance obligations and address IT security risks.

Network map of an organization

Real-time monitoring
The need for real-time monitoring of every device on the network arises from the growing threats to business continuity. Organizations must be able to analyze security events thoroughly and generate detailed reports to comply with legal obligations.

Centralizing security events gives an organization access to events both from devices it cannot reach directly and from those it can.

Security
When an organization has a centralized event system, it gains global visibility of its security posture. Centralization also reduces the complexity of managing events in a heterogeneous IT infrastructure.
Any of the three tools described later (SEM, SIM, or SIEM) facilitates the centralization of security events, dashboards, and detailed reports. There is also an indirect benefit: collecting logs in real time ensures they cannot be manipulated at the source. Another benefit is that security operations are managed efficiently and effectively thanks to the correlation of security events.

Data retention is one of the problems that an event centralization system resolves. What data should be retained, and how? Logs required by any applicable regulation or compliance framework will be collected and stored.

Information
From these events we can extract important information, such as system errors that can later be corrected.

In case of an attack, the centralization of events facilitates forensic investigations by keeping all the necessary logs in one place. In an internal investigation, these logs may show corporate users trying to access restricted sites, authenticating with a colleague's credentials, and so on.

The logs of databases, applications, operating systems, and VPN connections are used in a forensic analysis to discover the real origin of the incident.

IV. What is a log and what is an event?

A log is a record of information from a given system at a given time. IT professionals use logs to store information and answer the who, how, when, where, and why of an activity on a particular device or application.

The previous image is a capture of a log produced by an organization's Apache web server. You can see the requests made from the same IP, 88.198.122.53.

This IP is scanning the entire server in search of a known resource from which it can obtain information about the company. The log reports several things: the IP from which the scan originates, the date of the scan, the resource requested, and the code returned by the server in response to the request for that resource.
Parsed events of the Apache log
Once a log is parsed and normalized into a common format, it is considered an event. Parsed logs, or events, make the information easier to work with when a centralized event management tool is deployed.

The above image is a snapshot of the Apache Log Viewer application, which parses the log and displays the desired events.
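The parsing and normalization step can be sketched in Python. The regular expression, the field names, and the sample line are illustrative assumptions modeled on the Apache combined log format and the scan described above:

```python
import re

# Regex for an Apache combined-format log line. Named groups become
# the normalized fields of the event: client IP, date, request,
# status code, and response size.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<date>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<resource>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def parse_line(line):
    """Normalize one raw log line into an event (a dict of fields)."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

# Hypothetical line modeled on the scan described in the text.
raw = ('88.198.122.53 - - [10/Oct/2017:13:55:36 +0200] '
       '"GET /phpmyadmin HTTP/1.1" 404 209')
event = parse_line(raw)
```

Once every line is reduced to the same dictionary of fields, events from different sources can be stored, searched, and correlated uniformly.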
V. Differences between SEM, SIM and SIEM
Today there are three types of tools defined within event management systems, all of which allow events to be centralized.

• SEM (Security Event Management).
A SEM is a tool or application capable of managing security events generated by security systems such as IDS, IPS, and firewalls. The sheer number of events generated by these systems makes manual management impractical. Centralizing logs and events, together with correlation tools, simplifies the management of this information, eliminating unnecessary records and prioritizing the rest.

Centralization facilitates the reduction of events, the elimination of false positives, and the prioritization of alerts.

• SIM (Security Information Management).
A SIM manages the information logs of all systems, offering centralized control over the state of the business processes. The SIM basically collects the logs generated by applications and operating systems. The level of control a SIM provides depends directly on the records those applications and operating systems are capable of generating.

• SIEM (Security Information and Event Management).
A SIEM is the combination of the two tools above. It collects information both from the security systems (SEM) and from the applications and operating systems (SIM), so we can say that a SIEM is capable of handling all the events generated in the business process.

SIEM architecture
To become familiar with the SIEM, study the general view of a typical SIEM architecture. The collector gathers logs either by polling devices and applications or by receiving the logs those devices and applications send. Once collected, a log is stored in a database as temporary storage. Through the console, users can access the information that has been stored.
Collector
There are three ways to collect the logs in an organization:

• In real time.
• On a schedule.
• By installing an agent on the device from which the log is obtained.

Real-time collection makes logs available immediately. It is based on event generators such as syslog or SNMP: when a device generates a log, it sends it to the SIEM collector straight away.
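A minimal sketch of this real-time path, with one UDP socket playing the SIEM collector: the port (5514, an unprivileged stand-in for the standard syslog port 514) and the message text are illustrative assumptions.

```python
import socket

# Minimal sketch of real-time collection: a UDP socket receives
# syslog-style datagrams the moment a device emits them. Port 5514
# is an unprivileged placeholder for the standard syslog port 514.
def make_collector(host="127.0.0.1", port=5514):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5)  # do not block forever in this sketch
    sock.bind((host, port))
    return sock

def receive_log(sock, bufsize=4096):
    """Wait for one log datagram; return (message, sender address)."""
    data, addr = sock.recvfrom(bufsize)
    return data.decode("utf-8", errors="replace"), addr

collector = make_collector()

# Device side: generating a log is a single sendto(), so the
# collector sees it immediately.
device = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
device.sendto(b"<134>fw01: deny tcp 203.0.113.5 -> 10.0.0.2:445",
              ("127.0.0.1", 5514))

message, sender = receive_log(collector)
device.close()
collector.close()
```

A production collector would of course loop over incoming datagrams, parse the syslog priority, and hand each event to the normalization stage.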

Scheduled collection, however, is more complicated. It requires the SIEM to poll the devices for their logs at set intervals. There are many ways to obtain the logs, for example file exchange over FTP, queries over HTTP, or queries against a database.

Installing an agent on a particular device or application allows it to be monitored; depending on the information to be collected, the agent sends the log to the SIEM.

Storage
The main storage objectives are:

• Store all the information received from the collector.
• Sort the information received by date, machine, and system or application.

Storage holds both logs and events (normalized logs). This information is exchanged with the application server to create alerts, dashboards, and reports, and to correlate events.
How long should logs be stored?

An organization must comply with a variety of regulations and policies, so the storage limit must be established according to the regulation to be met. For example, PCI DSS (Payment Card Industry Data Security Standard) requires storing one year of logs from all the devices and applications involved in card payments.
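A retention rule like the PCI DSS one above can be sketched as a simple cutoff check. The 365-day window comes from the text; the event representation as (timestamp, line) pairs is purely illustrative:

```python
from datetime import datetime, timedelta

# Sketch of a retention check: events older than the retention
# window become candidates for archiving or deletion. 365 days
# matches the PCI DSS requirement described in the text.
RETENTION = timedelta(days=365)

def split_by_retention(events, now):
    """Partition (timestamp, line) pairs into kept and expired."""
    keep, expired = [], []
    for ts, line in events:
        (keep if now - ts <= RETENTION else expired).append((ts, line))
    return keep, expired

now = datetime(2018, 6, 1)
events = [
    (datetime(2018, 5, 20), "card payment accepted"),
    (datetime(2016, 1, 15), "old debug entry"),
]
keep, expired = split_by_retention(events, now)
```

In practice the SIEM applies this per source, since different regulations may impose different windows on different devices.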
Console
The console acts as the interface between the centralized event system and the SIEM operator. It is usually a web application where alerts and the generated reports can be viewed. It also offers the ability to correlate events.

VII. Security in logs

It is important, of course, to implement appropriate security measures for the storage of logs, for the transmission of logs from the source device to the SIEM, and for access to the SIEM itself. The data must remain intact so that no one can manipulate it, and its confidentiality must be guaranteed so that it cannot be accessed by unauthorized parties. Appropriate methods and protocols should therefore be used.

VIII. Advantages and disadvantages of a SIEM

SIEM advantages
A well-configured and well-implemented SIEM can save a great deal of time and money during the investigation of a security incident.

A SIEM can consolidate the logs of multiple devices and products in a central location, automatically checking the events against a series of previously defined alerts. Because this is centralized, the time one or more analysts would otherwise need to perform these functions manually is reduced.

A SIEM can also generate reports and dashboards, showing the state the organization is in, which makes it possible to improve its security.
SIEM disadvantages

False sense of security
If a SIEM is running and everything appears normal to its operators, they may assume that everything on the network is in order. SIEM operators must verify that the SIEM itself is working properly: logs are being received from all devices and sources, alerts fire, and dashboards display the desired information.

Storage
A SIEM can store any type of log from any source or device. However, all the information collected and analyzed by the SIEM requires storage to grow as new types of logs are added.

The computing capacity of the machine where the SIEM is deployed also matters, since it would make no sense for an alert about improper access to a company resource to be raised late.
Logging

As mentioned in the previous unit, the logs of a device or source can be sent or obtained in three possible ways so that the SIEM can process them. We need to know the following concepts:

Firewall
A firewall is a part of a system or network designed to block unauthorized access while allowing authorized communications. It is a device that can allow, limit, encrypt, or decrypt traffic between different devices on the network.

DMZ
A demilitarized zone, or DMZ, is a secure area located between an organization's internal network and an external network, usually the Internet. The goal of a DMZ is that connections from both the internal network and the external network into the DMZ are allowed, while connections originating in the DMZ are generally allowed only toward the external network.

Real-time collection
Real-time collection makes logs available immediately. It is based on event generators such as syslog or SNMP: when a device generates a log, it sends it to the SIEM collector straight away.

Scheduled collection
With scheduled collection, the SIEM polls the devices for their logs at set intervals. There are many ways to obtain the logs, for example file exchange over FTP, queries over HTTP, or queries against a database.

Agent installation
Installing an agent allows a device to be monitored; depending on the information to be collected, the agent sends the log to the SIEM. This type of collection also allows logs to be sent in real time.

III. Network diagram of the organization
Imagine the following network map of an organization, with the need to monitor the data traffic passing through the firewall, as well as all the logs produced on a Windows server.

There is also another firewall that delimits the DMZ (demilitarized zone) to control access from the office network.
In this situation we need the SIEM to collect all the logs from the two Cisco firewalls and from the Windows Server 2008 R2 machine, together with the logs of its IIS 7.0 web server.
From the Microsoft Windows Server 2008 R2 operating system we will obtain the failed authentications of users authenticating against the server, the user accounts that have been created and deleted, and the successful authentications.
From the IIS (Internet Information Services) 7.0 web server we will obtain all user requests to the corporate website.
From the firewalls we will obtain all the allowed traffic and all the denied traffic.
As can be seen, the device types are highly heterogeneous, so it is very important to know how each source can be collected and sent to the SIEM.

IV. Windows operating system logs
Let's start with the Windows Server 2008 R2 operating system. Our goal is to know what to configure or install in order to obtain logs about user authentications, that is, events such as a user entering the wrong password too many times, being locked out, and being unable to access the system.
Finding the Administrative Tools in Windows Server 2008
Windows does not produce a plain log file by itself, but it provides an application for viewing the events that occurred in the system.

Click on "Start", then on "Administrative Tools":

Finding the Event Viewer in Windows Server 2008

Then click on "Event Viewer":

Classification of events in Windows Server 2008

Windows operating systems classify events into three main groups: Application, Security, and System.
Properties of event 4725 in Windows Server 2008
Our goal is to identify the locking of a user account due to incorrect authentications. This event occurs in the Security group and has Event ID 4725.

With this type of event we can identify possible attempts against a user's account. If more than three failed authentications occur, the user account is locked. If this behavior occurs many times, over a short period of time and across several user accounts, it indicates a brute-force attack on the users' accounts.

Event ID 4624 in Windows Server 2008

Next, we will identify the logins that occur on the server, that is, the successful authentications. They can be identified by Event ID 4624.

Both the Event ID and the date on which it occurs can be identified.

Successful authentications by themselves tell us nothing about security, but if we look at successful authentications outside working hours, we can identify suspicious behavior.
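The out-of-hours rule just described can be sketched as a simple filter over normalized events. The working hours (08:00 to 19:00) and the event dictionaries are illustrative assumptions; the 7:00 pm end of day appears later in the text:

```python
from datetime import datetime

# Sketch: Event ID 4624 (successful logon) is harmless by itself,
# but outside working hours it becomes suspicious. The 08:00-19:00
# schedule is an assumption for illustration.
WORK_START, WORK_END = 8, 19

def suspicious_logons(events):
    """Return 4624 events whose timestamp falls outside working hours."""
    return [e for e in events
            if e["event_id"] == 4624
            and not (WORK_START <= e["time"].hour < WORK_END)]

events = [
    {"event_id": 4624, "user": "alice", "time": datetime(2017, 3, 6, 10, 12)},
    {"event_id": 4624, "user": "bob",   "time": datetime(2017, 3, 6, 23, 41)},
]
alerts = suspicious_logons(events)
```

In a SIEM this kind of rule is expressed in the correlation engine rather than in code, but the logic is the same: a condition on the event field plus a condition on its timestamp.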

Properties of event 4720 in Windows Server 2008

Other Event IDs we should monitor are 4720, which indicates the creation of a user account, and 4726, which shows that an account has been deleted. These events may be normal during working hours, but outside that schedule they may identify suspicious behavior.

Properties of event 4726 in Windows Server 2008

Once the Event IDs we want to monitor have been identified, we must install an agent on the server that sends all the events of the Security category to the SIEM collector; we can then view them in the SIEM console.

Do not confuse the term event (a normalized log) with Windows events or Event IDs.

Windows does not leave a plain log file where we can examine what is happening; as we saw at the beginning, there is a tool for this, the Event Viewer. In order to send all these events to the SIEM, we need to install an agent on the server itself.

Most SIEMs have their own agents, allowing logs to be obtained in real time. In our case, we will use NXLOG as the agent to obtain the Windows security events and send them to the SIEM collector.

NXLOG
NXLOG can be installed as an agent on Windows, Linux, and Android systems. It also supports SSL to send events securely.

In order to send the logs to the collector, we need to download and install NXLOG from its website, http://nxlog.org/.
Once installed, we have to open the configuration file, which is located at:

C:\Program Files (x86)\nxlog\conf\nxlog.conf

Configuration file
We edit the configuration file to enable the Windows events module and to set the collector IP and port.
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.

#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log

<Extension json>
    Module xm_json
</Extension>

# Nxlog internal logs
<Input internal>
    Module im_internal
    Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>

# Windows Event Log
<Input eventlog>
    # Use im_msvistalog for Windows Vista / 2008 and later
    Module im_msvistalog
    # Use im_mseventlog for Windows XP / 2000 / 2003
    # Module im_mseventlog
    Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>

<Output out>
    Module om_tcp
    Host 192.168.1.126
    Port 3515
</Output>

<Route 1>
    Path internal, eventlog => out
</Route>

Saving the file

We save the configuration file and restart the nxlog service.

V. Logs of the IIS web server (Internet Information Services)
The function of this application is to serve web pages. Our intention is to obtain all the requests made by clients through a web browser (Firefox, Chrome, etc.). In this way, we can detect when an attack on the website is occurring. Keep in mind that the IIS server is listening on port 80 of the server.
The first thing to do is make sure that log generation is enabled in IIS and check where those logs are stored, since we will have to monitor them and send them to the collector through the installed agent, NXLOG.

IIS pane for viewing the logging options

To see the options related to log generation, open the Internet Information Services (IIS) Manager from "Administrative Tools", just as we did for the Windows events. In the tree on the left, select the site to configure under the "Sites" folder. Different icons will load in the panel on the right; double-click on "Logging".

In this window, we have to make sure that logging is enabled.

How to enable logging

If the text boxes are not editable, logging is not enabled. To enable it, click the "Enable" link in the "Actions" menu on the right.

Configuring the log format

Now we must configure the format, the path where the log files will be written, and the periodicity of file rotation. The rotation setting indicates how long records keep being written to the same log file. The best rotation option is 24 hours, giving one log file per day.

Log fields
The format is specified in the first panel, "Log file", where a dropdown offers the different format types. We choose W3C and click the adjacent "Select Fields" button. In the new window we see a list of the fields that will be written to the log. We must find and enable the fields Bytes Sent (sc-bytes), Bytes Received (cs-bytes), and Protocol Version (cs-version), since these fields are used by the source that will read the logs of this device. After accepting, the window closes.

NXLOG agent
Back on the previous screen, a little further down, we can specify where to store the log files. By default, they are saved under %SystemDrive%\inetpub\logs\LogFiles. This path is needed for the configuration of the NXLOG agent.

In order to send the logs to the collector, we need to download and install NXLOG from its website, http://nxlog.org/.
Once installed, we have to open the configuration file, which is located at:

C:\Program Files (x86)\nxlog\conf\nxlog.conf

Configuration file
We edit the configuration file to enable the IIS module, point it at the IIS log directory, and set the collector IP and port.

# Create the parse rule for IIS logs. You can copy these fields
# from the header of the IIS log file.
<Extension w3c>
    Module xm_csv
    Fields $date, $time, $s-ip, $cs-method, $cs-uri-stem, $cs-uri-query, \
           $s-port, $cs-username, $c-ip, $cs-User-Agent, $cs-Referer, \
           $sc-status, $sc-substatus, $sc-win32-status, $time-taken
    FieldTypes string, string, string, string, string, string, integer, \
               string, string, string, string, integer, integer, integer, \
               integer
    Delimiter ' '
    QuoteChar '"'
    EscapeControl FALSE
    UndefValue -
</Extension>

# Convert the IIS logs to JSON and use the original event time
<Input IIS_Site1>
    Module im_file
    File "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\u_ex*"
    SavePos TRUE
    Exec if $raw_event =~ /^#/ drop();    \
         else                             \
         {                                \
             w3c->parse_csv();            \
             $SourceName = "IIS";         \
             $Message = to_json();        \
         }
</Input>

<Route IIS>
    Path IIS_Site1 => out
</Route>

Saving the file

Example of an IIS log in the \inetpub\logs\LogFiles folder

In this situation there will be two copies of the logs: those that remain on the server itself and those that are sent to the SIEM.
VI. Cisco firewall logs
Our intention is to obtain everything generated by the two Cisco firewalls in order to know what is happening. The firewall is the most important element of the organization, since it handles the data entering directly from the Internet. If an attacker gained control of any equipment in the organization, the evidence would appear in the firewall logs.
Syslog protocol
Cisco logs are integrated through the syslog protocol. Below is how to configure the device to forward its logs to the SIEM collector via syslog. The configuration is done using ASDM, the management application for Cisco firewalls and routers. It lets us configure the device graphically and takes care of converting the settings into commands executed on the firewall.

To configure syslog forwarding using ASDM:

• Go to Configuration -> Device Management -> Logging.
• In the "Logging Setup" section, check that the "Enable logging" and "Send debug messages as syslogs" boxes are checked, and that the "Send syslogs in EMBLEM format" box is unchecked.

Configuration of the "Logging Filters"
In the "Logging Filters" section, define the log level for the Syslog Servers destination. The recommended value is Informational or higher (up to Debugging).
Syslog Servers
In Device Management -> Logging -> Syslog Servers, add a new syslog server with the following parameters:
Interface: the firewall interface from which the SIEM collector is reachable.
IP Address: the IP address of the SIEM collector.
Protocol: the protocol over which the logs will be sent. We can choose between UDP and TCP, each with its pros and cons.
On the one hand, TCP gives us the assurance that all the logs will reach the collector. However, it opens and closes a connection for each log it sends, which can saturate the collector, leaving no free connections and effectively denying the service.
The other option is UDP. Its advantage is that, being connectionless, it avoids the collector saturation problem. On the other hand, it does not guarantee that the logs reach their destination, at the risk of losing information.
Port: the listening port of the SIEM collector. This parameter depends on the configuration of the SIEM.
Do not check the "Log messages in Cisco EMBLEM format" box.

The "Syslog Setup" section

In the "Syslog Setup" section, disable the sending of timestamps in the syslog messages. The format of the logs will be as follows:
Monitoring
Once the device is configured to send logs to the SIEM collector, our intention is to monitor the firewall to identify the following situations:

• All accepted traffic and all denied traffic.
• Host scanning: reveals how many machines exist on the organization's network.
• Port scanning: reveals which ports are open on a machine. This can also disclose the type of service and even its version. For example, an open port 80 would indicate a running web server, and an open port 21 an FTP server.

Being able to identify these situations in real time can be fundamental, since this behavior indicates that someone is trying to identify the machines and open ports in the organization.

This identification phase is used by attackers and is known as Information Gathering.
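A port scan shows up in the firewall deny logs as one source IP hitting many distinct ports. A minimal sketch of that detection, where the threshold of 10 ports and the log tuples are illustrative assumptions:

```python
from collections import defaultdict

# Sketch: flag source IPs whose denied connections touch many
# distinct destination ports, the signature of a port scan. The
# threshold of 10 ports is an illustrative assumption.
SCAN_THRESHOLD = 10

def detect_port_scans(denied):
    """denied: iterable of (source_ip, destination_port) tuples."""
    ports_by_ip = defaultdict(set)
    for ip, port in denied:
        ports_by_ip[ip].add(port)
    return [ip for ip, ports in ports_by_ip.items()
            if len(ports) >= SCAN_THRESHOLD]

denied = [("203.0.113.9", p) for p in range(20, 35)]        # scanner
denied += [("198.51.100.4", 445), ("198.51.100.4", 445)]    # noise
scanners = detect_port_scans(denied)
```

Host scanning can be detected the same way by counting distinct destination IPs per source instead of distinct ports.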

Types of security events: aggregated and correlated

Introduction
Security events are produced by the devices and sources of an organization through the logs they generate. It is very important to understand the following concepts:
Port scanning
Port scanning is done with a program that checks the status of the ports of a machine connected to a network. It detects whether a port is open, closed, or protected by a firewall. It is used to discover which common services the machine is offering and possible security vulnerabilities associated with the open ports. It can also identify the operating system the machine is running based on its open ports. System administrators use it to analyze possible security problems, but it is also used by malicious users trying to compromise the security of the machine or the network. There are several port-scanning programs; one of the best known is Nmap, available for both Linux and Windows.
Host scanning
Host scanning is also done with a program, but it only detects whether or not there is a machine behind an IP.
Brute-force attack
In cryptography, a brute-force attack is a way to recover a key by trying all possible combinations until the one that grants access is found.
Dictionary attack
A dictionary attack is a cracking method that tries to find a password by testing all the words in a dictionary. This type of attack is usually more efficient than a brute-force attack, since many users choose an existing word in their language as a password to make it easy to remember, which is not a recommended practice.

Dictionary attacks are unlikely to succeed against systems that use strong passwords mixing uppercase and lowercase letters with numbers (alphanumeric) and other symbols. However, remembering such complex passwords is difficult for most users. There are variants that also check typical substitutions (certain letters replaced by numbers, two letters swapped, abbreviations), as well as different combinations of uppercase and lowercase.
Terminal Services
Terminal Services is a component of Windows operating systems that allows a user to access applications and data stored on another computer over the network.

III. Description of the network environment
In the previous unit we covered the ways to obtain logs from the organization's different sources. Now that we have the normalized logs in our SIEM, thanks to the collector and the agent we installed on the server, we can exploit these events to add intelligence to them.

Currently, we have three sources of logs:

• Firewall.
• Windows Server: Windows events.
• Windows Server: IIS web server.
Description of the network environment

Remote and local events

The firewall monitors all connections entering and leaving the organization. We know that access to the web server from the Internet is allowed. We also know that users of the organization can authenticate against the Windows server, so several ports will be enabled to allow that communication and to serve the web pages hosted on the Windows server.

Windows events can be produced remotely or locally. Remote means that a user from outside the organization's network, or from within it, could generate the event. Other events, however, can only occur locally on the server.

• Event ID 4624: a user authenticates correctly -> Remote and Local.
• Event ID 4625: a user fails to authenticate correctly -> Remote and Local.
• Event ID 4725: an account has been locked after too many attempts -> Remote and Local.
• Event ID 4726: an account has been deleted -> Local.
• Event ID 4720: an account has been created -> Local.
Microsoft uses port 445 to handle successful authentications (Event ID 4624), which is why remote authentications can occur.

We also know that the Windows server runs a web server, IIS (Internet Information Services), listening on port 80.

IV. Aggregated events

Aggregated events are events from the same source that are added to a counter, with a start time and an end time marked by the maximum number of events allowed in the counter.
Let's imagine we receive a first Windows event, Event ID 4726, with a timestamp of 20:01:40 (in hours:minutes:seconds format).

Next, we receive another event with the same Event ID, with a timestamp of 21:05:21.

Description of the reception of events per hour

Counter at two events

We now have two identical events:

• At 20:01:40.
• At 21:05:21.
Following the definition of aggregated events, the counter stands at two events.

The event counter is usually set with a higher number of


events in order to eliminate false positives; in this way, the
SIEM operator is prevented from working on false positives.

Another case would be outside working hours, knowing that there is no one in the organization's facilities. In this case, a counter = 1, that is, a single event, would suffice to raise the alarm.
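The aggregation logic just described can be sketched as follows. The threshold value is a placeholder and the 7:00 pm cut-off is taken from the example below; the function name is illustrative:

```python
from datetime import datetime, time

WORKDAY_END = time(19, 0)   # 7:00 pm, per the example
THRESHOLD = 5               # counter size; tunable per organization

def should_alert(timestamps, threshold=THRESHOLD):
    """Aggregate identical same-source events into a counter.

    Alert when the counter reaches the threshold, or immediately on a
    single event that occurs after working hours."""
    counter = 0
    for ts in timestamps:
        counter += 1
        if ts.time() >= WORKDAY_END:   # out of hours: a single event suffices
            return True
        if counter >= threshold:       # in hours: wait for the threshold
            return True
    return False
```

A real SIEM would also expire the counter at the end of its time window; that detail is omitted here for brevity.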

Knowing that the organization's working hours end at 7:00 pm and that the system administrators also work within that schedule, we can confirm that these two events are suspicious, since the deletion of user accounts outside working hours may indicate that someone is accessing the server and deleting user accounts.

With this context, an alert could be generated to warn an operator and review why these events have occurred.
Description of receipt of events by counter
Imagine now that we receive the Event ID 4726 and then the
Event ID 4720. Both are received one after the other and
both after work hours.

Again we are in a suspicious situation, since first an
account has been deleted and then another has been
created, both outside of working hours.

This type of event generates aggregate alerts, since both events are from the same source, in this case, Windows events.

Let's continue with more possible aggregate events. Imagine that we received a blocked-account event with Event ID 4725. We can examine all the information that this event gives us at the following URL:
http://eventopedia.cloudapp.net/EventDetails.aspx?id=505b3de8-87e1-4eb4-9ae6-53f5bf826dc1

Information that the event gives us with ID 4725

As we can see, it gives us the name of the computer where the event occurred, apart from the account that has been blocked.

Our intention is to discover whether someone inside the organization who knows the user accounts of their colleagues is testing passwords in order to obtain their credentials and access their folders. This technique is known as a brute force attack.
Description of reception of events with username
We receive an event in the SIEM with Event ID 4725, produced from the Users01 computer, and around the time of the first event we receive a second event with the same Event ID 4725, but affecting an account different from the first, also from the same Users01 computer.

Event Timeline ID 4624


In this context, an alert should be issued reporting that the Users01 computer has blocked two different accounts, when normally it should only be possible to block the account of the user who usually works on that computer. The fact that it occurred from the Users01 computer indicates that this type of event has occurred remotely.

Again, we return to another situation of aggregating events, once more outside working hours. A Windows event with ID 4624 is received, indicating that a successful authentication has occurred; but, as it is received outside business hours, it indicates that someone has authenticated on the system.

Demonstration of a demilitarized zone (DMZ)

Now let's analyze the firewall that delimits the DMZ, or demilitarized zone. Our intention is to know what traffic passes through the DMZ firewall and to be able to issue alerts based on security events.
We want to identify two situations that can occur in the organization: the scanning of machines and the scanning of ports.

Machine scanning occurs when a user uses an application that tries to discover how many machines or computers are on their network. For this, the application has an IP address (the IP address assigned to the user who connects to the organization in order to work). Knowing that IP address, it can work out the range of IPs it should probe to find out whether there are active computers.

Demonstration of connection of IPs with Destination IP


We know that the same source IP address (that of the internal attacker's application) will query the rest of the organization's destination IPs.

Demonstration of connection of IPs with specific Destination Port


Knowing this way of functioning, we can analyze the logs of the traffic that passes through the DMZ firewall and identify this behavior.

Port scanning occurs from the same source IP address to the same destination IP address. What changes with each request is the destination port.

In both situations, scanning of ports and of machines, you have to keep a counter associated with the source IP, such that if a source IP generates more than "x" events towards "x" different IPs, a machine scan is taking place. In the case of a port scan, two counters must be maintained: one for the source IP and another for the destination IP-destination port pair. In this way, false positives are avoided by differentiating legitimate traffic from traffic that is not.
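The two counters can be sketched as follows. The thresholds stand in for the organization's "x" values, and the flow-tuple format is an assumption about how the firewall log has been normalized:

```python
from collections import defaultdict

MACHINE_SCAN_THRESHOLD = 20   # distinct destination IPs per source ("x")
PORT_SCAN_THRESHOLD = 20      # distinct ports per source/destination pair

def detect_scans(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples taken from
    the DMZ firewall log. Returns the set of suspected scans."""
    dst_ips = defaultdict(set)       # src -> destination IPs contacted
    dst_ports = defaultdict(set)     # (src, dst) -> destination ports tried
    suspects = set()
    for src, dst, port in flows:
        dst_ips[src].add(dst)
        dst_ports[(src, dst)].add(port)
        if len(dst_ips[src]) > MACHINE_SCAN_THRESHOLD:
            suspects.add((src, "machine-scan"))
        if len(dst_ports[(src, dst)]) > PORT_SCAN_THRESHOLD:
            suspects.add((src, "port-scan"))
    return suspects
```

Tuning both thresholds against a baseline of legitimate traffic is what keeps the false-positive rate down, as the paragraph above notes.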

V. Correlated Events

Remote Desktop Connection

We know that the organization allows, through the firewall that is connected to the Internet, Remote Desktop or Terminal Services, so that an external worker can connect to the Windows Server with their credentials.

Demonstration of the route from the source IP to a server port

In this case, the port used by Remote Desktop is 3389 TCP, so we must look for an event showing the source IP of the accessing user, the destination IP of the Windows server, and destination port 3389.

Demonstration of a search of events with three sources

We will also look for Windows events, in this case all those we covered at the beginning of the unit, so we will use a total of three sources: Windows Events, the Firewall Router and the DMZ Firewall.
Looking at the previous graph, we can state that a correlated event is the correlation of events from different sources. These events generate correlated alerts.

Requests and events

Suppose an attacker obtains, from the metadata of a PDF file, a user name of the organization. In turn, the attacker obtains the IP where the workers of the organization access the remote desktop and starts a brute-force process to obtain the password of the user found in the PDF file. This situation causes:

• A multitude of requests accepted by the firewall with destination port 3389.
• A multitude of Windows events with Event ID 4625.
• A Windows event with Event ID 4624, upon obtaining a valid password.
When the three conditions described above are met, an
alarm should be generated, since everything indicates that a
user external to the organization has obtained access.
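The three conditions can be sketched as a simple correlation check. The thresholds and the event-tuple format are assumptions for illustration:

```python
# Correlated brute-force rule: many accepted firewall connections to
# port 3389, many failed logons (4625), then one successful logon (4624).
FW_THRESHOLD = 10            # assumed "multitude" of firewall hits
FAILED_LOGON_THRESHOLD = 10  # assumed "multitude" of 4625 events

def correlated_brute_force(events):
    """events: iterable of (source, value) tuples, e.g. ("firewall", 3389)
    or ("windows", 4625). True when all three conditions are met."""
    fw_hits = failed = success = 0
    for source, value in events:
        if source == "firewall" and value == 3389:
            fw_hits += 1
        elif source == "windows" and value == 4625:
            failed += 1
        elif source == "windows" and value == 4624:
            success += 1
    return (fw_hits >= FW_THRESHOLD
            and failed >= FAILED_LOGON_THRESHOLD
            and success >= 1)
```

A production rule would also constrain all three conditions to the same source IP and time window; those details are left out to keep the sketch short.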
Correlation of IP addresses
Now we are going to correlate the IPs that request the corporate website on the IIS web server with a list of malicious IPs. This is the real log of an organization, where it can be seen that a user is making suspicious POST requests to a PowerShell resource.
The header of the file lists the fields, separated by a blank space; we are interested in the IP of the user who is making the requests.

There are two types of IP: those in IPv4 format, which are numeric and separated by periods (e.g., 10.1.2.21), and IPs in IPv6 format, such as fe80::3383:9955:cd35.

Correlation of IPv4 with IIS logs

We are going to correlate the IPv4 addresses of the IIS log with malicious IP lists.

The list of malicious IPs can be downloaded from www.badips.com , where everyone who has had a security incident with one of the IPs can share it with the rest of the world. Once the event is correlated, if it matches any of the IPs, an alert could be launched.
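A sketch of the IIS-log-against-blacklist correlation follows. The client-IP column index is an assumption; real IIS logs declare the field order in their #Fields header line:

```python
import ipaddress

def match_blacklist(iis_lines, blacklist):
    """iis_lines: IIS log lines with space-separated fields, the client IP
    assumed to be in column index 2. blacklist: a set of malicious IPs,
    e.g. loaded from a feed such as badips.com. Returns matching lines."""
    alerts = []
    for line in iis_lines:
        if line.startswith("#"):        # skip IIS header/comment lines
            continue
        fields = line.split()
        client_ip = fields[2]
        try:
            ipaddress.ip_address(client_ip)   # ignore malformed fields
        except ValueError:
            continue
        if client_ip in blacklist:
            alerts.append((client_ip, line))
    return alerts
```

The same loop works for IPv6 clients, since ipaddress.ip_address accepts both formats.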

Search for events: queries

Introduction
Most SIEMs, through their web console, allow searches over the normalized logs of all the devices that are transmitting to the SIEM collector. Once the logs are stored and normalized, they must be exploited to obtain information. With this information, the dashboards, alerts and searches are built.

Dashboards will allow us, through graphics, to know the real state the organization is in.

Alerts allow events to be reported thanks to the intelligence of aggregated or correlated events.

Searches or queries will help us find an event among all the logs of the devices. These searches are carried out through the Structured Query Language (SQL) or through the proprietary language of the SIEM itself.

The SQL language allows events to be searched as if they were in a database. It is a declarative language: you just have to indicate what you want to obtain.

SQL is a language very similar to natural language; specifically, it resembles English, and it is very expressive. For these reasons, and as a standard language, SQL can be used to search for events in SIEMs such as Arcsight, Qradar and Bitacora.

III. Search for an event

To search for an event from a log source, it would be done as follows:

Example of how to search for an event from a log source

SELECT column_name_to_select AS col_renamed
FROM table_to_consult;

column_name_to_select = the log consists of several fields: date, IP, and so on.

col_renamed = the field can be renamed when it is returned by the query.

table_to_consult = in this case, the source of SIEM logs to consult.

The AS option allows us to rename the columns (fields of the log) that we want to select or the tables that we want to consult, which, in this case, is only one. In other words, it allows us to define aliases. Notice that the keyword AS is optional.

Example of a log called "log_iis"

Imagine that this is a log of an Internet Information Server and that the name of the table is log_iis.

How to make a query about the SIEM

SELECT *
FROM log_iis;

Demonstration of what is returned when the SIEM is consulted

How to make the event_date and client_ip query on the SIEM

Demonstration of what is returned when event_date and client_ip are consulted

If we only wanted to see the fields event_date and client_ip, the query carried out on the SIEM console would be:
SELECT event_date, client_ip
FROM log_iis;
How to query a specific column
With the SELECT FROM statement, we can select columns from a table, but in order to select rows from a table, you must add the WHERE clause. The format is as follows:

IV. Search for an event with WHERE

How to make the query with the WHERE clause

The WHERE clause allows us to obtain the rows that meet the condition specified in the query:
SELECT client_ip FROM log_iis
WHERE event_date='08/05/2015 0:12';
Answer the query with WHERE
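The same query can be tried outside a SIEM with SQLite; the table contents here are invented to mirror the example:

```python
import sqlite3

# A small in-memory stand-in for the log_iis table (rows are invented;
# a real SIEM would expose its own query console instead).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE log_iis (event_date TEXT, client_ip TEXT, "
    "http_status_code INTEGER)"
)
con.executemany("INSERT INTO log_iis VALUES (?, ?, ?)", [
    ("08/05/2015 0:12", "10.1.2.21", 200),
    ("08/05/2015 0:13", "10.1.2.22", 404),
    ("08/05/2015 0:14", "10.1.2.21", 500),
])

# SELECT with a WHERE clause, as in the text:
rows = con.execute(
    "SELECT client_ip FROM log_iis WHERE event_date = '08/05/2015 0:12'"
).fetchall()
print(rows)   # [('10.1.2.21',)]
```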

SQL Operators
To define the conditions in the WHERE clause, we can use
any of the operators that SQL has:

V. Search with DISTINCT

How to query with the DISTINCT clause

If we want a query to show only events without repetition, the keyword DISTINCT must be placed immediately after the SELECT.

SELECT DISTINCT column_name_to_select
FROM table_to_consult
(WHERE conditions);

Example of IP search with DISTINCT

For example, if we wanted to see which browser IPs have connected to the organization's website, we could do:
SELECT DISTINCT client_ip FROM log_iis

Result of the IP search with DISTINCT

VI. Search with aggregation functions

SQL aggregation functions

SQL offers us the following aggregation functions to perform various operations on the events:

In general, aggregation functions are applied to a column, except for the COUNT aggregation function, which is normally applied to all the columns or log sources. Therefore, COUNT(*) will count all the rows in the table or tables that meet the conditions. If COUNT(DISTINCT column) is used, only the values that are not null are counted, without counting repetitions, and if COUNT(column) is used, only the values that are not null are counted.
Example using COUNT function

COUNT function response
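The three COUNT variants can be checked with SQLite; the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log_iis (client_ip TEXT)")
con.executemany("INSERT INTO log_iis VALUES (?)",
                [("10.1.2.21",), ("10.1.2.21",), ("10.1.2.22",), (None,)])

# COUNT(*): every row, including the one whose client_ip is NULL.
total = con.execute("SELECT COUNT(*) FROM log_iis").fetchone()[0]
# COUNT(column): non-NULL values only.
non_null = con.execute("SELECT COUNT(client_ip) FROM log_iis").fetchone()[0]
# COUNT(DISTINCT column): non-NULL values, without repetitions.
distinct = con.execute(
    "SELECT COUNT(DISTINCT client_ip) FROM log_iis"
).fetchone()[0]
print(total, non_null, distinct)   # 4 3 2
```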

VII. Search with subquery

Search example with a WHERE subquery
A subquery is a query included within the WHERE clause. Sometimes, to express certain conditions, there is no other way than to obtain the value we seek as the result of another query.
Search response with the highest record of http_status_code

With the previous statement we will search among the events for the one that registered the highest http_status_code. When it is found, all the data associated with that record will be returned:

VIII. Search with BETWEEN

Example search with a BETWEEN query

To express a condition that looks for a value between specific limits, we can use BETWEEN.

Search with a BETWEEN query over a specific time period

Answer of the query with BETWEEN

IX. Search with IN

Example search with the IN / NOT IN predicate

To check whether a value matches the elements of a list, we will use IN, and to see whether it does not match, NOT IN:

Search with the IN predicate

Search response with the IN predicate

X. Search with LIKE

To check whether a column of string type contains a specific character or pattern, we can use LIKE:

XI. Sort events with ORDER BY

If you want the data to appear in a certain order when you make a query, you must use the ORDER BY clause in the SELECT statement, which has the following format:
If the order is not specified, it will be ascending by default. To make it descending, it must be specified with the keyword DESC.
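BETWEEN, IN, LIKE and ORDER BY can likewise be tried with SQLite on invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log_iis (client_ip TEXT, http_status_code INTEGER)")
con.executemany("INSERT INTO log_iis VALUES (?, ?)", [
    ("10.1.2.21", 200), ("10.1.2.22", 404), ("10.1.2.30", 500),
])

# BETWEEN: status codes within specific limits.
errors = con.execute(
    "SELECT client_ip FROM log_iis WHERE http_status_code BETWEEN 400 AND 599"
).fetchall()

# IN: match against a list of values.
listed = con.execute(
    "SELECT client_ip FROM log_iis "
    "WHERE client_ip IN ('10.1.2.21', '10.1.2.30')"
).fetchall()

# LIKE: string pattern, plus ORDER BY ... DESC for descending order.
ordered = con.execute(
    "SELECT client_ip FROM log_iis WHERE client_ip LIKE '10.1.2.2%' "
    "ORDER BY http_status_code DESC"
).fetchall()
print(errors, listed, ordered)
```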

Dashboards

Introduction
Dashboards are a fundamental tool in the management of a SIEM. Their main mission is the representation of the information that is monitored in the devices or sources of the organization. Another fundamental characteristic of dashboards is knowing whether there are problems when collecting from some source of information, since no data appears in the indicators of the panels when there is a problem.

DNS packets over IP

Number of phishing events per day


The creation of the different control panels will be motivated by the type of compliance or standard to be met.
It is essential to emphasize that a dashboard does not represent the information of one specific moment in time; rather, it shows the information of a period of time together with a refresh rate for the panels. These parameters will be imposed by the type of standard that must be met.

III. Types of dashboard

As we mentioned before, dashboards should be created according to the type of standard that must be met. Let's review the different types of graphics that exist today:

Traffic light
This type of graph tells us the state of a log source or of a compliance requirement.

Columns
In this example it can be seen that there is always an
indicator of the time period that is displayed. In this case the
IP of the monitored device is represented on the 'x' axis,
while the number of requests is shown on the 'y' axis, all
applied to the assets categorized as a web server.

Bars
The bars represent the same information as the columns,
exchanging the axes. Now the sources of information are on
the 'y' axis, while the data to be counted is on the 'x' axis.

Lines
The lines are oriented to represent comparatively events
during periods of time. As can be seen, on the 'x' axis the
specific days of the time period are represented, while on the
'y' axis the total number of events that have reached a given
day is displayed.
Areas
The areas are also intended to offer a temporal representation of the events that have taken place. In this case we observe how along the 'x' axis we can see the days of a month, while on the 'y' axis the number of events is shown. In this way, the number of events in a set period of time is better represented visually.

Stacked bars
The stacked bars are also used to represent events that have
occurred over a period of time. The bars have identified in
themselves the number of events, which facilitates their
understanding.

2D pie charts
2D pie charts are used to show event totals; that is, in the example shown, the number of GET and POST requests that have occurred on a web server during a period of time is represented. From the total number of requests in that period of time, the percentages are extracted and represented.

Donut

3D Donut
Donut charts, both 3D and 2D, represent the total number of events that occurred in a period of time. The functionality is the same as when using 2D pie charts.

Data table
The data tables are used to represent the events as such
along with an associated counter. In this case, you can see
the top of pages visited by an organization.
IV. Dashboards for PCI DSS

PCI DSS is a data security standard for the payment card industry and establishes the requirements that we will see below.

Requirement 1
Install and maintain a configuration of firewalls to protect
the data of cardholders.
Firewalls are devices that control computer traffic between an entity's internal (trusted) networks and untrusted (external) networks, as well as the traffic entering and leaving more sensitive areas within an entity's confidential internal networks. The cardholder data environment is an example of a more confidential area within an entity's trusted network.
The firewall examines all traffic on the network and blocks transmissions that do not meet the specified security criteria.
All systems must be protected against unauthorized access from untrusted networks, whether they enter through the Internet as electronic commerce, through employees' desktop Internet access or e-mail access, through dedicated connections such as business-to-business connections, through wireless networks or through other sources. Frequently, seemingly insignificant routes of connection to and from untrusted networks can provide unprotected access to key systems. Firewalls are an essential protection mechanism for any computer network.

Requirement 2
Do not use system passwords and other security parameters provided by vendors.

Malicious people (external and internal to an entity) usually use the default passwords of the vendors and other parameters that the vendor presets in order to compromise systems. These passwords and parameters are well known among hacker communities and are easily determined through public information.

Requirement 3
Protect the cardholder's data that was stored.

Protection methods such as encryption, truncation, masking and hashing are important components of protecting cardholder data. If an intruder violates other security controls and gains access to the encrypted data, without the proper encryption keys they will not be able to read or use that data. Other effective methods of protecting stored data should also be considered to mitigate potential risks. For example, methods to minimize risk include not storing cardholder data unless absolutely necessary; truncating cardholder data if the full PAN (primary account number) is not needed; and not sending the PAN using end-user messaging technologies such as e-mail.

Requirement 4
Encrypt the transmission of cardholder data in open public
networks.
Confidential information must be encrypted during its transmission through networks that criminals can easily access. Improperly configured wireless networks and vulnerabilities in their encryption and authentication protocols remain the targets of those who exploit these vulnerabilities in order to access cardholder data environments.

Requirement 5
Protect all systems against malware and update programs
or antivirus software regularly.

Malicious software, called "malware", including viruses, worms and Trojans, enters the network during many approved business activities, including worker e-mail and Internet usage, laptops and storage devices, and exploits system vulnerabilities. Antivirus software should be used on all systems that malware usually affects in order to protect them against current and future malware threats. The option of including other anti-malware solutions as a complement to antivirus software can be considered; however, these additional solutions do not replace the implementation of antivirus software.

Requirement 6
Develop and maintain secure systems and applications.

Unscrupulous people use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities can be addressed through security patches provided by vendors. The entities that administer the systems must install these patches. All systems must have the correct software patches to prevent malicious people or malicious software from improperly using, or putting at risk, cardholder data.

Requirement 7
Restrict access to the data of the cardholder according to
the need to know that the company has.

In order to ensure that authorized personnel are the only ones that can access important data, systems and processes must be implemented that limit access according to the need to know and in accordance with the responsibility of the position.

"The need to know" is the situation in which rights are granted to the least amount of data and the fewest privileges necessary to perform a task.

Requirement 8
Identify and authenticate access to system components.

By assigning an exclusive ID (identification) to each person who has access, it is guaranteed that each one will be responsible for their actions. When this responsibility is exercised, the measures implemented on data and critical systems are in the hands of known and authorized processes and users, and, in addition, their activity can be tracked.

The effectiveness of a password is determined, to a large extent, by the design and implementation of the authentication system, especially the frequency with which an attacker can attempt to guess the password and the security methods used to protect user passwords at access points, during transmission and in storage.

Requirement 10
Track and monitor all access to network resources and cardholder data.

The logging mechanisms and the possibility of tracking user activities are critical for the prevention, detection and minimization of the impact of data compromises. The presence of logs in all environments allows tracking, alerts and analysis when something does not work well. Determining the cause of a compromise is very difficult, if not impossible, without logs of system activity.

Requirement 11
Test security systems and processes regularly.

Vulnerabilities are discovered continuously by malicious people and researchers, and are introduced by new software. The system's components, processes and custom software must be tested frequently to ensure that security controls continue to reflect a dynamic environment.

Requirement 12
Maintain a policy that addresses information security for all personnel.

A sound security policy establishes the degree of security for the entire entity and informs staff what is expected of them. All personnel must be aware of the confidentiality of the data and of their responsibility to protect it. For the purposes of Requirement 12, the term "personnel" refers to full-time and part-time employees, temporary employees, contractors and consultants who "reside" at the entity's facilities or who have access to the cardholder data environment.

Requirement Status
Finally, to see if all PCI DSS requirements are met, a
dashboard can be designed where all the traffic lights
associated with the requirements appear.
V. Creating a control panel in Splunk

Insert credentials into Splunk

Once we have Splunk installed, we enter the access URL of the SIEM and authenticate ourselves.

Go to Manager
If it is the first time we log in, it will ask us to create a new password. Once inside, we go to the "Manager" section.

User interface
Next, we click on the user interface section:

Go to "Views" and click on "add new"

Fill in the form with the destination where the control panel will be housed
A drop-down will open where we can select where the dashboard will be housed, the name of the dashboard and its configuration. In this case, with Splunk, the control panels are built with XML files.

XML file: example


<form autorun="true">
  <label>Simple XML</label>
  <description>A Simple XML dashboard created in Splunk Web</description>
  <fieldset>
    <input type="dropdown" token="headcount">
      <label>Number of results</label>
      <choice value="5">5</choice>
      <choice value="10">10</choice>
      <choice value="15">15</choice>
      <default>5</default>
    </input>
    <input type="radio" token="sourcetype">
      <label>Source Types</label>
      <choice value="*">All</choice>
      <populatingSearch fieldForValue="sourcetype" fieldForLabel="sourcetype">
        <![CDATA[index=_internal | head 1000 | top sourcetype]]>
      </populatingSearch>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <table>
      <searchTemplate>index=_internal sourcetype=$sourcetype$ | head $headcount$</searchTemplate>
      <title>index=_internal sourcetype=$sourcetype$ | head $headcount$</title>
      <option name="showPager">true</option>
      <earliestTime/>
      <latestTime></latestTime>
    </table>
  </row>
  <row>
    <chart>
      <searchString>index=_internal | stats count by sourcetype</searchString>
      <title>Count by sourcetype</title>
      <earliestTime></earliestTime>
      <latestTime></latestTime>
      <option name="charting.chart">pie</option>
    </chart>
  </row>
</form>

Dashboard with XML

Alerts and tickets

Introduction
Alerts are defined to meet a series of objectives that establish their function within a security monitoring system. Objectives of alerts:

• Warn quickly and clearly of possible attacks that are being made against the computer system.
• Detect anomalous behavior, which may indicate that the security of a system is being violated, and give sufficient information to help in the investigation of the anomaly.
• Identify violations of the internal security policies of the organization.
• Help in the optimization of the devices that make up the computer network.
• Detect loss of availability of important devices, especially other security devices in the network.
Currently, all SIEMs have an alert engine that analyzes all events. Depending on the context in which the events occur, the counter will increase. Previously, we saw events that could be aggregated or correlated. These events will be used to generate alerts that are, again, aggregated or correlated.

To be able to show the alert engine of a SIEM, we are going to use SEC (Simple Event Correlator). It is an alert engine written in the interpreted language Perl that reads log files, normalizes them and generates alerts.

II. Goals
This unit aims to teach in a practical way what an alert is, how an alert engine works and how alerts can be implemented with SEC. The main characteristics of a ticketing system such as Osticket are also shown.

III. Simple Event Correlator (SEC)

As we mentioned earlier, SEC is an open source application written in Perl. It can be downloaded from http://simple-evcorr.sourceforge.net/ .
Let's start with a basic example to understand how SEC works.
Format of the log record in Linux
Suppose we want to control root logins. Every time there is a root login on a Linux system, a log entry of the following format is produced:

Format of the configuration file

We create a configuration file (called root.conf) with the following content:

The previous capture is an example of SEC.

The first line, type, defines the type of rule, which in this case is "Single", indicating that we want to analyze single occurrences of an event from a single source.
The second line, ptype, defines how we will search for events or patterns. In this case, "RegExp" indicates that Perl regular expressions will be used.
The third line, pattern, is a regular expression that will match the log messages when there is a root login. The date and time, the source IP and the destination IP are captured as groups.
The next line, desc, is a description. This description explicitly identifies each occurrence.
The last line, action, is the action that will be taken.
In this case, the complete message (identified as $0) is added to a context called rootssh_$2, where $2 will be the source IP. Finally, an email with the content of the context is sent.
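The pattern line can be reproduced with a Perl-compatible regular expression. Here is a sketch in Python against a hypothetical sshd log line; real formats vary by distribution:

```python
import re

# A hypothetical sshd "Accepted password" line (illustrative only).
LOG_LINE = ("May 27 01:24:45 srv01 sshd[2154]: "
            "Accepted password for root from 192.168.55.89 port 4242 ssh2")

# Group 1 captures the date and time, group 2 the source IP,
# mirroring what the SEC 'pattern' line does with $1, $2, ...
PATTERN = re.compile(
    r"^(\w{3}\s+\d+ \d\d:\d\d:\d\d) \S+ sshd\[\d+\]: "
    r"Accepted \w+ for root from (\d+\.\d+\.\d+\.\d+)"
)

m = PATTERN.match(LOG_LINE)
if m:
    timestamp, src_ip = m.groups()
    print(src_ip)   # 192.168.55.89
```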

Now, suppose there is a task that every day makes a connection as root from the IP 192.168.55.89 to execute a process, and we do not want to receive an email every time this known connection is made.

What is needed is to add the following rule to the configuration file, before the previous rule.

Following the examples, let's look at an SEC rule to detect possible brute-force attacks on ssh connections.

First rule

Second rule
The first rule is of another of the types available in SEC: SingleWithThreshold.
Two options are added to those that Single had: window and thresh.
Window is the time span during which this rule must be monitoring, and thresh is the threshold, or counter, for the number of events that need to appear within that span to trigger the action of this rule.
In this case, the rule will execute the action if there are 5 failed login events within 60 seconds.
The context option is also used, which indicates that the rule is triggered only if the context does not exist.
The action line creates the context ($5 represents the source IP), which expires in 60 seconds.
Once the context has expired, an email is sent with a description and the log lines that were detected.
The second rule adds additional events to the context and extends its lifetime by 30 seconds, as long as it exists. If it does not exist, it does nothing.
The creation of these contexts, which are created dynamically, is one of the main characteristics of SEC.
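The SingleWithThreshold semantics (5 events in 60 seconds, then a suppressing context) can be sketched as follows. This is an emulation for illustration, not SEC itself:

```python
WINDOW = 60.0   # seconds, mirroring the rule's window
THRESH = 5      # failed logins needed, mirroring the rule's thresh

def threshold_correlate(event_times, window=WINDOW, thresh=THRESH):
    """event_times: sorted timestamps (in seconds) of failed logins from
    one source IP. Returns the times at which the action would fire."""
    fired = []
    recent = []               # sliding window of recent event times
    context_expires = None    # emulates the SEC context lifetime
    for t in event_times:
        if context_expires is not None and t < context_expires:
            continue                       # context exists: rule suppressed
        recent = [r for r in recent if t - r < window] + [t]
        if len(recent) >= thresh:
            fired.append(t)
            context_expires = t + window   # create context expiring in 60 s
            recent = []
    return fired
```

The second SEC rule (extending the context's lifetime as new events arrive) is omitted here to keep the sketch short.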

For example, a printer with jammed paper can emit a large number of log messages, and it would be a nuisance if an email were generated for each log message.
In SEC, a context could be created such that "a paper-jam event has been seen and a mail has already been sent", so that the rule could in future suppress the sending of mails while the context exists.

SEC includes other types of rules. The descriptions, obtained directly from the manual, are:

• SingleWithScript: search for matches in the input events and execute an action depending on the exit status of an external script.
• SingleWithSuppress: search for matches in the input events and execute an action immediately, but ignore the following matches during the next t seconds.
• Pair: search for matches in the input events, execute an action immediately and ignore the following matches until there is a match with another, different event. On the coincidence of the second event, execute another action.
• PairWithWindow: search for matches in the input events and wait t seconds for another event to appear. If this event is not observed within the given interval, execute an action; if the event appears, execute another action.
• SingleWith2Thresholds: search for matches in the input events during t1 seconds and, if a given threshold is exceeded, execute an action. Then start counting matches again and, if their number during t2 seconds is less than a second threshold, execute another action.
• Calendar: execute an action at specific times.

IV. Implementation in SEC of Firewall Alerts

Description of a network scan type attack

Log of a Cisco ASA5520

We have this example of a Cisco ASA5520 log:
May 27 01:24:45 10.30.15.14 %FWSM-4-106023: Deny tcp src EXCHANGE:INFTORR057SM_int1/5533 dst outside:62.17.216.46/80 by access-group "EXCHANGE_access_in"

Implemented rule

Description of Port Scanning type attack

Implemented rule

V. Ticketing systems
A ticketing system is a software package that manages and maintains lists of tickets, while a ticket is a record contained in the ticketing system that holds information about the alert raised by the SIEM. Typically, the ticket contains all the data necessary to manage it.

Osticket is probably the most popular among open and free ticketing systems. It is a trustworthy and globally used incident management system.

It is able to manage tickets created through email, web forms, phone calls and more, easily and simply, so it is considered a very complete tool.

Functions
Customizable fields
Personalization of the data collected from users when submitting a ticket, to help go straight to the problem.

You can create custom lists of data to add to each ticket, or specific help topics for customers to choose when creating a ticket. Custom fields, forms and lists can be added to every ticket created, or only displayed when a specific help topic is chosen. They can be configured as best suits the business needs.

HTML rich text


Rich-text or HTML email is supported, allowing staff to
write rich text in their responses and in the internal notes
published in the ticket thread.

The automatic response templates also support rich text,
including the addition of logos. Images, as well as videos, can
be added to a ticket at the moment of answering.

Filters in tickets
Allows you to apply conditional rules to incoming tickets to
assign them to the appropriate departments or staff
members.

Set an unlimited number of filters for various purposes,
covering email addresses, APIs or web forms. Define actions
such as rejecting tickets, auto-assigning to certain staff or
departments, or even sending a predefined response.
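The filter behaviour described above can be sketched as a small rule engine. This is an illustration only, not osTicket's actual implementation, and the field names used here (source, domain, department, assignee) are invented for the example.

```python
def apply_filters(ticket, filters):
    """Apply the first matching filter's actions to a ticket (a sketch;
    a real ticketing system's filter engine is much richer)."""
    for rule in filters:
        # every match condition must hold for the rule to apply
        if all(ticket.get(field) == value
               for field, value in rule["match"].items()):
            ticket.update(rule["actions"])
            return ticket
    return ticket

# Hypothetical filters: route tickets by channel and sender domain.
filters = [
    {"match": {"source": "email", "domain": "example.com"},
     "actions": {"department": "Support", "assignee": "alice"}},
    {"match": {"source": "api"},
     "actions": {"department": "Integrations"}},
]
```

A ticket arriving through the API would be routed to the hypothetical Integrations department, while an email from example.com would be assigned to alice in Support.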

Help topics
Configurable help topics for tickets route enquiries
without exposing internal departments or priorities.

Tickets can be optimized for a faster response by being
directed to predetermined departments. In combination with
custom forms, you can design a form for a series of specific
help topics to gather additional information for specific
requests.

Collision prevention agent


Ticket locking mechanism that allows staff to lock tickets
while responding, avoiding contradictory or duplicate
responses.

It prevents multiple agents from responding to the same
ticket and establishes the amount of time a lock is held on a
ticket. While a ticket is locked, the rest of the staff cannot
respond to it until the lock expires.
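The locking behaviour can be sketched as follows. The TicketLocks class is an invented illustration of a lock with a lifetime, not osTicket's real code.

```python
import time

class TicketLocks:
    """Sketch of a ticket-locking mechanism with a lock lifetime,
    mirroring the collision-avoidance behaviour described above."""

    def __init__(self, lifetime_seconds):
        self.lifetime = lifetime_seconds
        self._locks = {}  # ticket_id -> (agent, acquired_at)

    def acquire(self, ticket_id, agent, now=None):
        """Try to lock a ticket; refuse if another agent holds a live lock."""
        now = time.time() if now is None else now
        holder = self._locks.get(ticket_id)
        if holder and holder[0] != agent and now - holder[1] < self.lifetime:
            return False            # another agent holds an unexpired lock
        self._locks[ticket_id] = (agent, now)
        return True
```

The second agent is refused while the first agent's lock is live, but succeeds once the configured lifetime has elapsed.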

Assign and transfer


Transfer tickets between departments to make sure each
one is being handled by the right staff. Assign tickets to a
staff member or a team.

Tickets can be auto-assigned by help topic or department
when they arrive, but what if they have to be reassigned? No
problem. You can reassign tickets to a staff member or a
team, or transfer them to a different department. Transfers
and assignment notes are recorded as internal notes in the
ticket thread so that you can track where the ticket has been
routed for processing.

Autoresponder
Sends a configurable automatic response when a new
ticket is created or a message is received.
Automatic responses can be created to extract information
from the ticket to personalize the email. osTicket supports
placeholder variables such as %{ticket.name.first}, which
becomes the user's first name when the autoresponder is
sent. Automatic responses can be edited and customized for
each department and associated with help topics.
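The placeholder expansion can be sketched in Python. This imitates the %{...} syntax shown above, but the implementation is an assumption for illustration, not osTicket's actual code.

```python
import re

def expand_placeholders(template, ticket):
    """Replace %{dotted.path} placeholders with values looked up in a
    nested dict (a sketch of autoresponder variable expansion)."""
    def lookup(match):
        value = ticket
        # walk the dotted path, e.g. ticket -> name -> first
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"%\{([\w.]+)\}", lookup, template)
```

For example, a template greeting the user by first name is filled in from the ticket data at send time.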

Internal notes
Add internal notes to tickets for the staff. Activity logs let
you see which events or actions were taken, when they were
carried out and by whom.

Service level agreements


SLA plans allow you to track tickets and their due dates,
receive overdue alerts and notices about missed due dates,
and prioritize tickets. You can create an unlimited number of
SLA plans and assign them to help topics, departments or
ticket filters.
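The due-date arithmetic behind an SLA plan can be sketched as follows. This is a minimal illustration with an invented grace period in hours; real SLA plans also handle business hours, pauses and priorities.

```python
from datetime import datetime, timedelta

def due_date(created, sla_hours):
    """Compute a ticket's due date from a hypothetical SLA grace period."""
    return created + timedelta(hours=sla_hours)

def is_overdue(created, sla_hours, now):
    """True once the current time is past the SLA due date."""
    return now > due_date(created, sla_hours)
```

A 24-hour SLA on a ticket created Monday at 09:00 makes it due Tuesday at 09:00; any later check reports it as overdue.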

Customer portal
All requests for help and answers are filed online. The user
can log in using the email and the ticket ID. No user account
or registration is required to send a ticket.

The disadvantage of this system is the support provided to
the user. Being a completely free system, technical support is
much scarcer than with any paid system and, therefore, it is
not always possible to cover a failure in the tool within the
necessary time. Occasionally, it would be necessary to wait
for a full update of the system.

Reporting: reports
introduction
Reports are the way to show the state of an organization's
network. Unlike dashboards, it is not necessary to enter the
SIEM console to check the status. Reports can be scheduled
to run and be sent to the recipient for viewing.

Currently, all SIEMs have predefined reports based on the
standard or compliance requirement to be met. In this unit we
will see how to create a report with the Crystal Reports tool,
which allows us to connect to the SIEM database engine and
build customized reports.

Crystal Reports is a powerful yet easy-to-use tool for
designing and generating reports from data stored in a
database or other information source. It is, by far, the most
popular tool in its category thanks to its reporting
capabilities within custom applications, helped no doubt by
the fact that, for over ten years, Crystal Reports was bundled
as standard with Microsoft development tools (Visual Basic
and later Visual Studio).

II. goals
This unit aims to teach, in a practical way, how to query a
SIEM database engine through Crystal Reports. In this way
you can generate fully customized reports, independent of
those built into the SIEM.

III. Reports with Crystal Reports
We will create a report with the following structure: a main
report will serve as a container, where the visual aspects of
the report and page headers and footers will be defined. No
queries to the data source will be made in it; instead, it will
host the subreports in which we will query the SIEM database
engine.

It is necessary to have the report generation tool Crystal
Reports 2008 installed on the workstation where we are
going to create the reports.

Creation of the main report


Creation of a new report
We open the Crystal Reports report creation tool and
create a new report through the menu option File → New →
Standard report.

Choice of database for the report


The wizard will open a window where the connection to the
data source to be used must be created. To do this, we
expand the option Create new connection and choose the
data source corresponding to the connection we want to use.
In our case, we will connect to the MySQL database using the
JDBC driver. The type of database may vary depending on
the SIEM.
Enter connection data
When you open this option, a window will appear to enter
the connection data:

• Connection URL: jdbc:mysql://hostname_mysql:port,
where hostname_mysql is the hostname, FQDN or IP
where the SIEM database is located, and port is the port
on which it listens.
• Database class name: com.mysql.jdbc.Driver.
Enter password of the connection

Demonstration of the creation of the connection


When choosing Finish, a window appears showing that the
created connection has been added; in it we can explore the
databases and tables to which we have access.

Choice of tables
At this point, we can choose the tables of the database that
we are going to use in the report and the fields to be
displayed. These tables can be the sources that the SIEM has
integrated; that is, if it has a web server such as IIS (Internet
Information Server) integrated, we could select this table and
see the fields that can be imported into the report.

In the example at hand, the function of the main report will
be to host and structure all the subreports that we will create
later.

Therefore, since the data source will be used not in the
main report but in each of the subreports, we do not select
any table and choose Next.

Choice of template to visualize the data


Then we can choose whether we want to apply one of the
available templates or one of our own that we have created.
On this occasion, we choose No template.

After clicking Finish, the created report is displayed as a
result.
Both in the design screen and in the preview we can see
that the report has the following sections:

Report Header (EI)


Objects located in the Report Header area are printed only
once, at the beginning of the report.

• Charts and cross tables located in this area contain data
for the entire report.
• Formulas located in this area are evaluated only once, at
the beginning of the report.

Page Header (EP)


Objects located in the Page Header area are printed at the
beginning of each new page.

• It is not possible to locate graphs or cross tables in this
section.
• Formulas located in this area are evaluated once per
page, at the beginning of each new page.
The created sections can easily be expanded or collapsed
by dragging the bar that delimits them on the design screen.
Report template
In addition, by right-clicking on the report it is possible to
open the Section Assistant.

Select the page header


Through this wizard we can insert, remove or merge
sections, and configure certain aspects of each section, such
as color, pagination and other potentially useful options.
Among them we can highlight the Delete (suppress) option,
which allows us to enter a formula that is evaluated and, if it
holds, prevents the section from appearing in the report.

Print date and Page number


When creating the report through the wizard, we see that
the wizard itself includes two fields in the report: Print Date
and Page Number. If we look at the field explorer, we can see
that these are two of the special fields that the tool
automatically creates for each report. These fields can be
added to the report; in that case, a green mark (tick) will
appear next to them.

The Print Date field has been added in the Page Header
section, so it would appear on all pages of the report. Since
we only want it shown on the first page, we move the text
box to the Report Header section.

Header of the report


In this case, we are going to delete the Page Number field
and instead show the Page N of M field in the report. In
addition, in the Report Header section we add the Report
Title field.

Report properties
The value that the report's Title field takes at runtime is the
one configured through the menu option File → Information
Summary. A window opens where we can enter the title; in
this case, we set it to MySQL Report.

Preview report
If we now go to the Preview screen, we see that the title we
entered appears.
Add a subreport with a chart
Through the tool's menu we can add a subreport to our
main report. For this we use the option Insert → Subreport.

How to add a subreport


We must choose between inserting an existing report or
creating one with the wizard. In this case we choose to create
a new one, which we will call MysqlVolReport.

Select the tables that are needed


Click on the Report Wizard; the window to create the
connection appears, where we select the tables of the
database to be used. In this case, since we already have the
connection created and added to My connections, we expand
it and see that we have two options:

• Add a command: using this option we can write the SQL
query ourselves to run against the data source. This SQL
query is the same one that would be issued from the
SIEM console.
• Choose the specific tables that we are going to use:
choose the reports_running table (in our example) and
add it to Selected Tables.
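The kind of query that Add a command accepts can be sketched as follows. Here an in-memory SQLite database stands in for the SIEM's MySQL engine (an assumption made only so the example is self-contained), and the reports_running columns used are the ones that appear later in the chart (output_format, document_id).

```python
import sqlite3

# In-memory stand-in for the SIEM's MySQL database.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE reports_running (document_id INTEGER, output_format TEXT)"
)
con.executemany(
    "INSERT INTO reports_running VALUES (?, ?)",
    [(1, "pdf"), (2, "pdf"), (3, "csv")],   # invented sample rows
)

# The same grouping the report chart performs later:
# count of document_id per output_format value.
rows = con.execute(
    "SELECT output_format, COUNT(document_id) FROM reports_running "
    "GROUP BY output_format ORDER BY output_format"
).fetchall()
```

The same SELECT, pasted into Add a command against the real MySQL database, would feed the subreport directly.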

Select the parsed log fields


We choose Next, and the program then asks for the fields to
be displayed. These fields would be the fields of the parsed
log; when selected, they will be included in the subreport that
we are creating.
In this case we are going to use them to create a graph, so
we are not interested in including them automatically; click
on Next.

Locate the subreport in the report footer


Again, we can choose whether to apply a template to the
subreport. We choose No template and click on Finish. After
this, we accept the insertion of the report, placing it in the
Footer section of the report.

Subreport
The subreport could have been added in another section,
but care must be taken: if, for example, a subreport is
included in the Detail section and the main report runs a
query, the subreport will be printed once for each record
obtained, whereas if there is no query it will only be printed
once.

As a general rule, the subreports will be included in the
Footer sections of the report.

To explore the created subreport we click on it and it opens
in a new tab. At first glance we can see that a subreport does
not have the Page Header and Page Footer sections, so if we
want to add an existing report as a subreport, we must be
careful: the Page Header becomes the Report Header and
the Page Footer becomes the Report Footer, leaving us with
two Report Header sections and two Report Footer sections.

Remove sections that are not needed


Again, through the Section Assistant, we can delete the
sections that we will not use.
Then, using the menu option Insert → Graphic, we place
the graph in the Report Header section. We can see that the
tool does not allow us to include a graph in the Details
section.

When we insert the graph, the Graph Wizard appears, where
we can configure all of the graph's options through the
different tabs.
Type
We select the type of graph to use. In this case, we keep
the default bar graph.
Data
Among the available fields we select output_format in the
On change of section, which will appear as the x axis of the
graph, and document_id in the Show values box, so that the
count of this field appears as the summary operation. This
field will be reflected on the graph axis. By selecting the field
within Show values, it is possible to change the summary
operation to be displayed; in this case it is not necessary,
since the count appears by default.
Axes
We can configure aspects of the axes, such as division lines
and value ranges. We keep the default options.
Options
We can configure general options of the graph for labels,
legend, color and layout. We keep the default options.
Highlight color
We choose different colors depending on the value that the
reports_running.output_format field can take.
Text
We enter the title we want to give the graph.

Reports grouped by output format


Once the graph is configured correctly, the result of its
execution can be observed through the menu option View →
Preview. In the report creation tool it is possible to work both
on the design screen and in the preview; each change made
on one screen is automatically reflected in the other.

Graph of the reports

Example of a report with Crystal Reports


An example of generating security reports using Crystal
Reports would be the following:

Manufacturers and Open Source solutions

introduction
This unit aims to review the main manufacturers of SIEM
solutions, looking at their main characteristics together with
their cost. It also presents the open source ELK stack
(Elasticsearch, Logstash, Kibana), which allows logs to be
collected and exploited for free. Finally, it shows how to
configure the Loggly SIEM. Loggly offers a 30-day trial in
which we can integrate different sources, with a limit of
1 GB of logs per day. We will collect logs from a Linux
system, in this case Ubuntu, as well as logs from an Apache
server, sending them to the SIEM in the cloud. Once the
reception of logs is verified, a dashboard and an alert will be
created.

II. goals
This unit presents the main SIEM manufacturers, together
with the open source ELK solution. It also shows how the
Loggly SIEM can be configured, integrating logs from a Linux
system and an Apache server.

III. Elasticsearch Logstash Kibana (ELK)
ELK is a set of applications (Elasticsearch, Logstash,
Kibana) that collect logs from a multitude of sources
(Apache, IIS, firewalls, etc.). It allows all the events occurring
in the organization to be displayed quickly, cleanly and in an
orderly way. In addition, it has configurable dashboards that
bring together all the sources, giving a global view of the
organization's entire network.

Logstash input screen

Explanation of how Logstash works

Search for events


The scheme is the one illustrated in the cover image, where
the data flow can be seen:
• Logstash allows us to collect the logs of the devices and
forward them to Elasticsearch.
• ElasticSearch is the search engine that will allow us to
query events.
• Kibana is the front where the data will be displayed.
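On the query side, Elasticsearch is driven by JSON request bodies. A hedged sketch of building such a body follows; the fields used (source, level, @timestamp) are illustrative, not a fixed ELK schema, and the function only constructs the query rather than sending it to a server.

```python
def build_query(source, level, minutes=15):
    """Build an Elasticsearch query body for recent events of one
    severity from one source (field names are illustrative)."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"source": source}},
                    {"term": {"level": level}},
                    # relative time range using Elasticsearch date math
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
                ]
            }
        },
        "sort": [{"@timestamp": "desc"}],
    }
```

A body like this would typically be POSTed to an index's _search endpoint; Kibana builds equivalent queries behind its search bar.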

IV. Splunk

Figure 8.4. Splunk homepage

Source: splunk

Splunk is a flexible tool that allows all data sources to be
indexed, scalable to any company and with sufficient
capacity to find and offer meaningful information at any
level of the organization. In forensic mode, Splunk
maximizes the productivity of IT resources, optimizing
incident response and resolution times.

Main characteristics
• Resolve incidents up to 70% faster.
• Complete view of the entire organization.
• Monitoring performance indicators by making intelligent
decisions.
• Identify trends and patterns for clients, transactions and
systems.
• Identify the log fields in a simple way.

Cost
• Free for up to 500 MB/day of collection.
• To collect more logs, request a quote.

V. Loggly

Figure 8.5. Loggly home page

Source: loggly

Loggly is a cloud-based log management service that
makes the process of managing logs much easier. With
simple configuration processes and intuitive tools, Loggly
does not require deep expertise to configure.

Main characteristics
• Unlimited custom dashboards based on any type of
search.
• Customizable alerts with triggers or triggers.
• Adaptive interface with multiple views, pages and
workspaces.
• REST API to integrate with other applications.
• Searches and unlimited users.
• Automatic filters and event analysis.
• Parse logs from any source or device.

Cost

• Lite: free - 200 MB/day (centralized event logging,
searches and filters, persistent workspaces).
• Standard: from $49/month - up to 1 GB/day (built-in
alerts, customizable dashboards).
ArcSight ESM

Figure 8.6. Arcsight homepage

Source: Arcsight

ArcSight ESM is an enterprise log management solution for
identifying and prioritizing current and potential security
threats. ArcSight also provides ArcSight Logger, a tool for log
collection. HP ArcSight ESM offers multiple service levels,
with the ability to handle between 1,000 and 12,500 events
per second, providing options for small and large businesses,
as well as scalability for growing ones.

Main characteristics
• Analyze logs from multiple sources.
• Link events to detect threats and assign resources.
• Automatically collect logs from devices from more than
350 sources.
• Unify all information for full visibility.
• Compliance packages for PCI DSS, SOX and IT.
• 500 compliance reports.
• It encrypts the data traffic.
• 42TB of logs stored centrally.

AlienVault

Figure 8.7. AlientVault home page


Source: AlienVault

AlienVault, creator of OSSIM, is a leading provider of
unified security management, aiming to provide full
security visibility by centralizing events. AlienVault USM
provides a real-time monitoring tool by combining log
management with the security monitoring features needed
for PCI DSS compliance.
Main characteristics
• Dashboards and reports for PCI DSS (Payment Card
Industry Data Security Standard).
• Rapid deployment
• Automatic asset discovery.
• Log collection and analysis of its flow.
• Monitoring of service availability of assets.
• Vulnerability monitoring continuously.
• Detection of threats and monitoring of file integrity.
• Multiple security functions and consoles.
• Simple event management and reporting.

McAfee Enterprise Log Manager

Figure 8.8. McAfee Home Page

Source: McAfee

McAfee Enterprise Log Manager automates the
management and analysis of all types of logs, signing and
validating them to ensure authenticity and integrity. It has
dashboards and reports to comply with more than 240
standards. McAfee Enterprise Log Manager is an integrated
component of McAfee Enterprise Security Manager: Log
Manager stores the logs, while Enterprise Security Manager
analyzes and normalizes them.
Main characteristics
• Smart log management.
• Analyzes security records.
• Offers chain of custody and non-repudiation.
• Allows analyzing and searching for events.
• Differentiates logs stored for security from logs to
analyze.
• Local log storage or via a SAN.
• Up to 7.5 TB of storage.

GFI EventsManager

Figure 8.9. Home GFI EventsManager

Source: GFI EventsManager

GFI offers a complete data analysis platform for SIEM.

Main characteristics
• Supervision of events in real time.
• Periodic analysis of the relevant records for security.
• Real-time monitoring of security in compliance with
policies.
• Monitor availability, functionality and performance.
• Reduce downtime
• Three layers of data consolidation logs.
• Access to data through two-step authentication.
• Forensic investigations and compliance standards.

ManageEngine
EventLogAnalyzer

Figure 8.10. Main page EventlogAnalyzer

Source: EventlogAnalyzer

EventLogAnalyzer automates the log collection process
and analyzes events intelligently, ensuring compliance with
standards.

Main characteristics
• Collect, analyze, correlate, search and report stored
logs.
• Forensic analysis of logs.
• Alerts in real time.
• Storage of logs and reports.
• Intelligent log analysis ensuring the rules that must be
met.
• Instant generation of logs.
• Multiple types of reports: user activity, historical threats
and more.

Cost
• Free for up to 5 sources, with no traffic limit.
• It starts at $ 4,495.

Configuring Loggly
Loggly offers a 30-day trial, with 1 GB per day of collection
and 7 days of retention.
First steps

In this trial it also offers:

• Centralization of logs.
• Parameterization of searches and filters.
• Unlimited users.
• Up to three source groups.
• Alerts.
• Customized dashboards.
• Support.

Loggly
The first thing to do is register on the Loggly page. Once
logged in, the following window appears, indicating that we
have no sources configured:

Source setup
Click on Add some data and a window opens where we can
select the type of source that we want to send to the SIEM.
Keep in mind that Loggly offers a SIEM in the cloud; that is,
the logs will be sent to the Loggly central server.
Linux setup
Click on Linux to see how we can send the logs to Loggly:

Sending logs to Loggly


It shows us the command that we must run on our Linux
machine. The command downloads a bash script that
configures syslog to forward logs to the Loggly servers. As
arguments, the command takes the Loggly user and an
authentication token.
Once executed it will show the following messages:

Verification of logs
As we see we are already transmitting logs to Loggly.

Now we verify the logs in the Loggly SIEM:

Reception of logs
We click and see that logs are actually received:

Searching or configuring alerts


Third step: perform searches or configure alerts.

Display of all the events that Loggly collects


Now we click on Search, and we can see all the records that
are reaching the Loggly collector:

Sending all syslog messages


We can also see that all syslog messages are sent:
Linux authentications by SSH
Certain logs are also shown parsed, generating a group of
events called system, which correspond to Linux
authentications via SSH.
Source groups

Creation of source groups


Now we are going to create a group of sources to keep
them organized; to do so, we click on All sources and then on
Create Source Groups.

Add the host to the group and give it a name


We add the host we have collecting and we give the group
a name:

Configuring a source with Apache server


Now let's configure another source, an Apache server; to
do so, we go to Source Setup and select Server Side Apps:

Execute Apache server commands


First step, execute the commands:

Viewing the transmission to the Loggly collector

We execute the commands and immediately we will be
transmitting to the Loggly collector:

Dashboard and alert

Creation of a dashboard
We are going to create a dashboard; to do so, we click on
Dashboards and Add new Dashboard:

Add a chart to the dashboard

We add the name of the dashboard and then select the
type of chart we want to insert:

TOP Values of the Apache server


We can select the events from the different sources that
we want to show on the dashboard. In this case we have
used a table showing the top user agents sent to the Apache
server when a web page is requested.

Creation of alert
Now let's create an alert. The easiest way is to go to the
search area and enter all the conditions the alert must meet.
In our case, we will create an alert for when a robot from a
monitoring company scans our web server and leaves the
message "Internet-wide-scan-to-be-removed-from-
this-list-email-info-at-binaryedge.io" in the header field of
the HTTP protocol:

Save the search


We save the search with the name "Browsers", and Loggly
offers us the possibility of creating an alert from the search:

Alert configuration
Fill in all the fields of the alert, which will fire based on the
search we performed previously.

We establish the thresholds with the context conditions.
The alert will fire if this event occurs more than 10 times in a
5-minute interval.
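The threshold logic of such an alert can be sketched with a sliding window. This is an illustration of the "more than 10 times in 5 minutes" condition, not Loggly's internal implementation.

```python
from collections import deque

class RateAlert:
    """Fire when more than `threshold` matching events are seen within
    a sliding time window, like the alert configured above."""

    def __init__(self, threshold=10, window_seconds=300):
        self.threshold = threshold
        self.window = window_seconds
        self.times = deque()   # timestamps of recent matching events

    def observe(self, ts):
        """Record one matching event; return True if the alert fires."""
        self.times.append(ts)
        # evict events that have fallen out of the window
        while self.times and ts - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.threshold
```

Ten events stay silent; the eleventh event within the window trips the alert.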
Warning email setup
When the above conditions are met, we can notify by
email.
