Implementing InfoSphere
Guardium solutions
Whei-Jen Chen
Boaz Barkai
Joe M DiPietro
Vladislav Langman
Daniel Perlov
Roy Riah
Yosef Rozenblit
Abdiel Santos
ibm.com/redbooks
International Technical Support Organization
March 2014
SG24-8129-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page xi.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Acknowledgement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . xvi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
5.4.1 Policy types and policy rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.4.2 Logging granularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.4.3 Generating real-time alerts with policy . . . . . . . . . . . . . . . . . . . . . . 163
5.4.4 Extrusion rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.4.5 Database exceptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.4.6 Policy for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.4.7 Policy installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.5 Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.5.1 Reports versus queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.5.2 Domain, entities, and attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.5.3 Query conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.6 Compliance workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.7 Real-time and threshold alerting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.7.1 Alert generating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.7.2 Alerter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.8 Data level access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.8.1 S-TAP setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.8.2 Policy setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.8.3 Policy violation report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.9 Vulnerability assessment setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.10 Configuration audit system setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.10.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.10.2 High-level steps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.10.3 Installation and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.10.4 Reviewing results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.10.5 Next steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.11 Entitlement reporting setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.11.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.11.2 High-level steps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.11.3 Configuration steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.11.4 Review the entitlement data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.12 Database auto-discovery setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.12.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.12.2 High-level steps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.12.3 Configuration steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.12.4 Viewing the results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.12.5 Next steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.13 Sensitive data finder setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.13.1 Use cases and highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.13.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.13.3 High-level steps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.13.4 Configuration steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.13.5 Viewing classification process results . . . . . . . . . . . . . . . . . . . . . . 222
8.3.4 Centrally managed collector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
8.3.5 Centrally managed aggregator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
8.3.6 Dedicated central manager (no data aggregation) . . . . . . . . . . . . . 321
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your
local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not infringe
any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and
verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the materials
for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any
obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made on
development-level systems and there is no guarantee that these measurements will be the same on generally
available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual
results may vary. Users of this document should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them as
completely as possible, the examples include the names of individuals, companies, brands, and products. All of
these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is
entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any
form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs
conforming to the application programming interface for the operating platform for which the sample programs are
written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample
programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing
application programs conforming to IBM's application programming interfaces.
IBM® InfoSphere® Guardium® provides the simplest, most robust solution for
data security and data privacy by assuring the integrity of trusted information in
your data center. InfoSphere Guardium helps you reduce support costs by
automating the entire compliance auditing process across heterogeneous
environments. InfoSphere Guardium offers a flexible and scalable solution to
support varying customer architecture requirements. This IBM Redbooks®
publication provides a guide for deploying the Guardium solutions.
The guidance can help you successfully deploy and manage an IBM InfoSphere
Guardium system.
This book is intended for the system administrators and support staff who are
responsible for deploying or supporting an InfoSphere Guardium environment.
Authors
This book was produced by a team of specialists from around the world working
at the IBM Littleton Massachusetts Laboratory.
Abdiel Santos is a Senior Level 3 Support
Engineer for InfoSphere Guardium at the Littleton,
Massachusetts IBM Laboratory. He has worked
with the Guardium solution since 2006, handling
the most critical support issues and providing
customers with custom solutions to successfully
implement Guardium in their environments.
Acknowledgement
Thanks to the following people for their contributions to this project:
David Rozenblat
Nir Carmel
Louis Lam
Ron Ben-Natan
Amy Wong
Michael Murphy
William Pacino
IBM USA
Find out more about the residency program, browse the residency index, and
apply online at:
http://www.ibm.com/redbooks/residencies.html
The Sarbanes–Oxley Act of 2002 (SOX) was passed into law as a result of a
number of corporate scandals about a decade ago. This law changed the
financial reporting for public corporations and marked the beginning of stricter
regulatory oversight of corporate management. To hold the top management of a
company accountable, they must now individually certify the accuracy of
financial information. The law also requires external auditors to verify the
accuracy of certain financial information, such as the balance sheet.
Today, most companies report their financial activity electronically to be
compliant with SOX. If we translate what this regulation means for database
activity monitoring into one sentence, it reads something similar to the following:
“Monitor all changes to the financial database server to ensure that no
unauthorized transaction occurred to affect the financial results of the
company.”
This monitoring ensures that the transactions that are stored in the
database server are correct and accurate for SOX reporting, preserving their
financial integrity.
Figure 1-1 on page 3 outlines the basic DAM information that must be collected
for some of these regulations. The following terms are used in the table:
DDL: Data definition language defines the schema, or the container, of the database.
SQL statements, such as CREATE, ALTER, and DROP, are DDL commands.
DML: Data manipulation language operates on the contents of the database, that is,
the data that is stored inside it. SQL statements, such as INSERT, UPDATE, and
DELETE, are DML commands.
[Figure: Data security lifecycle (Define, Classify, Monitor, Enforce, Analyze, Harden, Audit, Access, Measure Results) set against business pressures such as data growth, acquisitions, cost, increased risk, increased protection needs, and the need to empower users with metrics]
Customers are constantly trying to balance these challenges with the following
ultimate goals:
Increasing the overall protection of information within the environment.
Reducing the cost for compliance and security within their business.
Empowering users with information so that they can make good decisions
that positively affect the business.
Staying away from negative publicity that can result from a data breach.
Enterprise architecture
Figure 1-5 on page 11 represents an enterprise architecture for monitoring
numerous databases across multiple data centers and continents. This
architecture example consists of many collector appliances and numerous S-TAP
agents that are installed on mainframe and distributed database servers across
data centers. The S-TAP agents are configured to capture and send the relevant
database activities to the Guardium collectors for analysis, parsing, and logging.
The collectors are configured to aggregate the activities that they monitor to the
respective aggregator appliances for central reporting. A dedicated Central
Manager appliance provides federated management capabilities, such as
Access Management, patching, and metadata repository.
Note: The Guardium architecture is scalable and flexible. Scaling the solution
to support more monitoring capacity for existing environments or more
environments can be achieved easily.
[Figure 1-5: Enterprise architecture example. In each data center, S-TAP agents (STAP 1 through STAP 901) on database servers (DB Server 1 through DB Server 901) send traffic to a bank of collectors (Collector 1 through Collector 10); in one data center, the S-TAPs reach the collectors through a load balancer (VIP).]
Note: For more information about the S-TAP configuration, see Chapter 3,
“Installation and configuration” on page 43.
[Figure: IBM InfoSphere Guardium integration points, connected through the S-TAP: SIEM (IBM QRadar, ArcSight, RSA enVision, and so on), SNMP dashboards (IBM Tivoli Netcool, HP OpenView, and so on), directory services (Active Directory, LDAP, TDS, and so on), change ticketing systems (IBM Tivoli Request Manager, Remedy, Peregrine, and so on), authentication (RSA SecurID, Radius, Kerberos, LDAP), vulnerability standards (CVE, STIG, CIS Benchmark), data classification and leak protection (credit card, social security, phone, custom, and so on), security management platforms (IBM QRadar, McAfee ePO), long-term storage (IBM Tivoli Storage Manager, IBM Netezza, EMC Centera, FTP, SCP, and so on), software deployment (IBM Tivoli Provisioning Manager, RPM, native distributions), and application servers (IBM WebSphere, IBM Cognos, Oracle EBS, SAP, Siebel, PeopleSoft, and so on)]
For more information about InfoSphere Guardium integration with IBM products,
see Chapter 11, “Integration with other IBM products” on page 391.
This chapter provides insight and experience that is based on many successful
implementations. The implementation process is described in greater detail in
later chapters of this book.
The database activity monitoring (DAM) deployment type focuses on monitoring,
reporting, and real-time alerting of all access and extrusion activities that are
observed. The advanced database activity monitoring deployment adds
security-driven data level access control (DLAC) components to the mix (that is,
blocking and masking functionality).
It is important to review and understand these components before you start the
implementation. In the following sections of this chapter, we describe
implementation considerations that are relevant to these deployment types.
Starting with DAM deployment first should also apply to customers that are
interested in DAM and Vulnerability Assessment deployments, unless there is a
compelling need to address the VA deployment first or in parallel. DAM takes
longer to deploy, but it addresses the installation of the appliances and agents
that are also required by the VA. After the installed solution components are in
place, you can proceed in parallel with DAM and VA, following the flow in each of
the deployments that calls for the basic components first, followed by the
advanced product components.
Figure 2-2 shows the main product functionality components of the solution.
[Figure 2-2: Main product functionality components — Database Activity Monitoring (DAM), Data Level Access Control (DLAC), Discovery and Classification, Entitlement Reporting, Vulnerability Assessment, Change Audit System, Advanced Workflow Automation, and Enterprise Integrator]
When you are sizing a Guardium solution, you must make experience-based
assumptions that translate into a deployment topology and sizing of the
solution. In this section, we review the more important aspects of these assumptions.
Keep in mind that before you implement the solution, there is no practical method of
knowing how much information you will be processing. All applications are different,
provide different services to their users, and differ in size and scope. You know
the exact volumes only after you implement the solution and have visibility into
the volumes of information that are generated by your applications and users on
the databases that you configure for monitoring.
The following audit levels are important to discuss when you are planning an
implementation. Audit level decisions should be taken into account when you are
forming your monitoring policy:
Privileged user audit
Audit only specific users and ignore all other connections; the audited users
should be a finite list of non-application users (meaning, real people and not
application traffic). In this mode, S-TAP filters many of the sessions and only a
small subset of the overall traffic is sent to the Guardium appliance (filtering is
done on the session level by S-TAP).
The following sections provide more insight into how to calculate the
number of collectors that are required to monitor your environment, taking the
audit levels into consideration.
Where there is not enough information to assess the collectors needed based on
these factors, consider a ratio of ten (10) database servers per collector
appliance as a good starting point. Optimization based on actual traffic can be
done at a later time.
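As a rough illustration (the server count here is a made-up number and this is a starting estimate only, not a sizing formula), the 10:1 guideline translates into a simple calculation:

# Starting-point estimate, assuming 95 database servers to monitor
# and the 10 servers-per-collector guideline from this section.
db_servers=95
echo $(( (db_servers + 9) / 10 ))   # ceiling division; prints 10 collectors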
As you might conclude, it is difficult to factor in all the considerations and make
an informed decision before you start your implementation because some of the
factors are unknown at this stage of the implementation.
When you are setting up final monitoring details, learning activity patterns, and
seeing volumes being logged, you can adjust the retention periods or the
collector-to-aggregator ratio. This process is expected in every implementation
and must be considered when you are planning an implementation.
Note: Though calculating the exact size required during planning and initial
implementation is not always possible, you can adjust sizing factors later as
follows:
Increase the aggregator's database size or disk allocation (only possible
when building the appliance)
Add aggregators to an already existing architecture
Change retention periods on aggregators anytime
For more information about Aggregator sizing, see 3.4.3, “Aggregator sizing” on
page 52.
Note: These are only a few examples of why you might need more than one
Central Manager. There are other methods to meet the separation-of-duty
requirements. In a large federated environment, to provide separate access to
information by users, you can use the ready-to-use data level control
functionality, which controls what each user of the solution is allowed to see
based on user and data profiling.
Note: The ability of the S-TAP agent to relay (communicate) the activity it
captures to the Collector depends on the performance of the network.
When the network is not performing well or the network bandwidth is not
sufficient to sustain the load, this real-time communication data is lost.
Note: There are more port configuration items to consider than the
listed items when you are determining which port configuration makes
most sense for your environment. Therefore, we recommend that you
discuss this issue with your network administrators. The important point
here is to plan for port configuration up front so that you can prepare for
things, such as cabling (hardware appliances), IP addresses, and DNS
entry configurations.
Note: Do not assume that you need encryption because it sounds better.
Instead, assess what this means to you and whether it makes sense in
your environment. If you determine that you want to proceed by using this
option, Guardium fully supports it, and there are configuration
ramifications that you must plan for.
http://pic.dhe.ibm.com/infocenter/igsec/v1/index.jsp
[Figure 2-3: S-TAP failover configuration options — Basic (each S-TAP sends all traffic to a single collector), Failover (each S-TAP has a primary collector and one or more secondary collectors), and Grid (S-TAPs send traffic to a group of collectors through a load balancer VIP)]
The following failover configuration options for S-TAP are shown in Figure 2-3:
Basic
This basic configuration assumes that S-TAP always sends all of the traffic to
one collector and there is no failover configured. Configuring the S-TAP with
only one primary collector as recipient of traffic is not suggested unless there
is only one collector available and accessible.
Failover
This configuration option assumes that S-TAP sends traffic to one collector
(primary) and fails over to one or more collectors (secondary, tertiary, and so on)
as needed. This is the most common method that is used today.
In this configuration, the S-TAP agents are configured with at least a primary
and one secondary collector IP. If the S-TAP agent cannot send the traffic to
one collector for any reason, the S-TAP agent automatically switches to the
other.
A Guardium solution consists of many applications and options that all work
together. To ensure that you do not lose focus on the initial goal of setting up
a working Guardium solution, we advise you to first focus on configuring the options
that allow you to reach your basic monitoring goals. Other functionality
components that are not aligned with the basic monitoring should be phased in at a
later stage of the implementation. A good example is the data level access control
functionality. This functionality takes the monitoring and protection of your
environment to another level. Data level access control allows you to block
suspected activities that are inconsistent with your policy rules from being performed
on your database server. This means that configuring database blocking should
be done after activity monitoring is in place and you can determine what must be
blocked. This type of functionality should be labeled as additional functionality
and planned for a later phase of the implementation. When you are planning a
Guardium implementation, you must determine when each functionality
component should be implemented.
[Figure: Deployment phases — Test Cycle, Production Roll-Out, and Steady State]
As shown in Figure 2-5 on page 37, the activities in both tracks can run in parallel
with an understanding that you cannot deploy and test any of the monitoring
setup plan before the appliances and database agents are installed. Therefore,
start the implementation with the installation and configuration track first.
Determine the schedule of events and activities of this track first and follow up
with the monitoring setup and verification track immediately afterward, taking into
account the scheduling of the installation in your planning. (See the activities
chart that is shown in Figure 2-5 on page 37 as a reference.) In the following
section, we describe the roles and responsibilities of the personnel that you must
assign to these activities.
Note: The Implementation schedule that is shown in Figure 2-5 also shows
the following recommended education activities:
Technical training as one of the installation and configuration activities.
This training provides your technical resources with a comprehensive education
in the product functionality.
Monitoring and report building workshop as one of the monitoring and
verification activities. This workshop gives the team that works on the
monitoring setup the basic knowledge of the monitoring applications.
These sessions are facilitated by IBM or an IBM Business Partner and have the following
results:
Better understanding by the customer of their tasks and responsibilities
Deployment plan
Project plan
The following participants and their roles are included in the planning phase:
Information security and data security compliance
DBA and system administrators
Network administration
Designated Guardium administrators
IBM or IBM Partner to facilitate the workshop
When you are planning a deployment, always assume that you need more
capacity than you think for two reasons: environment growth and activity growth.
Because you do not know the monitoring data volumes until you determine the
granularity of monitoring and have the visibility of the traffic volumes, plan more
appliance capacity than you estimate and try to keep appliances at usage rates
between 50% and 60%.
After the solution is deployed, the enterprise reports that are available on the
central manager provide the metrics that you can use to assess collector
performance, capacity, and available repository space. This is the information that
you need to assess whether you have the capacity to add database monitoring to
a collector or must plan expansion by adding collectors.
The amount of traffic that is logged and sent from the collectors to the
aggregators and the retention needs of the aggregators determine whether you
need more aggregators.
Note: Because of overlap, activities for DAM and VA are listed. However,
these features are often installed in separate phases; for example, first DAM,
then later VA, or vice versa.
These sessions are facilitated by IBM or an IBM Business Partner and garner the
following results:
Better understanding by the customer of their tasks and responsibilities
Deployment plan
Project plan
The following participants and their roles are part of the planning phase:
Information security and data security compliance
DBA and system administrators
Network administration
Designated Guardium administrators
IBM or IBM Business Partner to facilitate the workshop
[Figure: Appliance delivery options — a physical appliance, a software appliance on customer-provided hardware, and a virtual image on a customer-provided virtual host]
For more information about the technical requirements of the installation of the
IBM InfoSphere Guardium V9.1 Software Appliance, see IBM InfoSphere
Guardium V9.1 Software Appliance Technical Requirements, 7039720. This
document also includes a list of certified hardware platforms and is available at
this website:
http://www.ibm.com/support/docview.wss?&uid=swg27039720
Collector
The collector is the workhorse appliance in the Guardium DAM solution and is
used for real-time capture and analysis of the database activity.
Note: The collector must be network-close (that is, minimal hops) and have
LAN speed connectivity to the S-TAPs to reduce network latency. If the
collector is built as a software appliance, it should have low latency disk I/O.
Aggregator
The aggregator appliance is used to offload reporting activity from the collectors,
and to provide consolidated reporting from multiple collectors. The aggregator
does not collect data from S-TAPs. Instead, it receives the data from the
collectors in a nightly batch file.
Although not typical, an aggregator can receive data from other aggregators to
provide enterprise-wide reporting. This is referred to as second-level aggregation.
Central manager
The Central Manager (CM) is specialized functionality that is enabled on an
aggregator appliance. The CM function is used to manage and control multiple
Guardium appliances, which is referred to as a managed environment. This
function provides patch installation, software updates, and the configuration of
queries, reports, groups, users, policies, and so on.
Note: Although a CM can be placed over the WAN, it should have less than
200 ms network round-trip latency to its managed units.
Although you can add or remove fields from the template, the following fields are
key:
Business application such as SAP
Location (for example, Datacenter_1)
DB Server host name
DB Server IP
InCluster
OS Type (for example, AIX®)
OS Version
CPU Cores
VU per Core*
PVU Total (that is, CPU Cores x VU per Core)
DB Type (for example, DB2)
DB version
DB Instance name
DB listener port
DB install dir
Primary Collector name
Primary Collector IP
Collector Secondary name
Collector Secondary IP
Aggregator
Value Unit (VU) is an IBM metric that is used to gauge the capacity of the
database server. To determine the VUs per core that is based on the database
server make and model, see the processor value unit listing that is available at
this website:
http://www.ibm.com/software/lotus/passportadvantage/pvu_licensing_for_customers.html
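As a worked example with hypothetical values (look up the actual VUs per core for your processor model at the website above), the PVU Total field is simply the product of the core count and the per-core rating:

# Hypothetical server: 8 CPU cores rated at 70 VUs per core
echo $(( 8 * 70 ))   # prints 560; record 560 as the PVU Total for this server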
Tip: Some users find it helpful to add a separate worksheet to the database
inventory workbook that contains the list and details for the appliances.
Assign collectors
The next step is to assign a collector to each database server (or S-TAP). This
assignment considers the following factors:
Collector should be network-close to the S-TAP, specifically:
– LAN-speed connectivity between S-TAP and collector
– Minimal network hops to minimize network latency
[Table fragment: monitoring-capacity guideline by audit level for distributed versus mainframe collectors; for example, Comprehensive audit level — 110 (collector) / 55 (mainframe collector)]
Note: The monitoring capacity of a virtual collector often is less than that for a
physical collector because of the shared-everything architecture of the VM
host.
Note: The aggregator should have a minimum of 40% free disk space to
provide room to create backup files, which are removed when they are
successfully copied off to a designated storage server.
Managed configuration
If a CM is enabled, the deployment is referred to as a managed environment
(also known as a federated environment). To complete the managed environment,
the other appliances (collectors and aggregators) are registered with the CM.
Before a unit (appliance) is registered, a common shared secret (password) must
be stored on the CM and the unit to be registered.
All managed (registered) units inherit the license key from the CM. Users, roles,
and other definitions are synchronized from the CM across its managed units.
Stand-alone configuration
The simplest deployment configuration is a single collector that is receiving data
from several S-TAPs. This configuration is referred to as a stand-alone because a
CM is not used. It is a typical configuration for a test environment.
Hybrid configuration
A hybrid configuration is a combination of managed appliances with stand-alone
collectors. These stand-alone collectors can, in turn, upload their data nightly to
managed aggregators.
3.5.2 Contingency
The appliances can be configured to minimize data and functionality loss through
failover or redundancy. They also can be used for scheduled maintenance.
The failover plan is in addition to the recommended periodic backups and daily
archiving.
Collector
The following contingency plans can be used for collectors:
S-TAP failover
An S-TAP can be configured to fail over to (that is, start communicating with) a
secondary or tertiary collector if the primary collector is unreachable. When
the primary collector is reachable again, the S-TAP reverts to it (see the
configuration sketch after this list).
The S-TAP also uses a limited memory buffer (spill file on the z/OS) to
temporarily buffer data that is in transit to the collector.
S-TAP Mirroring
If a collector fails, the data since the last daily export or archive is lost. To
avoid any loss, the S-TAP can be configured to mirror its transmission to two
collectors, so each collector receives the same copy of the data.
Aggregator redundancy
Each collector can be configured with a secondary or standby aggregator to
which it automatically sends its daily files if the primary aggregator is
unreachable.
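The following fragment is an illustrative sketch only of how an S-TAP primary and failover collector list can look in the S-TAP configuration file (guard_tap.ini). The section and parameter names shown are assumptions that vary by platform and version, and the IP addresses are placeholders; take the authoritative syntax from the "S-TAP" chapter of the Help Book Guardium, or manage these values through GIM or the S-TAP Control GUI instead of editing the file directly. In the sketch, tap_ip is the database server, the first SQLGUARD section is the primary collector, and the second is the secondary (failover) collector:

[TAP]
tap_ip=192.0.2.50
[SQLGUARD]
sqlguard_ip=192.0.2.101
primary=1
[SQLGUARD]
sqlguard_ip=192.0.2.102
primary=2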
CM redundancy
A failed or unavailable CM affects interactive use of the Guardium application;
users cannot use most of the functions from a managed unit, such as reporting.
Also, no application definitions or user changes can be made until the CM is
available.
Making the backup CM the primary CM is a manual process that involves the use
of command-line interface (CLI) commands. After the switch occurs, the
managed nodes automatically detect the switch and reconfigure themselves to
communicate with the new CM.
3.5.3 Networking
By using the physical appliance as a reference, each appliance has at least four
network interfaces. A typical configuration uses only the first port (eth0) on
the network. However, it is possible to have the following supplemental
configurations (eth0 is always required):
IP bonding (teaming): IP bond eth0 and eth3 for network redundancy
Management (secondary) interface: Configure eth3 as a management
interface; for example:
– In the case of the collector, S-TAP traffic communicates over eth0, but user
and CM communication occur over eth3.
– Appliance backup and archive data can be routed over eth3 that is
connected to a data network.
– There is an option to add a static route to facilitate routing through eth3.
Firewalls
Depending on the Guardium feature that is used, specific network ports are
required to be open across a firewall. For more information about ports, protocol,
and directionality, see the Guardium Ports Requirement document.
The database inventory template is a useful aid for tracking these decisions.
S-TAPs-to-collectors
The following contingency plan is used for the S-TAPs-to-collectors:
S-TAP Failover: Small environment
For a few co-located collectors, each collector can be designated as the
failover for another collector. For this scheme to work, the following
prerequisites must be met:
– Each collector must have approximately 50% capacity head-room; that is,
not maxed to its PVU limit. Otherwise, the failover can result in the
secondary collector failing because of overload.
– Each collector should be co-located or network-close.
– Plan to alert-on and correct the failover condition as soon as possible to
avoid the failover collector running in a degraded mode for an extended
period.
S-TAP Failover: Medium and large environment
Although the small environment approach can be used in a medium
environment or one where there are a few collectors in different data centers,
it can become too complex to plan and track for larger implementations.
Instead, use one standby per group of X collectors, where X can be four or
higher.
Note: If more than one or two collectors from the group fail, the standby
might fail because of overload.
It should also consist of one of each type of database server (or a representative
sample) to be monitored in production, with a similar OS and DBMS configuration,
network configuration (for example, intranet and DMZ), and clustering or zoned
configuration.
3.5.6 Licensing
Guardium product features or entitlements are enabled through specific product
keys or licenses that are installed through the application interface. The following
types of keys are available:
Base key (also known as reset key)
Append key, which is appended to base
A base key with at least one append key must be installed to enable Guardium
features.
Append key
Five append keys are available; each requires that a base key is already installed.
Multiple append keys can be applied; for example, Central Management plus
DAM Standard plus VA Advanced. The append keys are available for the
following features:
Central Management
DAM Standard
DAM Advanced (also includes all features in DAM Standard)
VA Standard
VA Advanced (also includes all features in VA Standard)
For virtual or software appliances, use the following instructions in the IBM
InfoSphere Guardium Software Appliance Installation Guide:
Configure the virtual guests (or hardware servers)
Install the Guardium application
http://pic.dhe.ibm.com/infocenter/igsec/v1/index.jsp?topic=%2Fcom.ibm.guardium.software.app.install.doc%2FtopicsV90%2Fsoftware_appliance_installation_guide.html
Note: If you are building multiple VMs, you can clone the first appliance and
use it to build the remaining appliances of the same mode.
At this stage, the appliances are powered, cabled to the network, and ready to be
configured.
The appliance configuration steps are grouped in the following three phases to
facilitate the scheduling of large deployments and to enable deploying agents in
parallel with the appliance configuration:
Basic configuration
Advanced configuration I
Advanced configuration II
Preparation
Before you perform the configuration process, collect the following
information for each appliance (a sample CLI sketch follows this list):
CLI default password (see the note below)
CLI new password (on first login, you must change the default password)
Appliance primary IP address
Appliance primary network mask
Note: The default CLI password for IBM-provided hardware appliances is obtained from IBM Support or IBM Sales. For VM appliances, the CLI password is set by the person who builds the VM image.
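After this information is collected, the basic network settings are applied from the CLI. The following lines are a sketch only, with placeholder values; the command names shown are typical Guardium CLI commands but should be verified against the appliance installation guide (or with the comm command that is described below) for your version:

store network interface ip 192.0.2.15
store network interface mask 255.255.255.0
store network routes defaultroute 192.0.2.1
store network resolver 1 192.0.2.53
store system hostname guardcol01
store system domain example.com
restart network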
Note: Pre-configured hardware that is shipped from IBM has the license
key preinstalled.
Tip: To see a list of all available CLI commands, enter comm at the CLI
prompt.
You also can enter comm <string> to see a list of all commands that contain
that string.
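For example, the first of the following invocations lists all commands, and the second lists only the commands whose names contain the string network (such as the network configuration commands sketched above):

comm
comm network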
The appliance’s web-based graphical user interface (GUI) also can now be
accessed by using a web browser and the admin account (see the note below), as
shown in the following example:
https://<host_name.domain_name-or-IP-address>:8443
Tip: The CLI is accessed by using only the cli or guardcli1 through
guardcli5 accounts.
Note: Similar to the default CLI password, the admin and accessmgr default passwords for IBM-provided hardware appliances are obtained from IBM Support or IBM Sales. For VM appliances, they are set by the person who builds the VM image. On first login, you are prompted to change the default password.
2. Set a common shared secret on the CM and the managed units, as shown in
the following example:
store system shared secret <created_shared_secret>
3. View and change the time zone, if needed (do not change the time zone and
host name in the same CLI session).
Run the following command to display the current time zone:
show system clock timezone
If the time zone is incorrect, display a list of valid time zones by running the
following command:
store system clock timezone list
Choose the appropriate time zone from the list and set it by using the
following command:
store system clock timezone <selected time zone>
Note: When a new time zone is set up, internal services restart and any
configured data monitoring is disabled during this restart.
Note: Patches are applied first to the Central Manager, followed by the
aggregators, and then the collectors.
For more information, see the “How to install patches” section of the How-to
Guide Overview chapter of the Help Book Guardium.
Tip: During the patch installation by using the CLI, you can choose to skip the
pre-patch backup.
This section describes the installation of the GIM and S-TAP agents on the Linux,
UNIX, and Windows platforms.
As of this writing, the GIM client is available only for the Linux, UNIX, and
Windows platforms.
GIM server
The GIM server is installed as part of the Guardium application on the appliance
and provides the user interface (UI). By using the GIM UI, the user can install,
uninstall, and upgrade Guardium bundles and modules and provide feedback
about database servers, installed modules, and statuses.
An administration user can interact with GIM through the GIM CLI commands or
the GUI.
GIM client
The GIM client application must be installed manually for the first time on the
database server machines. The GIM client registers with the GIM server, starts
requests to check for software updates, installs the new software, updates
module parameters, and uninstalls modules.
Tip: Create a simple shell script to minimize typographical errors and for
reuse on other servers.
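For example, a wrapper script might look like the following sketch. The installer file name and the option names shown are placeholders only (they differ by GIM version and platform); substitute the actual options that are documented in the "GIM Installation" section of the Guardium Installation Manager chapter of the Help Book Guardium:

#!/bin/sh
# Sketch of a GIM client installation wrapper -- all names below are placeholders.
INSTALLER=./guard-GIM-<version>-<platform>.gim.sh   # installer file from the GIM bundle
GIM_DIR=/usr/local/guardium                         # installation directory
DB_SERVER_IP=<database_server_ip>                   # IP of this database server
APPLIANCE_IP=<appliance_ip>                         # collector or CM this GIM client reports to

$INSTALLER -- --dir "$GIM_DIR" --tapip "$DB_SERVER_IP" --sqlguardip "$APPLIANCE_IP"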
4. Run the following command to validate that the GIM client is running:
ps -ef | grep gim
You should see two processes, as shown in Figure 3-8 on page 67.
5. After a brief wait, the client registers with the Guardium appliance.
5. Accept the default complete setup-option (or choose the Custom option to
change the default installation directory).
6. Choose which Perl distribution to use. For example, select Yes to use the Perl
that is bundled with the GIM client, as shown in Figure 3-10 on page 68.
Verifying that the GIM client registered with its GIM server
To verify that the GIM client registered with its GIM server, log in as the admin user to
the GUI of the Guardium appliance to which the GIM client is reporting (that is,
the stand-alone collector or the Central Manager) and click Administration
Console tab → Module Installation → Process Monitoring.
Note: A UNIX or Linux GIM client also has its supervisor process that is listed,
as shown in Figure 3-11 on page 69.
3.7.2 S-TAP
The Guardium S-TAP is a lightweight software agent that is installed on a
database server. It monitors database traffic in real time and forwards that
information to a Guardium collector appliance.
In this section, we describe the use of the GIM to deploy the S-TAP. Although the
S-TAP can be installed directly on the database server, it is recommended to use
the GIM.
Figure 3-12 Browse and Upload GIM bundle to the GIM server
c. Import the files by clicking the check mark icon next to each file name, as
shown in Figure 3-13.
4. Select the clients (database servers) to configure and provide the following
parameters (as shown in Figure 3-17 on page 73):
– KTAP_LIVE_UPDATE: Entering y enables the KTAP update without
requiring a server reboot.
– STAP_SQLGUARD_IP: The IP address or FQDN of the primary collector
to which this STAP communicates.
– STAP_TAP_IP: The IP address or FQDN of the database server or node
on which the STAP is being installed.
– (Optional) KTAP_ALLOW_MODULE_COMBOS: Entering Y (default is N)
applies to Linux only and often is recommended.
Tips: The parameters are listed in alphabetical order. Use the horizontal
scroll bar to browse the parameter list. By pointing to fields, you can see
the permitted values.
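For example, a plausible set of values for a Linux database server (the host names are made up for illustration) is:

KTAP_LIVE_UPDATE=y
STAP_SQLGUARD_IP=collector1.example.com
STAP_TAP_IP=dbsrv1.example.com
KTAP_ALLOW_MODULE_COMBOS=Y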
Figure 3-18 Enter now to schedule the S-TAP installation to start now
7. Verify the S-TAP module installation status by completing the following steps:
a. Click the i icon that is next to the client name or browse to Administration
Console → Module Installation → Setup By Client and click the i icon
that is next to the client.
b. In the status window, verify that the status is “Pending-Install”, “PI”
(Pending Install), or “Installed”.
c. Click Refresh to refresh the status.
d. Wait until the status changes to “Installed”. You can also check the status
of the GIM Events report by browsing to Guardium Monitor tab → GIM
Events List.
For DB2 and Informix® on Linux, install and configure the ATAP to allow
monitoring of local (shared memory) connections. For more information,
see the “S-TAP” chapter of the Help Book Guardium.
In both of these instances, the KTAP and S-TAP do not monitor database
activity until an Inspection Engine is configured.
Note: Until the instances are restarted, only local connections (that is,
through named pipes or shared memory) are monitored (with inspection
engines configured).
8. On the database server, the following three new services should be started,
as shown in Figure 3-20 on page 76:
– GUARDIUM Database Monitor
– GUARDIUM_STAP
– GUARDIUM DC Connector (see the note about the Guardium DC service later in this section)
Guardium provides the following methods that can be used to script various
functions:
Silent or non-interactive installers:
– Input (arguments) is passed on the command line (default is interactive
mode).
– Use the silent mode to incorporate the agents installation into a software
distribution package, such as SMS or SCCM:
• GIM Client command-line options: For more information, see the “GIM
Installation” section of the Guardium Installation Manager chapter of
the Help Book Guardium.
• S-TAP command-line options (only if the GIM is not used): For more
information, see the UNIX and Windows S-TAP sections of the S-TAP
chapter of the Help Book Guardium.
GuardAPI:
– Provides access to Guardium functionality by using the grdapi()
functions, which are started by using the CLI. For more information, see
the “Appendices” chapter of the Help Book Guardium for a complete list of
the grdapi() functions.
– To start multiple grdapi() commands, for example:
• A user prepares several grdapi() calls, then pastes these prepared
statements into the CLI session.
• Start a script that contains the grdapi() statements by using an SSH
client to connect to the CLI and run the statements, as shown in the
following example:
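(The example that follows is a reconstructed sketch: the host name and the file and function names are placeholders, and the complete list of grdapi functions is in the "Appendices" chapter of the Help Book Guardium.)

# grdapi_commands.txt contains one prepared grdapi statement per line,
# for example: grdapi <function_name> parameter1=value1 parameter2=value2
ssh cli@collector1.example.com < grdapi_commands.txt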
Note: The Guardium DC service collects updates of user accounts (SIDs and user names) from the primary domain controller and then signals the changes to GUARDIUM_STAP to update the S-TAP internal SID and UserName map.
Complete the following steps to configure the S-TAP and its inspection engine:
1. Log in as admin to the collector to which the recently installed S-TAP is
reporting by using the following URL:
https://<collector_ip_address-or-collector-fqdn>:8443/
2. Browse to Administration Console → Local Taps → S-TAP Control.
3. Check whether the S-TAPs are listed and have a green status icon.
Otherwise, click Refresh.
4. Click the Edit icon for the S-TAP (as shown in Figure 3-21 on page 78) and
modify the following sections:
– Details:
• (Optional) alternative IPs: Add any virtual IPs that are used to connect
to the database on the host. For more information, see the Help Book
Guardium.
• (Windows only) Shared Mem. Monitor: If the database is a 32-bit
Microsoft SQL Server version, verify that the MSSQL option is
checked.
– Guardium Hosts: (Optional) Add the IP address or FQDN of the secondary
(failover) collector for this S-TAP. Click Add.
Figure 3-24 shows the managed unit listing on the CM, including the
“Distribute...” functions. In a large managed environment, you can group
managed units on the Central Manager to help with patch distribution or
configuration. A managed unit or appliance can belong to more than one group.
4. Select the new group All_Aggregators and click Update groups to add the
selected aggregators to this new group, as shown in Figure 3-27.
3.9.1 Configuration
In this section, we describe the following common configuration options:
Alerter
IP-to-hostname aliasing
Global profile
Log in as admin to the GUI on the CM or stand-alone collector and browse to the
Administration Console tab → Configuration section and configure alerter,
IP-to-hostname aliasing, and global profile.
Global profile
This is an optional configuration step. The global profile section contains
configuration settings for various features, but only the following settings are
described:
Check Use aliases in reports.... to show aliases; for example, IP-hostname
aliases, by default.
Add a suitable PDF footer text; for example, “Copyright <your company
name>”, which is printed on the footer of the output PDF reports.
(Optional) Clear the Disable accordion menus option.
Add a suitable login message for your organization; for example, a message
that notifies the user that unauthorized access of this system is prohibited.
Select Show login message to enable the previously entered login message.
Enable the Concurrent login from... option to prevent concurrent logins by
the same account from different IP addresses.
(Optional) Use the Upload logo image option to upload a file that contains
your corporate logo that is to be displayed in the upper right corner of the
Guardium window (which replaces the default IBM logo).
Figure 3-30 on page 86 shows examples of the options for configuration items
that include associated schedules.
Figure 3-31 on page 87 shows the Distribute Configuration menu with the
configuration and schedule options to distribute selected configurations.
Note: The distribute configuration option distributes a copy of the settings from
the CM for the items that are selected to the selected managed units.
The following interfaces (protocols) are available for transferring these files from
the appliances:
FTP
Secure Copy (SCP)
Tivoli Storage Manager
EMC/Centera
The appliance often has the most recent number of days of data online, where
the number of days is determined by your configuration. As a suggested starting
point (which can be adjusted later) set the number of days to one of the following
values:
Less than or equal to 15 days on a managed collector
30 - 60 days on an aggregator
The archive, followed by the purge, is run daily on the appliance and the archived
files include the following characteristics:
Compressed and encrypted before they are moved to the external storage
Available for use in recovering the appliance
Can be retained offline, depending on your corporate offline data retention
policy, and later restored for forensic investigation
Aggregator configuration
Complete the following steps to configure an aggregator to archive data by using
SCP or FTP:
1. Log in as admin to the GUI on the CM (or stand-alone collector) and browse
to Administration Console tab → Data Management → Data Archive.
Collector configuration
The steps to configure daily archives on the collector are similar to those for the
aggregator, except that the archive is scheduled to occur after the data export.
Therefore, schedule the archive to start at 3:00 a.m.
The backup is written to a single file that is compressed and encrypted and sent
to the specified destination by using the transfer method that is configured for
backups on the appliance.
Note: Aggregation does not summarize or roll-up the data. Instead, it merges
the records.
The data is transferred through daily batch files by using SCP. A daily data export
is scheduled on the source and a corresponding data import is scheduled on the
aggregator. There is an option to use a secondary aggregator in case the
primary aggregator is unreachable.
Note: The export and import schedules are manually synchronized. That is, the import is
scheduled to occur after the export files from all sources are expected to be present
on the aggregator.
Aggregator configuration
Complete the following steps to configure an aggregator to import and aggregate
data:
1. Log in as admin to the GUI on the aggregator and browse to Administration
Console tab → Data Management → Data Import.
Select Import data from and click Apply.
Click Modify Schedule... and schedule the import to occur daily at 2:30 a.m.
EST.
This schedule allows up to two hours for the export files from the collectors to
be ready on the aggregator.
Tip: The import start time can be adjusted later in production to more closely
match the actual wait time. Click Guardium Monitor → Aggregation/Archive
Log to view the actual duration over a period of several days.
Note: Although you can selectively distribute the configuration from the CM,
you must schedule the data export individually on each collector by using the
GUI.
1. Log in as admin to the GUI on the collector and from the Administration
Console tab, click Data Management → Data Export.
2. Select Export and complete the following steps:
a. Enter 1 in the Archive data older... field and 2 in the Ignore data older...
field to collect the previous day’s data only.
b. Verify that the Export Values option is selected (it is selected by default).
c. For Host, enter the fully qualified host name or IP address of the
aggregator.
d. (Optional) For Secondary Host, enter the fully qualified host name or IP
address of the secondary aggregator.
3. The purge option already should reflect the settings that were selected during
the archive configuration. If they are not set, complete the following steps:
a. Verify that the Purge option is selected (it is selected by default).
b. Update the Purge data older... field to the suggested initial values.
c. Clear the Allow purge without... option (by default, this option is selected to
allow purging until export or archive is configured).
4. Click Apply. An attempt is made to send a test file to the specified aggregator,
and, if successful, the configuration is saved. Also, the Scheduling
configuration is enabled.
5. For scheduling, click Modify Schedule and schedule the data export to occur
daily at 12:30 a.m.
This setting allows up to two hours for the export files from the collector to be
prepared and sent to the aggregator before the aggregator starts its data
import.
Note: Avoid scheduling any jobs between midnight and 12:30 a.m. to allow
the appliance to complete its start-of-day processing.
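Putting the suggested times from this section together, the daily data management timeline on each appliance looks similar to the following schedule (a starting point only; adjust the times after you observe the actual durations):

12:00 - 12:30 a.m.  Appliance start-of-day processing (schedule no jobs)
12:30 a.m.          Collector: data export to the aggregator
2:30 a.m.           Aggregator: data import from its collectors
3:00 a.m.           Collector: data archive, followed by the purge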
3.9.8 Self-monitoring
By using Guardium, you can set appliances to monitor themselves
(self-monitoring) to ensure that the Guardium solution is available, functioning
properly, and to alert users of problems.
The following approaches for self-monitoring are available and supplement each
other:
Threshold alerts (which are also known as correlation alerts):
– Use queries to check key measures (for example, processor usage)
against a specified threshold and alert if the threshold is breached.
– The alert can be emailed, sent to syslog for forwarding, sent as an SNMP
trap, or sent to a custom alerting, user-provided Java class.
– Are automatically distributed to all managed units from the CM where they
are created and activated.
– Guardium provides several predefined threshold alerts and the user can
also create their own threshold alerts. The predefined alerts must be
configured and activated, as needed.
SNMP polling:
– Poll the appliance by using a combination of standard and custom metrics
that use the object identifiers (OIDs) that are published by the
UCD-SNMP-MIB and HOST-RESOURCES-MIB.
– Each appliance must be polled individually.
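As an illustrative sketch (the community string and host name are placeholders, and the metric shown is one example of a standard UCD-SNMP-MIB object; the OIDs that the appliance publishes are documented in the Help Book Guardium), an external monitoring tool can poll an appliance with a standard SNMP client:

snmpget -v 2c -c <community_string> collector1.example.com UCD-SNMP-MIB::ssCpuIdle.0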
In this section, we describe configuring threshold alerts with email delivery. For
more information about configuring SNMP polling, see the Help Book Guardium.
6. After all of the receivers are added, click Apply to save the changes, including
the activation of this alert.
7. When an alert is active, it is listed in the Anomaly Detection. To view the alert,
click Administration Console → Anomaly Detection.
8. Repeat these steps for each of the other predefined alerts that are described
in “Recommended self-monitoring threshold alerts” on page 96. For the
Scheduled Jobs Exception alert, change the run frequency to 60 minutes.
Figure 3-35 on page 99 shows the user accounts with email addresses to add as
email receivers.
Figure 3-36 shows the list of active threshold alerts that are managed by the
Anomaly Detection process.
Figure 3-36 List of active threshold alerts managed by the Anomaly Detection process
2. Create and activate an alert for this query by clicking New on the Tools →
Config & Control → Alert Builder.
Figure 3-38 on page 101 shows the custom Sniffer restart alert with key fields
highlighted.
2. Create a similar query that is named -Collector MySQL disk usage. However,
instead of using System Var Disk Usage, use the Mysql Disk Usage column.
3. Create and activate an alert for each query by clicking New on the Tools →
Config & Control → Alert Builder and with the appropriate values for the
aggregator.
Figure 3-40 on page 103 shows the aggregator disk space usage alert.
If the appliance is down or incapacitated, however, the alerts might not run or be
delivered. In this case, it is important to also monitor the appliance externally by
using one of the following methods:
For VA to scan a database, the appliance from which the assessment is run
connects to the database through a JDBC connection by using a database
account that has the minimum necessary privileges. This connection is referred
to as a data source.
Tests are added or updated by using a downloaded file that is available quarterly
on IBM Fix Central. This file is referred to as the Database Protection Knowledge
base Subscription (DPS).
For more information about VA and CAS usage, see the “Assess and Harden”
chapter of the Help Book Guardium.
Guardium provides scripts for each database platform that create the database
role with the minimum necessary privileges that are needed to successfully run
the scan.
The scripts are available for download from IBM Fix Central (first, check Fix
Central for an updated version of the scripts, if any) or from the IBM Passport
Advantage® portal; for example, search for
“InfoSphere_Guardium_Database_User_Role_Definitions”.
The DBA should review the script and be comfortable with the privileges granted,
which are primarily read-only against selected catalog objects.
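As a rough illustration only (an assumed sketch, not the IBM-provided gdmmonitor
script, which differs by database platform and version), a minimal Oracle-style
script of this kind creates a role, grants it read-only access to selected catalog
views, and assigns the role to the scanning account:

-- Illustrative sketch only; review the actual IBM-provided script before use
CREATE ROLE va_scan_role;                           -- hypothetical role name
GRANT CREATE SESSION TO va_scan_role;
GRANT SELECT ON sys.dba_users TO va_scan_role;      -- read-only access to selected catalog views
GRANT SELECT ON sys.dba_role_privs TO va_scan_role;
GRANT SELECT ON sys.dba_tab_privs TO va_scan_role;
CREATE USER grduser IDENTIFIED BY "<password>";     -- hypothetical scanning account
GRANT va_scan_role TO grduser;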
After the script is run by the DBA in the database instance to be scanned, the
DBA provides the user name and password with other database configuration
settings to create the Guardium data source.
In a managed environment, the data sources are created and stored on the CM
but available to the managed units. Complete the following steps:
1. Log in as admin to the GUI on the CM or stand-alone collector and browse to
Tools tab → Config & Control → Datasource Definitions.
2. In the Application Selection window, select Security Assessment and click
Next.
3. In the Datasource Finder window, click New to open the Datasource
Definition window.
The inputs vary slightly based on the database type that is selected, but most
of the information is provided by the DBA.
Figure 3-42 on page 107 shows a data source definition for an Oracle instance
that uses the custom (not open source) JDBC driver.
Tip: Use a naming standard for the data source names so that the name
identifies the environment, database platform, server, instance, and user that
the data source is for; for example, prod_ora_dbsrv1_orainst_grduser.
Note: Use the grdapi data source functions to help create many data
sources.
3.10.3 VA tests
An assessment is a selected list of database platform-specific tests that can be a
combination of the following tests:
IBM-provided (predefined) tests:
– Database platform-specific SQL-based tests
– Database-specific Common Vulnerabilities and Exposures (CVE) tests,
which are maintained by the MITRE organization
– Non-DBMS-specific Observed tests (rarely used)
User-created (custom) tests that use SQL or procedural SQL; for example,
TSQL.
The following examples show predefined tests:
Oracle – No Roles with the Admin Option: This test checks whether Oracle
privileges were granted with the ADMIN option to users with no DBA role, which
allows the grantee to make grants to other users. The ADMIN option reduces
administrative control and creates an unwarranted vulnerability.
DB2 – No Individual User Objects Privileges: This test checks for object
privileges that are granted to individual users. Privileges that are granted to
individual users are difficult to maintain and create a risk of misuse. Roles or
user groups should be created by the DB administrator and privilege grants
made to these roles or groups instead.
MSSQLSERVER – No Select Privileges On System Tables/Views in Application
Databases: This test checks for grants of the SELECT privilege on system tables
in application databases. Users with these privileges can access sensitive
information about other users’ objects or data.
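For example, the Oracle test in the first row conceptually corresponds to a
catalog query similar to the following one (an assumed illustration of what the
test checks, not the actual test implementation):

SELECT grantee, granted_role
FROM dba_role_privs
WHERE admin_option = 'YES'
  AND grantee NOT IN (SELECT grantee FROM dba_role_privs WHERE granted_role = 'DBA');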
Tip: Create a report of all the available tests by database platform by using the
Tools tab and clicking Report Building → VA Test Tracking domain in a
similar format as shown in Figure 3-5 on page 109.
To work around this limitation, clone the assessment and add a portion of the
data sources to one assessment and the remainder to the clone.
Figure 3-43 on page 111 shows the creation of an assessment and assigning a
data source (or multiple data sources).
Figure 3-44 on page 112 shows the test selection process with other key test
characteristics.
Running an assessment
Complete the following steps to immediately run an assessment and view the
results by using the predefined VA report:
1. Select the assessment in the Security Assessment Finder and click Run
Once Now to queue the assessment to run.
As shown in Figure 3-41 on page 105, the listener process schedules the
assessment to start within a minute or so if no other assessment is running.
2. After a few minutes (to allow sufficient time for the assessment to be
scheduled and to complete), click View Results to start the predefined
Security Assessment Results report.
Tip: To view the queue and run status, including how long it took the
assessment to complete, browse to the Guardium Monitor tab and click
Guardium Job Queue. However, you must return to the Security
Assessment Builder to view the results.
Figure 3-46 on page 114 shows the result details (if any) for a selected test from
the predefined assessment report.
Figure 3-47 Custom query for a VA report that is selecting specific columns for tests with a PASS status
Figure 3-48 Application Context help icon to get help on the current topic
Figure 3-49 Accessing the Help System to search or save the product help in PDF format
http://www-01.ibm.com/software/lotus/passportadvantage/pao_customer.html
Figure 4-1 lists the most common regulations that we describe in this chapter to
provide you with a framework for understanding how the Guardium functionality
can help you achieve regulatory compliance for DAM.
The input information flows from client to the database, is processed, then the
result is returned to the client, as shown in Figure 4-2.
In this example, the SELECT statement is considered the SQL query or SQL
access, as shown in the following example:
select CARDID,FIRSTNAME, LASTNAME, CARDNUMBER from creditcard where CARDID=3;
This is how the user or application interacts with the database to get the
wanted information.
The information that is returned from the database is called the “result set”, as
shown in the following example:
CARDID FIRSTNAME LASTNAME CARDNUMBER
---------- --------------- ------------------------- -------------------------
3 Joe Jones 1234567890123453
In some cases, you want to audit this information to see who accessed sensitive
information, such as credit card data. Remember that this auditing can generate
a large amount of data, and you must plan your storage requirements accordingly.
European Union Directive: Anyone who can view personal information inside the
database must be monitored. This includes any SQL select statements or
indirect access through procedures, functions, views, and so on.
The typical privileged accounts include the following examples:
– Oracle: system, sys
– DB2: db2admin, db2inst1
– SQL Server: sa, administrator
– Informix: informix
– Sybase: sa
– Other: All other database users that have select capability on objects that
contain personal information that is defined in the database
Privileged user monitoring is a key requirement for any database security policy
because these are the accounts that can be misused or hacked. When you have
access to these accounts, you want to ensure (through a verifiable audit trail) that
no activity from these database accounts violates your security policies.
Typical privileged user monitoring includes SQL queries and exceptions, but
not result sets.
Almost every enterprise application that is written these days includes a database
infrastructure. It is critical to understand what tables and
views hold the sensitive information for these applications. For example, consider
an SAP application that stores credit card information. In this configuration, the
table but0cc should be included in the group of objects to be monitored because
it contains credit card information, as shown in Figure 4-4.
When these sensitive objects are monitored, this information can be accessed by
using one of the following methods:
Through the application (such as SAP)
Direct access from the database
It is important that you monitor these database objects from both access
methods to ensure that access to sensitive information has appropriate security
controls.
The good news is that there are automated tools that can help you identify where
your sensitive information is located, as shown in Figure 4-5 on page 128.
Typical sensitive object monitoring includes SQL queries and exceptions,
but not result sets.
Figure 4-6 also shows the following different levels of maturity as it relates to
database activity monitoring:
Basic
Proficient
Optimized
Figure 4-7 shows three types of VA categories. Each of these categories helps
identify potential issues that should be resolved through your audit processes.
The categories that are shown in the figure are as follows:
DB tier tests (Oracle, SQL Server, DB2, Informix, Sybase, MySQL, Netezza, and
Teradata): permissions, roles, configurations, versions, and custom tests
OS tier tests (Windows, Solaris, AIX, HP-UX, Linux, and z/OS): configuration
files, environment variables, registry settings, and custom tests
Database user activity
Figure 4-7 Three types of vulnerability assessment categories
OS tier issues
OS issues can include permissions on files, environment variables, and
registry settings.
Figure 4-9 shows an example of someone changing permissions for a critical
file. The permissions change from rwxr-xr-x, which means that only the owner
can change the file, to rwxrwxrwx (for example, as the result of a chmod 777
command), which means that anyone can change the file.
“2.2 - Develop configuration standards for all system components. Assure that
these standards address all known security vulnerabilities and are consistent
with industry-accepted system hardening standards.”
For more information about setting the VA module, see 5.9, “Vulnerability
assessment setup” on page 191.
For more information about setting Entitlement Reports module, see 5.11,
“Entitlement reporting setup” on page 202.
“7.1 - Limit access to system components and cardholder data to only those
individuals whose job requires such access.”
The Guardium Data Level Access Control module provides the capability to
address this requirement and protects sensitive cardholder data by ending or
quarantining suspicious or malicious database activity while allowing valid
access to the same sensitive PCI data (which is available in distributed
supported platforms in version 9).
For more information about setting the Data Level Access Control module, see
5.8, “Data level access control” on page 189.
In many environments, users log in with their OS account and then switch to a
generic shell account that has the required privilege to access the database. You
can use the User Identification (UID) chain functionality to identify the privileged
users who use the generic OS accounts.
A similar issue is that generic IDs are used by application servers to access
the database. Guardium includes support for identifying users for major
enterprise applications and for custom applications.
A CAS agent can monitor files, the output from OS and SQL scripts, environment
variables, and Windows registry entries. Built-in templates are included for all
supported (distributed) platforms.
For more information about setting the CAS module, see 5.10, “Configuration
audit system setup” on page 191.
For more information about monitoring and data access level control setting, see
5.8, “Data level access control” on page 189.
Figure 5-1 shows a generic monitoring setup order that is applicable for most of
the database activity monitoring (DAM) deployments.
The volume of activity that is logged to the internal repository often depends
on the volume of database activity and on the audit level requirements.
For more information about these audit levels, see 2.2.1, “Audit level” on
page 23.
A Guardium policy defines the required audit level. It is best to start with a
restrictive policy and then open the policy gradually to include more audit
activities. For example, start by auditing session-level activities, such as login
and logout information; then add access-level rules (IPs, users, source programs,
and so on) to support privileged user audit; and then continue to command-level
rules, object level, result sets, patterns, and so on. Other applications (such as
vulnerability assessment, configuration audit system, and entitlement reports)
can be configured after the database activity monitoring is deployed (at least
the initial phase of it).
The monitoring planning session is an opportunity to get the team together and
start the project.
5.3 Grouping
Grouping is one of the basic tools in the Guardium solutions. By using this
feature, you can combine similar elements of the same type to use later in
queries, policies, reports, and so on. A list of financial servers, list of privileged
users, or list of sensitive objects are typical examples of groups that are used
throughout the Guardium solutions. The same set of groups is used by all
Guardium users and all Guardium applications. Also, in a federated environment,
all appliances that are managed by the same central manager share the set of
groups.
Each group has its own group type and can contain only members of the
same data type. For example, a group of weekdays contains weekdays, and a
group of server IPs contains data that is formed as an IP address. Data types are
predefined.
There is a special category of group types called tuples. A tuple allows multiple
attributes to be combined to form a single group member. Tuple groups include
the following examples:
Object/Command
Server IP/Server Port
Client IP/Source Program/DB User
Server IP/Instance Name/Port
Use the Group Builder tool to modify the content of the existing group or to create
a group. By using Group Builder, you can populate a group with members by
entering them manually or through other Guardium tools.
5.3.1 Wildcards
The group function supports the use of the % wildcard with group members;
for example, the member EMP% matches any value that begins with EMP.
Table 5-1 shows examples of wildcard usage.
Multiple levels of hierarchy are allowed. You can add a hierarchical group to a
hierarchical group. Figure 5-4 shows that you can add child groups to parent
groups.
To make a hierarchical group usable by queries and policies, the group must be
flattened. Flattening populates parent groups with members of child groups. If the
child group is changed later, the parent group must be flattened again to pick up
the changes. Flattening is a process that can be run on-demand once or run
periodically by schedule.
You specify the public or private group type through the application type when
you are creating the group, as shown in Figure 5-6. For example, you might want
to have private groups that are created and used only in policies. If you select
Policy Builder in the drop-down menu for application type when you are creating
a group, this new group is visible only in Policy Builder and not in other
applications.
5.4 Policy
A policy is a set of rules and actions that are applied in real time to the traffic as it
is captured by Guardium collector. The policy defines what traffic is ignored or
logged, what activities require more granular logging, and what activities should
trigger alert or block access to the database.
The traffic is evaluated against policy rules sequentially until a rule fires (meets
its criteria), and multiple actions can be taken when a rule fires. More rules of the
same type (access and exception) are not evaluated unless the fired rule has the
Continue condition set. All extrusion rules are evaluated, whether or not one fires.
If the criteria that are set in the rules are not met for any of the rules, the access
traffic (SQLs) is logged for a non-selective policy and not logged for a selective
policy.
Figure 5-7 shows the configuration panel in which the selective or the
non-selective policy type is defined.
The following types of policy rules are available (as shown in Figure 5-8 on
page 158):
Access rules are applied to the database traffic that comes from the client to
the server (accessing the database).
Exception rules are applied to database exceptions, such as failed logins and
SQL errors.
Extrusion rules are applied to the data (result sets) that is returned from the
database server to the client.
When you are setting up the policy rules, start from a policy that ignores all traffic
while you are working on the group population and policy rules definition. Such a
policy logs all database login and logout information and allows the use of
session-level reports to evaluate connection profiles. The policy has only one
access rule with no filters and features an action of “Ignore S-TAP session”.
You should evaluate what sessions are considered trusted and their traffic can be
ignored and not logged in to the collector. Use “Ignore S-TAP session” rule for
trusted traffic to significantly improve performance of the Guardium solution and
reduce network usage. The collector instructs S-TAP to stop sending traffic for
specific sessions if the rule with such action fires.
The following example shows setting up a policy rule that ignores traffic for
connections profiles that are identified as trusted applications. This rule is
applicable for non-selective and selective policies. In the Access Rule Definition,
select Trusted Connections for database access, as shown in Figure 5-9 on
page 159.
For action, use the Ignore S-TAP session option (as shown in Figure 5-10)
instead of the Ignore Session or Skip logging options. The Ignore Session option
causes the sniffer to ignore traffic on the collector but the S-TAP still sends the
traffic. The Skip logging option causes the sniffer to ignore traffic at SQL level.
The Ignore S-TAP session or Ignore session options often are used when the
filter criteria is at the session level. The Skip logging option is used when the filter
criteria is at the SQL level.
If evaluating result sets from the database or logging exceptions (except failed
logins) is not required, add the Ignore responses per session action to the policy
rule, as shown in Figure 5-11. If this action is applicable for a subset of sessions,
S-TAP does not send server-to-client traffic to the collector, which often is the
majority of the overall traffic.
By default, all of the SQLs are logged in the Access Period entity with a
60-minute logging granularity setting for a non-selective audit policy. Although
you can configure the logging granularity parameter, 60 minutes is the suggested
logging granularity. The logging granularity parameter is available in the
Inspection Engine Configuration window, as shown in Figure 5-12.
To get the same logging (into the Access Period entity with a granularity of 60
minutes) with a selective policy, use the “Audit only” action, as shown in
Figure 5-13.
Each record in the Access Period entity represents the number of times a
specific construct (SQL) ran within a specific session during the 60-minute time
frame. The parameters of the SQL statement are replaced with “?” in the record.
For example, if session A is started at 14:30 and the select * from employee
where employee_id=10 SQL statement is run at 14:30, one record is written into
the Access Period entity that represents the select * from employee where
employee_id = ? construct for session A. This record is for the time frame
between 14:00:00 and 14:59:59 and the counter is 1.
If the SQL statement is run another 500 times before 16:00 in session A, only the
counter of the existing entry is updated to 501 (regardless of what parameter
values were used in the query).
With such logging (default and audit only action), the logged data is presented in
the Access Tracking domain and the Access Period entity.
Figure 5-14 shows the Access Tracking panel of the Report Builder.
Alternatively, you can log the individual SQL statements into the Policy Violation
domain by using the Log only action and present the record through Policy
Violation Tracking domain and the Policy Violation Rule entity, as shown in
Figure 5-17 and Figure 5-18 on page 163.
The Individual SQL statement is logged into the Policy Violation Tracking domain
and the Alert Tracking domain, as shown in Figure 5-20.
You can generate the report about the alerts from the Alert Tracking domain and
Message Text entity, as shown in Figure 5-22.
Figure 5-24 on page 166 shows an example of an extrusion rule for the results
set that is returned from the database to the privileged users. The “Log Full
Details” action is used to record detail information if the results set has data that
matches the credit card pattern and passes the Luhn algorithm validation. The
Luhn algorithm validation is activated by the name template of the Description
field in the policy rule. You also can use the “Log Extrusion Counter” action or the
“Log masked Extrusion Counter” action if the result sets do not have to be logged
or must be masked.
If logging all or part of the exceptions is not required, use the “Skip Logging”
action for the policy rule. Figure 5-26 on page 168 shows an exception rule to
skip logging all Oracle SQL errors.
The rest of the rules in the policy (or in another policy, if multiple policies are
installed) instruct the Guardium collector what actions should be taken on the
traffic that was sent by S-TAP. So, the actions that are used for a distributed
environment (such as Allow, Audit only, Log Full Details, or Log Masked Details)
can and should be used in the z/OS environment to support business
requirements. Actions such as Ignore S-TAP session or Ignore Responses per
session are irrelevant to the z/OS environment because the collector does not
send the verdict back to S-TAP in a z/OS environment.
Unlike the distributed S-TAP, the z/OS S-TAP sends SQL-related information
that is already parsed (such as Verb and Object) to the collector. Using the
parsed information directly saves the collector (sniffer) the parsing step. If
filtering or reporting at the Field level is not required, you can use the
“QUICK PARSE NATIVE” action in the policy (see Figure 5-28) to instruct the
collector to use the parsed information that is provided by S-TAP directly. This
use can provide a significant performance improvement (mostly in the analyzer
component of the sniffer).
Internally, the policy rules, groups, and their members are copied into a set of
installed policy tables and this set of tables is used by the sniffer process. All
subsequent changes to the rules or groups do not affect the installed policy
tables until the policy is reinstalled. Because the policy rules use groups and the
group members are updated frequently in most deployments, it is best to
schedule the policy installation daily at the end of the day after all manual
updates to the group members are complete and all processes to update groups
are run.
Figure 5-29 on page 171 shows the panel that is used to schedule the daily
policy installation.
There is an option to install multiple policies on the collector. When this option is
used, the rules are processed sequentially in a similar manner as though only
one policy is installed. Rules of the first policy are processed first then the rules
of second policy, and so on. If one of the rules fires and the Continue condition is
not set on that rule, the evaluation of the rules stops and no other rules are
evaluated, regardless of whether these rules are in the same policy or in subsequent policies.
5.5 Reports
InfoSphere Guardium provides a significant number of pre-built reports that are
available for customers and are ready to use. If you have specific business needs
that call for specific reports, Guardium also provides a report generating tool that
you can use to customize the existing reports or to build new reports.
In this section, we describe the Guardium reporting mechanism and how the
reported data is stored internally on the appliance.
Frequently, the term report is used ambiguously to refer to both queries and
report components. Creating an efficient report really means creating an
efficient query. In the following sections, we focus on query building.
Each domain comprises a number of entities that contain specific data of this
domain. For example, the Group domain is comprised of the Group entity, the
Group Type entity, and the Group Members entity; the Alert domain is comprised
of the Alerts entity, the Message Header entity, and the Message Text entity; and
the Policy Violation domain comprises the entities, such as Session, SQL, Policy
Rule, and more than 10 other entities that contain data that is important for policy
violations tracking.
Tip: You might think of entities as tables and of domains as groups of
logically related tables.
Figure 5-33 on page 176 shows a list of available domains on the left side. The
entity list in the Access domain is shown in the middle of the figure.
3. Select attributes: Select the attributes (fields) from the list of available
attributes. You can drag the attributes into the upper portion of the pane to
add the attributes to the list of fields that the query retrieves.
The following optional flags at the top of the Query Fields pane (see
Figure 5-35 on page 178) can be applied to the query:
– Add Count: This option automatically adds another counter column to the
query. The counter shows how many times each unique combination of
the fields in the row occurred for an observed period.
– Add Distinct: This option displays only the unique combinations of field
values in the row. Each unique combination is displayed once only. This
produces a shorter report and might boost performance.
– Sort By Count: Use this option if you want the results sorted by the counter
values.
The Query builder supports a list of operators for query conditions. When you are
creating query conditions, you have a choice of selecting a fixed value for the
attribute in the condition or a runtime parameter. The use of a runtime parameter
gives an opportunity to run a report for different attribute values without changing
a query.
The following general considerations improve query performance when you are
building a query with conditions:
Use the In Group operator instead of several conditions that are connected by
the OR operator.
Use = instead of Like where possible.
Use Like instead of Like Group.
Use In Group instead of Like Group if possible.
Use Not In... Group with caution.
For more information about the available operators, see the Guardium online
Help.
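For example, the first recommendation favors a single In Group condition over a
chain of OR conditions. Conceptually (this is an assumed illustration, not the SQL
that Guardium generates internally), the difference looks as follows:

-- Several conditions that are connected by OR:
WHERE db_user = 'SYS' OR db_user = 'SYSTEM' OR db_user = 'DBSNMP'
-- A single In Group condition against a privileged users group:
WHERE db_user IN (<members of the privileged users group>)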
To use this function, you must define users and roles in Guardium.
When you are defining roles in your organization, consider the answers to the
following questions:
Who should receive reports and what is the job function of each receiver (for
example, DBA, manager, and internal audit)?
Which users have the same job function and can provide an equivalent review
and sign off?
You can define roles by using the Access Management UI or the Guardium
predefined roles.
To create users, use the Access Management UI and assign an appropriate role
to the user.
To develop the workflow, define the audit process that includes the following
information:
Who receives the reports.
Which reports are delivered and how often.
The order in which the users or groups receive the reports.
Whether review or sign-off is required.
Whether the delivery should stop at any user or role until they complete the
required action (review and sign-off).
The audit workflow is maintained outside Guardium, and the audit process
results are exported through CSV to an external system, as shown in
Figure 5-40 on page 182. The recommendation is to not assign any receivers
to the audit process and to ensure that the CSV export process is scheduled.
Such a configuration ensures prompt purging of audit results from the appliance
(based on the retention definition of each audit process).
Each alert type has its advantages and drawbacks that are important to
understand to use these alerts efficiently.
Real-time alerts
A policy rule is associated with various types of actions, and one group of
actions is dedicated to generating alerts that are based on the rule conditions.
All alert options work the same way. The main difference among the actions is
the notification frequency; for instance, generate an alert on every occurrence
or only once a day.
Real-time alerts often use fewer system resources; therefore, they are more
efficient. However, real-time alerts can fire only on events that are related to
the observed traffic. If you must create an alert on multiple failed logins to the
Guardium appliance or on changes that are made in the Guardium configuration,
you must use threshold alerts because a policy does not see events that are
related to the Guardium appliance.
Therefore, the rule is to use real-time alerts on all of the events that are related to
the audit data monitoring and to use threshold alerts for everything else.
You also can create a query of your own to use in an alert. To be included in the
alert builder drop-down menu, a query must include a time stamp field and a
numeric field. If you cannot select a numeric field when you are building your
query with the Query Builder, you can use an aggregation function (MIN, MAX,
COUNT, and so on) with one of the fields, or you can use the report counter.
Figure 5-44 on page 187 shows a query example that satisfies both of the time
stamp and numeric fields requirements. This query features a time stamp field
and counter; therefore, it can be used to generate a threshold alert.
The following time-related fields are in Alert Builder, as shown in Figure 5-45 on
page 188:
Run frequency field: Indicates how frequently the query that is used in the
alert should run.
Accumulation interval field: Specifies a period (in minutes) for the query to
analyze. The time is counted backwards from now. For instance, 1440 in this
interval indicates that the query should review data for the last 24 hours from the
moment the query runs.
Notification frequency field: Indicates how often the notification should be sent
to the receiver.
5.7.2 Alerter
When an alert message is generated by the threshold alert or by the real-time
alert, it is added to the message queue. The alerter then delivers the message to
a recipient.
You can use the alerter configuration window in the Administration Console (see
Figure 5-46 on page 189) to configure and activate the alerter process. The
window includes the following information:
Polling Interval defines how frequently the Alerter should check the queue for
new waiting messages. Use the default value of 60 seconds.
Define an access rule with the S-GATE terminate action to specify under what
condition the connection must be ended.
For more information about configuring VA, see 3.10, “Vulnerability assessment”
on page 104.
CAS also complements the Guardium VA module; that is, some VA tests use
the data that is collected by CAS.
CAS highlights
CAS includes the following highlights:
Uses a lightweight agent on the database server to periodically run
predefined and custom tests. There is one agent per database server.
Requires Java 1.4.2_13 or higher on the database server.
Polls randomly (that is, not in real time) within a user-defined polling period for
changes to the tracked objects.
Can track changes to files or variables that are based on content (what
changed) or modification time stamp (when).
Can use MD5sum to detect content change, and if specified, report the new
and updated values.
5.10.1 Prerequisites
The following prerequisites must be satisfied before CAS is installed:
Java 1.4.2_13 or higher is installed on the database servers (CAS host)
CAS-module license is applied to the Guardium appliance
The S-TAP installer or GIM bundle (Windows) or the CAS installer or GIM
bundle (UNIX and Linux) is downloaded from IBM Fix Central or Passport
Advantage
Firewall ports are open between the database server and collectors; 16017
for all platforms (if TLS is used to encrypt the communication, open port
16019 instead)
Note: For more information, see the Configuration Auditing System and CAS
topics in the Assess and Harden chapter of the online Help Book Guardium.
Windows platform
CAS is automatically installed and enabled during the Windows S-TAP
installation by using the GIM. Therefore, if the database server already has an
S-TAP installed, it already has CAS. CAS is installed in a subdirectory of the
S-TAP installation directory.
However, the CAS agent runs as a separate service, Change Audit System.
Similar to the S-TAP, the CAS agent uses the same guard_tap.ini file for some of
its configuration settings. As a result, CAS reports to the same collector
appliance as the S-TAP.
Note: CAS configuration parameters are not available from the GIM interface;
however, it is not typical to change the default settings.
Tip: To find the Java installation directory, use the find / -name java_vm
-print command.
After the CAS agent installation, verify the CAS agent status:
The agent is started. Verify it by using the ps -ef | grep cas command.
The agent is registered with the collector, as shown in Figure 5-48 on
page 197.
If the CAS agent is not started, review the log files for more information.
Tip: The predefined templates and tests are a good source of usage
information. Review them to get ideas about what is possible and the syntax.
Each template can contain one or more of the following types of tests:
OS script: Used to run a script, executable file, or OS-specific commands.
SQL query: Used to run SQL commands after the agent connects to the
database (see the sketch after this list). Although available, use the VA
query-based tests instead.
Environment or registry variable: Used to check changes to an environment
or registry variable. You can also use an OS script test to check these
variables.
Note: CAS is a 32-bit Java application, so it does not have access to the
64-bit Windows registry keys. On a 64-bit server, it accesses only the 32-bit
application keys that are stored in the WOW6432Node area; for example,
HKLM\Software\WOW6432Node. This is important to note when you are
creating your own custom test. The predefined tests already take this issue
into account.
File: Used to monitor file changes and file permissions. For predefined VA,
CAS-based tests allow the VA module to compare the reported file ownership
and permissions with what is expected.
File pattern: Used to specify file tests against a collection of files that match a
pattern; for example, monitor changes to any init.*ora files, where the
pattern is defined using regex.
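For example, an SQL query test for an Oracle instance might track changes to an
auditing-related initialization parameter (an assumed illustration only; the
predefined templates define the actual tests):

-- Illustrative CAS SQL query test: report the current value of the audit_trail parameter
SELECT name, value FROM v$parameter WHERE name = 'audit_trail';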
Figure 5-48 shows the CAS status window on the collector appliance.
A CAS instance specifies a template set and any parameters that are required to
run the tests; for example, OS user or directory path. It is typical to deploy a mix
of system and database-specific templates to each host; for example, one CAS
instance to monitor OS items (added by default) and another CAS instance to
monitor databases.
You can use the CAS section of the data source form to specify the database
instance account and the installation directory to use for running the OS tests for
that specific database instance. For more information, see the Help Book
Guardium.
To view these reports, click Tap Monitor → CAS → Changes on the collector.
This pane lists the following reports:
CAS Change Details: Lists the changes that are observed for each monitored
item (test).
Note: The results (baseline) for all tests are reported from the first run.
However, on subsequent runs (determined by the period setting), only
changes from the baseline are reported.
CAS Saved Data: Lists the actual detected changed data values for those
monitored items (tests) that have the Keep Data option selected.
In addition to the predefined reports, custom queries and reports can be created
by using the CAS Changes Tracking domain and query builder.
For more information about the entitlement reports, see the “Database
Entitlement Reports” topic of the Appendices chapter of the online Help Book
Guardium.
Note: If you are using, or plan to use, the VA module, you can use the VA
role creation scripts or the same database accounts to minimize the setup
effort.
Note: The gdent prefix is used for the scripts that are used for
entitlement reporting; the gdmmonitor prefix is used for VA scripts. The VA
scripts include the privileges that are required for entitlement reporting.
Note: Click the help icon ? to get a description of each custom table, or see
the “Database Entitlement Reports” topic of the Appendices chapter of the
online Help Book Guardium.
4. Select the entity and click Upload Data to configure the following settings:
– Overwrite or purge options.
– Assign data sources.
– Assign a schedule to periodically upload the entitlement data.
– Fetch records from source database catalog into Guardium.
Figure 5-53 on page 205 shows the selection of an entitlement reporting
entity for configuration.
5. (Optional) After these items are configured, click Run Once Now to load data
from the data sources so you can review the data.
Figure 5-54 on page 206 shows the configuration options for uploading
entitlement data.
Figure 5-55 shows the options for setting and scheduling the purge of a
custom table invoked from Purge on the Custom Table Builder window.
Figure 5-55 Options for setting and scheduling the purge of a custom table
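As background, the upload step fetches entitlement information directly from the
source database catalog. For an Oracle source, for example, object-privilege
entitlements come from catalog views similar to the following query (an
illustration of the kind of data that is fetched, not the exact query that Guardium
runs):

-- Object privileges that are granted to users and roles (Oracle catalog view)
SELECT grantee, owner, table_name, privilege, grantable FROM dba_tab_privs;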
As the Guardium Administrator, create a menu tab on your portal to stage the
reports or use the My New Reports tab, if it is available (that is, your account has
the admin and user roles). For more information, see 5.14, “Adding a menu tab to
your portal” on page 225.
Custom reports
Although you can create a custom query and report from scratch, it is easier to
clone the predefined query, make changes (for example, add columns or query
conditions) and generate a new report.
For more information, see the “Database Auto-discovery” topic in the Discover
chapter of the online Help Book Guardium.
Note: The database auto-discovery is not the same as the instance discovery.
The instance discovery uses a lightweight Java agent that is deployed with the
GIM client on the database servers to check for new database instances.
5. To check on the progress of a scan and probe, select the process and click
Progress/Summary, as shown in Figure 5-60 on page 212.
In addition to the predefined report, a custom query and report can be created by
using the Auto-discovery Tracking domain and query builder. For example, create
a custom query with conditions to exclude known databases from the results by
using Guardium groups.
Figure 5-62 shows the domain to use for creating an auto-discovery custom
query.
Note: For more information, see the Classification topics in the Discover
chapter of the online Help Book Guardium.
5.13.2 Prerequisites
The following prerequisites should be met to use the classifier:
Database and Sensitive Data Finder module license is applied to the
Guardium appliance.
The DBA creates a database account for the databases to be scanned. This
account must have SELECT access on the database objects that you plan to
scan; for example, tables, views, and catalogs.
Note: Guardium does not provide an account creation script. This account
should have a strong password and, if possible, should be disabled after it
is used.
A list of criteria or the type of data to search for and the patterns to use is
prepared. It is preferable to be familiar with regular expressions (regex); for
example, a pattern such as [0-9]{4}-?[0-9]{4}-?[0-9]{4}-?[0-9]{4} can find
credit card numbers with or without embedded dashes.
This policy can be independent of a database type and often targets a certain
data domain; for example, sensitive data, privileged commands, or object
permissions.
2. Click Edit Rule and then Add Rule to add a search rule, as shown in
Figure 5-66.
6. Add the actions to take if the rule is satisfied; that is, data is found that
matches the search pattern. Click New Action, which opens the Action
window.
This example uses the Add to Group of Objects action which includes the
following tasks:
– Add the names of the tables to the group that is specified in the Object
Group field.
– The Actual Member Content field specifies how the table name should be
added. In this example, it is prefixed with a wildcard; for example,
%cc_card, where cc_card is the table.
7. You can add more rules to this classification if necessary; for example, rules
to look for credit card numbers that might include the embedded dashes.
For more information about the Comprehensive Search and Sample size
settings, see the “Classification Process” topic in the Discover chapter of the
online Help Book Guardium.
Figure 5-70 on page 221 shows the classification process with an Oracle data
source.
On-demand running
To run a classification process now, complete the following steps:
1. Select Tools → Config & Control → Classification Process Builder. In the
Classification Process Finder, select the process and click Run Once Now,
as shown in Figure 5-71.
Scheduled running
To schedule the running of a classification process, use the Audit Process
Builder and add a Classification Process task. For more information, see 5.6,
“Compliance workflow” on page 179.
Predefined reports
After the job completes as per the Guardium Job Queue report, click View
Results on the Classification Process Finder window to view the output, as
shown in Figure 5-71 on page 221.
The predefined classification report shows the process log and details of the
search, as shown in Figure 5-73 on page 223.
When the results are viewed, in addition to the action that is defined in the policy,
you have the option of starting an ad hoc action; for example, send an Alert.
In addition to the process report, verify the results of the actions if any were
specified. In this example, objects (tables) that are found to contain credit card
numbers were to be added to the Sensitive Objects group.
Click Tools → Config & Control → Group Builder, then select the group and
verify that the tables reported were added, as shown in Figure 5-74 on page 224.
Custom reports
In addition to the predefined report, custom queries and reports can be created
by using the Classifier Results Tracking domain and query builder.
A menu tab is a tab or page that is divided into two vertical sections: a menu
section down the left side and a display section on the right. Examples of a menu
tab are the Guardium Monitor tab or the My New Reports tab. A menu tab is a
convenient layout for a list of reports.
For more information, see the “Customize the Portal” topic in the Common Tools
chapter of the online Help Book Guardium.
Figure 5-75 on page 226 shows the steps for adding a menu tab to a portal.
Some (that is, not all) roles have a unique panel layout or portal; for example, the
portal for the accessmgr role is different from that for the admin role, as shown in
Figure 6-1 on page 229.
Note: On the first login, the user’s portal is generated based on their assigned
roles and associated with the account. Users who are assigned multiple roles
have a portal that combines the portals for each role (if there is a portal that is
associated with each role); for example, user and admin.
The following key predefined users (account) are available on each Guardium
appliance:
admin: Used for system administration by using the Guardium GUI
accessmgr: Used to manage user accounts by using the Guardium GUI
cli: Used for system administration (command-line interface only)
Note: The admin, accessmgr, and cli can refer to a user or a role, depending
on the context.
For more information, see the Access Management chapter of the Help Book
Guardium.
Figure 6-1 shows the portals for the three key Guardium roles.
Note: External and internal authentications are mutually exclusive, except for
the three default accounts: admin, accessmgr, and cli, which are locally
authenticated. This allows access to the appliance if the LDAP or RADIUS
servers are unreachable.
Configuring authentication
The authentication configuration is performed by the Guardium administrator by
using the Administration Console tab and clicking Configuration → Portal →
Authentication Configuration.
For more information, see the “Portal Configuration” topic of the Guardium
Administration chapter of the Help Book Guardium.
A typical use case allows users to view only data that is collected from databases
for which they are responsible.
Tip: It is important to enlist the help of your LDAP administrator to provide the
inputs and syntax for the import process.
For more information, see the “Import Users from LDAP” topic in the Access
Management chapter of the Help Book Guardium.
Accounts do not have to be imported from LDAP to use LDAP authentication. For
example, manually create the user account in Guardium (for authorization),
matching their LDAP account but with a bogus password. When that user then
attempts to log in, they provide their LDAP password (and not the bogus
password) and are authenticated against LDAP.
Figure 6-6 on page 235 shows the default and sample roles by using the User
Role Browser.
For example, there are four accounts, joe, bob, mary, and jane, which are all
assigned the user role.
However, joe and mary are DBA managers, whereas bob and jane are DBAs.
The accounts bob and jane can be added to the dba role so that they can share
reports with each other or with anyone else who has the dba role.
The accounts joe and mary are added to the dba_mgmt role so that they can
receive reports that are distributed by the Guardium Audit process to the
dba_mgmt role.
All four users have two roles: user and dba, or user and dba_mgmt.
To add a custom role (for example dba_mgmt) to an existing account, follow the
steps that are described in 6.2.3, “Modifying a user’s role” on page 233, except
that the user now has two roles that are selected: user and dba_mgmt.
Configuring and interpreting these built-in tools means little if you do not know
what corrective steps can be taken when issues are identified. Therefore, this
section devotes considerable space to describing the various strategies that are
available to get things running smoothly.
Although there are more fields that can be added to an S-TAP Statistics report,
the sample that is shown in Figure 7-1 includes the most commonly used
parameters. The following fields are featured:
Timestamp: Indicates when the record was created.
Software Tap Host: The host system where data is collected.
Total Bytes Processed so Far: The number of bytes that are logged by K-TAP.
Total Buffer Init: The number of times the K-TAP buffer was reinitialized.
Reinitializing the K-TAP buffer might be required if the contents become
corrupted.
System CPU Percent: Total CPU usage percentage on the host system for all
processes.
S-TAP CPU Percent: Total S-TAP CPU utilization on the host system. This
value represents the overall S-TAP CPU utilization. For example, if the system
has 10 cores, and S-TAP is using 30% of one, the overall S-TAP CPU usage
is about 3%. The maximum CPU S-TAP can ever use on a server is 100% of
one core because the guard_stap process is single threaded.
Buffer Recycled: The number of times the S-TAP buffer overflowed.
The following fields should be used when you are troubleshooting specific issues
and only after the values are reset from the S-TAP side:
Total Bytes Dropped so Far: Total number of bytes that are dropped by K-TAP.
This value should be taken as a delta between two given points. If this number
consistently grows, there might be insufficient resources on the host system
for S-TAP to read data from the K-TAP buffer quickly enough.
Total Bytes Ignored: Total number of bytes that are ignored by K-TAP as a
result of any IGNORE STAP SESSION rules that might be implemented.
Total Response Bytes Ignored: Total number of bytes that are ignored by
K-TAP as a result of any IGNORE RESPONSES PER SESSION rules that
might be implemented.
Real-time values
The following real-time value fields are available:
System CPU Percent
The System CPU Percent field shows the CPU usage by all processes on the
host database server. It is useful for showing how busy the host server is
overall.
S-TAP CPU Percent
The S-TAP CPU Percent field shows the overall CPU usage of S-TAP for the
entire system. It is calculated by using the pcpu option from the ps command.
S-TAP CPU usage might indicate an issue in the following cases:
– Usage is consistently at or near 100%. Such a condition might indicate
that the guard_stap process is stuck in a loop and is using all of the
resources on one core. Run the guard_diag command when you
encounter such cases.
Cumulative values
The following cumulative value fields are available:
Total Bytes Processed so Far
The Total Bytes Processed so Far value indicates the total number of bytes
that were processed by K-TAP since the last reset of these values. This
means that to come to any meaningful conclusions with this data, you must
reset the values first. The values can be reset only directly from the database
server by running the following command:
<S-TAP Shell Install Directory>/guard_stap/ktap/current/guard_ktap_stat reset
or
<S-TAP GIM Install Directory>/modules/KTAP/current/guard_ktap_stat reset
Note: The value in Total Bytes Processed so Far rolls over to 0 after
reaching 4294967296 bytes (2^32). Therefore, if it was not reset for some time,
the value that is displayed might be a value that rolled over several
times.
On its own, there is little that can be learned from looking at only the total
bytes processed value. Its delta over time can be used to estimate the volume
of traffic that is processed by S-TAP if K-TAP is the only driver that is used to
intercept traffic. For this purpose, it is not necessary to first reset the counter.
Total Bytes Processed is most helpful when it is used as baseline for some of
the other statistics that are described next.
unix_domain_socket_marker
The unix_domain_socket_marker parameter in the Inspection Engines
configuration is used to configure the domain sockets (interprocess
communication socket) for Oracle, MySQL, and PostgreSQL databases. In
most cases, the default setting of NULL properly collects all IPC traffic.
However, in cases such as Oracle RAC environments, leaving the
unix_domain_socket_marker setting at NULL forces S-TAP to monitor
node-to-node traffic and imposes an unnecessary performance penalty. In
such environments, the unix_domain_socket_marker should be set to the KEY
value of the IPC address that is defined in the tnsnames.ora file.
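For reference, an IPC entry in tnsnames.ora has the following general form
(illustrative alias and KEY values; use the KEY that is defined in your own
tnsnames.ora):

LISTENER_IPC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521)))

With this example, unix_domain_socket_marker would be set to EXTPROC1521.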
intercept_types parameter
The intercept_types parameter in the Inspection Engines configuration is used
to define the protocols that should be intercepted by S-TAP. The default setting is
NULL, which captures all supported protocols. In some cases, this parameter
is useful for determining whether performance issues are caused by the
volume of specific traffic that uses a certain protocol. Figure 7-4 on page 250
shows the various configuration options for the intercept_types parameter.
participate_in_load_balancing
The participate_in_load_balancing parameter allows you to balance the
traffic that is intercepted by S-TAP across two or more appliances. Although
this is less important for S-TAP performance, it can help resolve performance
problems on the collectors by splitting the load.
Native S-TAP load balancing splits the traffic by database session, which
sends each new session to a different appliance in the pool. Guardium also
supports other load balancing methods that employ Cisco Global Site
Selector (GSS) or an F5 Load Balancer. These methods are used to help
distribute many S-TAPs across available collector resources, which is different
from the session-based load balancing that is native to S-TAP.
In small environments of only a few collectors, you can monitor the Inspection
Core performance by using the Buffer Usage Monitor report, as shown in
Figure 7-5. In this chapter, we describe the most important parameters in this
report.
For medium to large Guardium environments, the best way to monitor Inspection
Core performance across the environment is by using the Operational
Dashboard functionality that was introduced in V9. The Operational Dashboard is
accessible only from the Central Manager, as shown in Figure 7-6 on page 252.
The Operational Dashboard provides simple Low, Medium, and High utilization
levels for each of the collectors in the environment. It calculates the utilization
level that is based on several parameters in the Buffer Usage Monitor report data
that is downloaded from the collectors. The Operational Dashboard is meant to
provide a quick indication of potential performance issues across the estate,
while the Buffer Usage Monitor report is still used to identify specific issues.
The Buffer Usage Monitor report consists of 47 or more columns, most of which
you might never use in day-to-day monitoring of the appliance. One of the first
things you should do is to create a simpler version of this report that contains the
following columns:
Timestamp: Shows when the data was collected.
% CPU Sniffer: Shows a normalized representation of sniffer CPU usage. For
example, 50% sniffer usage on an 8-core appliance means that the sniffer is
using 400% CPU (4 cores).
% CPU Mysql: Shows a normalized representation of MySQL CPU usage.
% Memory Mysql: Shows the percentage of total system memory that is used
by the MySQL database.
Free Buffer Space: The percentage of free sniffer engine buffer space. The
sniffer buffer engine is only used in implementations that use SPAN ports,
Network TAPs, or S-TAP pcap. If the native S-TAP drivers are used, this value
should always remain at 100%.
To create the simplified Buffer Usage Monitor report, use the Sniffer Buffer Usage
Tracking domain. An example of the simplified report is shown in Figure 7-7.
During this spike in traffic, the Analyzer must start buffering large amounts of
data, as shown by the increasing values in the Analyzer Queue Length. At
approximately 8:48 a.m., the Analyzer/Parser buffers are full, and the sniffer
begins to drop data, as shown in the Analyzer Lost Packets column. This kind of
performance issue does not appear to be an isolated incident on this machine, as
indicated by the large number of lost packets that existed before this event.
After the sniffer allocates memory, it does not release it even if the logger queue
recovers. Therefore, it is possible to have a high sniffer memory usage even if the
logger queues are not holding any data.
Sniffer restarts because of logger queue overflow are also shown in the collector’s
syslog file (/var/log/messages). These messages come in two varieties. The first
type occurs when the sniffer memory grows so quickly that a memory allocation
fails and the sniffer restarts itself.
The second type of restart because of logger queue overflow happens when the
Guardium “nanny” process, which monitors sniffer memory usage, detects that
the sniffer is dangerously close to the 2.5 GB limit and restarts it, as shown in
Figure 7-11.
Usually, both types of restarts are caused by the same issues, the only difference
being the speed at which the sniffer memory grows. Memory allocation problems
happen when the sniffer memory grows quickly before the nanny process can
react.
The overall utilization level for any collector is equivalent to the highest level of
the parameters measured. For example, if the number of Sniffer restarts tests as
High while all other parameters test Low, the overall unit utilization is marked as
High for the report.
By using the Upload Data menu, you can schedule the upload of Buffer
Usage Monitor from the managed units and run the process immediately.
When the operation completes, it displays the number of rows that are
uploaded from each managed unit, as shown in Figure 7-13 on page 259.
3. The Low, Medium, and High utilization levels are established against a
predefined but configurable set of thresholds, as shown in Figure 7-17 on
page 262. The predefined thresholds should be adequate in most cases.
Nevertheless, you might fine-tune the predefined parameters by
double-clicking a particular threshold and starting the
update_utilization_thresholds menu. Threshold 1 defines the level at
which a particular parameter goes from Low to Medium. Threshold 2 defines
the level at which a particular parameter is at High utilization.
4. To clear the Unit Utilization data, open the Unit Utilization Distribution window
as shown in Figure 7-18. Double-click the start icon in the report, and select
reset_unit_utilization_data from the menu.
To identify the reason for high utilization on the collector, double-click the host
and examine the Unit Utilization Report, as shown in Figure 7-21.
Also shown in Figure 7-22 is a second row in which the collector reached a High
level of utilization. Once again, the reason is a high number of requests during
that hour reaching over 6,000,000 and exceeding Threshold 2. Although the
collector utilization is low during all other times, the Overall Utilization Level is
always equal to the highest level in the Unit Utilization report.
Though this report shows an overall High utilization level, it is not necessarily
indicative of an issue. Because the machine reaches this level for only one
hour in the test period, this is most likely an isolated event that can be ignored. If
such events happen every day and generate false alarms, it might make sense to
modify the default thresholds from the Utilization Thresholds window that is
shown in Figure 7-17 on page 262.
The term report is frequently used ambiguously to describe reports and queries.
Queries define what data should be retrieved and how; reports define how the
query results should be displayed. When we describe the performance of
reports, we are referring to the performance of the queries.
Each query is associated with a particular predefined set of data that is called a
data domain: for example, the Access domain for captured traffic, the Exception
domain for errors that are captured from the database server, or the Guardium
Activity domain for activities that are performed by Guardium users. There are
approximately 40 different domains on the Guardium appliance today.
To better understand how the query builder works, you need a more detailed
understanding of how captured data is stored in the underlying tables
(entities).
Guardium monitors and captures numerous details about database user activity.
All of this information can be put into the following major categories:
Who: Describes a connection to a database, who made a connection, and
when the connection was made.
What: Contains the SQL statements that were run on the database.
In addition to login information, Guardium captures the SQL statements that are
issued by the user or an application. The SQL statements are recorded in the
SQL entity. To create queries with conditions on specific groups of tables or sets
of commands, Guardium parses the captured SQL into commands, objects, and
fields, and places this information in three other entities: Commands, Objects,
and Fields.
When you create a query, you enter a query name first and then select a main
entity. The main entity tells the query builder the focal point for the new report
and how to construct the query. Ultimately, it can also affect query performance.
Figure 7-24 and Figure 7-25 on page 269 show two queries that have identical
fields that are selected. The only difference between these two queries is the
main entity that is selected, one is Session (the better choice) and the other is
Command.
The following query is generated by the query builder with the Session main
entity:
select ... from GDM_ACCESS, GDM_SESSION where....
The following query is generated by the query builder with the Command main
entity:
select ... from GDM_ACCESS, GDM_SESSION, GDM_CONSTRUCT_INSTANCE,
GDM_SENTENCE where...
Both queries have the same columns. However, the first query joins two tables to
produce the results and the second query joins four tables. The second query
takes longer to complete. Even more important, the second report most likely
contains more records, and some of the rows appear multiple times.
When Command is selected as the main entity, the report generator defines the
report with the focus on “command”. There are most likely many commands in a
session, and each command appears in a separate row of the report, even if the
command itself is not displayed in the report.
Main entities are organized hierarchically from high-level details to more granular
ones. Thus, the main entity defines the level of detail in the report. Selecting a
main entity at too high a level in the list might limit your ability to select fields to
report on. Consider a single SQL statement with multiple fields. If you select SQL
as the main entity, your level of detail is an SQL statement and each line in the
report is dedicated to one SQL statement. This means that you cannot display
individual field values on the same line, because a single report line cannot
represent the multiple fields of that statement.
When you are designing a new query, consider the relationships between entities
to avoid data redundancy in reports. Figure 7-27 shows another example with
Field as the main entity and a few columns selected.
The report shows the same line repeatedly because selecting Field as the main
entity instructs the report generator to dedicate one line in the report to each field.
If the Field attribute is added to the report, we can see that after we select Field
as the main entity, each line of the report is associated with one field and the rest
of the information is repeated as needed, as shown in Figure 7-29.
In general, the data volume that is stored on the appliance is the major factor that
can affect the report performance. When you tune the report performance,
consider the following points:
Define the purge process to run nightly.
Configure the data retention period to the minimum that is allowed by your
business requirements.
Record Full SQL only when it is necessary (for example, when monitoring
sensitive objects or when monitoring privileged users). Full SQL tables can
add data volume quickly.
Reduce the period of the report to have a positive effect on the report run
time.
Analyze the MySQL database performance once a month to update the index
cardinality by clicking Perform Maintenance Actions → TURBINE analyze.
(A conceptual illustration of this action follows this list.)
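The following sketch shows, conceptually, what the TURBINE analyze action accomplishes. The table names are internal Guardium tables that are mentioned elsewhere in this chapter; on a real appliance, you run the GUI action rather than SQL statements directly:
-- ANALYZE TABLE refreshes the key distribution (index cardinality) statistics
-- that the MySQL optimizer uses when it plans report queries.
-- Table names are examples only; the GUI action covers the internal database.
ANALYZE TABLE GDM_ACCESS, GDM_SESSION;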
By reducing the merging days, you reduce the data volume that the query works
with and improve query performance. (This is true only for interactive reports.) If
you run reports in the background by using the ad hoc option, the background
report process creates its own set of merged data to match the report period.
Background reports run the same way as audit process reports.
An audit process is a mechanism that allows you to submit a group of tasks to run
asynchronously, on a predefined schedule or on demand, and to forward the run
results to a group of predefined receivers automatically.
There are a few factors that can influence the performance of the audit process.
First, all of the considerations that are described in 7.1.3, “Interactive reports” on
page 266 about how to build efficient queries are also applicable to the audit
report process. These considerations become even more important because the
reports that are running as tasks in the audit process typically deal with large
data volume. However, there are a few significant differences in the way the audit
process reports are run.
The results of the interactive reports are kept in the memory and last only while
they are displayed on the monitor. When you move to another page, the results
are gone. If you want to return to the report results, you must rerun the report.
The results of the audit process reports are not deleted at the end of the run and
are stored in the internal database for later view, as needed. Each line of the
audit process report is stored as a row in the table in the internal database. This
table can grow large quickly and might eventually fill up all available space in the
database and affect system performance. An internal process is built in to keep
the table size under control. This process is based on some simple rules that
users must follow to make sure that the old data is deleted in time and the report
result table does not grow too large.
First, configure how long you want to keep your results under the Audit Process
Definition, as shown in Figure 7-30 on page 274.
The default setting of five runs means that only the results of the last five runs of
the report are kept; when you run the report again, the new results are recorded
and the oldest run results are deleted. You can use a number of days (for
example, 30 days) instead of a number of runs. Configure this parameter to a
value that meets your business needs, but not so large that the report results
table grows excessively.
The second setting is related to the designated result receivers and their required
action. The report results must be reviewed (and optionally signed) by receivers
before they can be purged. Figure 7-31 shows that the results are sent to admin
user for review.
Note: Report results that are not reviewed or signed off can fill up the database
and affect report performance.
Some customers use the audit process reports to transfer data to other systems
in their environment for integration and reporting purposes. These types of
reports tend to be large with hundreds of thousands of rows in a single report
and can take some time to complete. To improve the performance of these
reports, avoid the use of ORDER BY, GROUP BY, or HAVING statements
wherever possible. In particular, the ORDER BY statement might affect
performance dramatically on large reports (over 100000 rows).
An audit process can run on any appliance, collector or aggregator. However,
audit processes typically are run on aggregators.
On aggregators, the audit process does not run on the default database but on
the ad hoc database subset that is created simultaneously. This database subset
includes only data from days that are required by the report. This significantly
reduces the data volume that the audit process reports work with and reduces
report run time.
Data that is used by reports is transferred from collectors by the daily import and
merge process. Reports should be run after the daily import and merge process
so the report can use the most current data.
Typically, the import and merge process runs shortly after midnight. Allow
enough time for the import and merge process to complete and schedule the
audit processes to start at early morning hours to avoid process congestion.
You can have a full backup or a “snapshot” of the Guardium server. In virtualized
environments, a backup also can be done by making an actual snapshot of the
Guardium machine. Restoring data from the Guardium system backup replaces
all existing data that is stored on the appliance with the data from the backup file.
As a result, all activity that is collected after the last backup is lost. For more
information about the granular (day-by-day) restoration of the data, see
Chapter 8, “Disaster recovery” on page 313.
In general, the largest component of the backup is data that directly correlates to
the MySQL database usage. The higher the MySQL database usage (% used),
the larger the system backup file.
One method to check the MySQL database and /var partition usage is through
the Buffer Usage report, as shown in Figure 7-32.
Alternatively, you can check the space utilization through System View, as shown
in Figure 7-33 on page 277.
To verify that the backup completed successfully, complete the following steps to
check the Aggregation archive log:
1. Log in to the Guardium graphic user interface (GUI) as Admin.
2. Select Guardium Monitor → Aggregation/archive Log.
3. Sort by “backup” type.
The log file is also accessible through the fileserver utility. The corresponding
log file is turbine_backup.log.
The purge operation should be done on a regular basis to free up space and
speed up access operations on the internal database. The default purge value is
60 days. By default, purge is scheduled at 5:00 a.m. daily.
The amount of data that can be stored on the appliance depends on many
criteria, including appliance type, disk space, and policy. The purge period must
be adjusted to reflect the optimal balance between data accessibility and quick
response time of the system process. In a typical implementation, best practices
are to keep the data for 30 days or less.
You can reset the purge period through Data Archive and Data Export, as shown
in Figure 7-34 on page 278.
There is no warning when you purge data that was not archived or exported. The
purge operation does not purge restored data whose age is within the “do not
purge restored data” time frame that is specified on a restore operation window.
To verify that the purge is running on your system and to check its completion
status, click Aggregation/Archive Log under the Guardium Monitor tab in
Guardium GUI, as shown in Figure 7-35.
The smooth and efficient operation of a Central Manager unit is critical to the
overall Guardium system performance. In this section, we describe some
considerations about Central Manager efficiency and maintenance.
Guardium definitions
Central Manager houses most of the definitions of all of the units that report to it.
When users submit any report, query, or audit process on any managed unit in a
federated (centrally managed) environment, definitions of this activity are
retrieved directly from Central Manager. Therefore, latency between Central
Manager and its managed units can be a contributing factor for potential user
interface slowness on the corresponding managed units.
Users can use Central Manager or any of its managed units to modify those
definitions. Regardless of the appliance where the definition changes were
made, updated content (with an exception of Policies and Groups) is immediately
available on all the appliances across the federated environment.
You can deploy a policy by using the installation policy option by clicking Data
Management → Central Manager → Central Manager, as shown in
Figure 7-37.
Guardium Version 9.0 patch 50 and later increase the memory threshold for the
Tomcat server, which helps increase the virtual memory capacity of the Central
Manager appliance to handle larger result sets. This is especially important for
the 64-bit version.
Bug fixes that are committed into the current version are always merged into
all future versions. Depending on the critical nature of the issues, some fixes also
are back-ported into earlier supported versions. Back-porting decisions are
typically made on a case-by-case basis. Because not all changes are
automatically back-ported, it is important to keep up with the latest versions and
Guardium Patch Update (GPU) patching.
Guardium software changes within the same major version are delivered by
one of two packaging methods: ad hoc patches or GPU fix packs. Though the
content and purpose of these two packages are different, they are produced by
using the same Guardium patching mechanism.
All Guardium patches are encrypted and can be applied only to the
corresponding version of the Guardium software.
Ad hoc patches
Ad hoc patches provide temporary relief to the customers with urgent, critical, or
prevalent Guardium software issues. In addition, ad hoc patches are used to
address configuration and data manipulation requests for specific customer
needs.
Ad hoc patches contain only the modules that fix a specific customer issue. Ad
hoc patches often depend on the latest GPU version of the major code version.
Modules, which are included in ad hoc packages, contain all of the modifications
that were previously made to the same module within the same version.
The distribution vehicle that is most often used for ad hoc patches is the support
PMR system. When an ad hoc patch is uploaded to the corresponding PMR, an
email notification is automatically sent to the customer.
GPU patches
GPU is a cumulative fix pack that contains all the software modifications,
database changes, and security updates that are committed into the
corresponding version since the GA release version. GPU must be installed on
the corresponding main version of Guardium. For example, GPU V9.0p50 can be
installed only on a V9.0 appliance. However, within a version, the latest GPU
patch can be installed on top of any Guardium patch level.
In GPU patches, database changes and operating system upgrades are kept to a
minimum so that GPU upgrades remain a relatively quick and easy way to update
the Guardium software.
Guardium GPU fix packs are released to IBM Fix Central quarterly. To benefit
from the latest version of the Guardium software, install GPU patches in a timely
manner.
Fix Central contains the latest versions of Guardium Agents (such as S-TAP,
Configuration Audit systems, Guardium Installation Manager), GPU fix packs,
Database Protection Knowledgebase Subscription (DPS) updates, and
Appliance upgrade bundles. Those changes often become available a few
months after the GA release. When new versions of a software component are
released to Fix Central, the version in Passport Advantage of the same
component becomes obsolete. Therefore, it is important to always download the
latest versions of Guardium software directly from Fix Central, which is available
at this website:
http://www-933.ibm.com/support/fixcentral/
Note: Fixes for certain Guardium components are not released to Fix Central.
These fixes include customer software licenses, appliance ISO images,
upgrade patches, and release documentation. These components are
available from the Passport Advantage at IBM Passport Advantage site, which
is available at this website:
http://www-01.ibm.com/software/lotus/passportadvantage/
Before you install patches or distribute patches to the managed units through
Central Manager, the patches must be uploaded to the Central Manager appliance.
You can upload patches to the Guardium system by using one of the following
methods:
The fileserver utility
Central Manager distribution
CLI patch installation command
You also can distribute the patches through the Guardium Patch Distribution
function in the Administration Console by clicking Administration Console →
Central Management → Central Management.
Guardium can distribute patches to an individual target appliance or to all
managed units simultaneously. However, for a Guardium environment that has a
large number of appliances, distribution management can be a challenging task
and requires a more systematic approach. In a large enterprise environment,
appliances and managed units can be grouped by geographical or
business-related areas by using the Guardium Group Setup in Central Manager.
A managed unit can belong to multiple groups. The default ALL Units group on
the Central Manager contains all of the managed units.
An example is a customer with three major divisions. Each division has separate
operational schedules and maintenance time frames. Guardium agents are on
Linux and z/OS systems that are in two different geographic regions. To
accommodate all aspects into the enterprise solution, appliance grouping can
include considerations for associated division, region, and agent operational
system.
To create a group, click Group Setup at the bottom of the Central Management
window, as shown in Figure 7-41 on page 286. Enter a group name and click Add.
To assign units to a group, in the Central Management window, select the
managed units that you want to assign and click Group Setup to open the Group
Setup window. A list of the selected units and groups is displayed. Select the
group to which you want to add the units and click Update groups, as shown in
Figure 7-42.
By using the patch distribution function, you can run the remote installation
immediately by using the Run Once Now option or schedule it for a later
time.
Note: The show system patch available command displays patches that are
uploaded by using the fileserver utility or through Central Manager
distribution only. If the patch is uploaded by using a different method (directly
from CLI command), run the store system patch install command to see
and install the patch.
Note: Make sure to close the fileserver command after you upload the
patch, as shown in Figure 7-39 on page 284. If the file server is not closed
properly or times out during the uploading process, the patch might not be
visible when you run the show system patch available command.
To install the uploaded patches, run the store system patch install sys now
command, as shown in Figure 7-44.
To upload and install a patch directly from the CLI without pre-uploading it, run
the store system patch install command with the file transfer flags. This
command supports the SCP and FTP patch transfer methods and provides
interfaces to upload patches directly from CD and DVD.
Figure 7-45 shows an example of uploading the Pre-GPU Health Check patch
9.0p9997 to our lab collector by using the SCP transfer method. The CLI allows
users to abbreviate words if it can identify a unique value. In this example,
“install” was abbreviated as “in”.
Figure 7-45 Beginning of the store system patch install scp command
Upon successful file transfer, the user is prompted to install the uploaded patch
to the current system.
MustGather procedures
MustGather includes procedures for gathering diagnostic information for specific
problem areas. The MustGather in Version 9.0p50 contains the following
diagnostic functions:
agg_issues: For issues that are related to aggregation processes
alert_issues: For issues that are related to alerts
app_issues: For issues that are related to GUI
audit_issues: For issues that are related to audit processes
backup_issues: For issues that are related to the backup process
cm_issues: For issues that are related to central management functionality
miss_dbuser_prog_issues: For issues that are related to missing database
user and source programs
purge_issues: For issues that are related to purge process
scheduler_issues: For issues that are related to scheduler functionality
sniffer_issues: For issues that are related to sniffer functionality
system_db_info: For issues that are related to appliance space and the
performance of databases and operating systems
You can run MustGather commands from the CLI. Figure 7-47 shows running the
support must_gather sniffer_issues command.
When a MustGather command is run through the CLI, it displays the location of
the output files. The default location of the output files is the
./must_gather/<issue>_logs directory.
Part of the MustGather output is an archived file (with a .tgz extension), which
includes all of the collected diagnostic results. This file can be downloaded easily
and analyzed by experienced Guardium users and administrators through the
fileserver utility. If the issue requires further review by the Guardium support
team, upload this .tgz file to the RETAIN® (PMR) system.
In Figure 7-47, you can see that the sniffer_issues MustGather produces the
sniffer_20130531.tgz archive file.
Figure 7-48 on page 291 shows the fileserver view of the sniffer_issues
MustGather output, including the .tgz file.
The .tgz file includes diagnostic files, logs, and reports that are relevant to the
sniffer operation and the generic information about the health of the appliance,
operating system (system_output.txt) and onboard MySQL database
(db_output.txt).
Figure 7-49 shows the content of the sniffer_issues MustGather output .tgz
file.
The database diagnostic file db_output.txt contains the size and total usage of
the database, currently running queries, and a list of the most often used tables.
Note: MustGather collects log data for the last two days only. Therefore, it is
important to run the diagnostic gathering procedure during or right after the
issue occurrence.
By using the Support Information Gathering window, users can collect the
diagnostic data and send an email with the output directly to a specified recipient.
To use this option, the user must choose an email address in the “email” field.
GUI users can also schedule diagnostic gathering to run at a specific preset date
and time. This option should simplify collecting diagnostic information during off
hours and weekends.
By using the GUI MustGather window, users also can submit several diagnostic
commands together. (The diagnostic procedures are run sequentially.) All
output files are sent to the email recipient upon completion of the entire process.
The email includes the output packages of all submitted diagnostic procedures
and optional other parameters, such as the PMR number and issue description.
To download any of the MustGather outputs, click the diskette icon on the
corresponding line and then click Save in the pop-up window.
You also can start MustGather remotely from Central Manager appliance. This
option allows you to submit or schedule diagnostic testing remotely on single or
multiple managed units simultaneously, as shown in Figure 7-54 on page 296.
The guard_diag command is SQL Guard version independent. It does not require
S-TAP to run or even to be installed on the system. However, the diagnostic
information that is collected without active S-TAP is limited to corresponding
system information only.
By default, the script output is placed in the /tmp directory. You can specify an
output directory as an argument. All collected data is combined into a single .tar
file, as shown in the following example:
/var/tmp/diag.<hostname>.<date>.tar.gz
Typical output of the guard_diag utility contains the following Guardium and
system operational logs, configuration details, and some platform-specific
information:
The uname -a output
List of installed kernel modules
The top output (or its equivalent)
Processor number and type
The lsof output
The netstat output
Disk free statistics
The uptime output
The ps -ef output
Copy of /etc/services
Contents of /etc/inittab
Platform-specific information (release, memory, boot information, and so on)
You also can start diagnostic testing directly from Guardium GUI on the
appliance. When you are starting from the GUI, the output is placed in the
/var/tmp directory.
The output of the diag.bat utility is placed in the diag subdirectory. The output is
compressed into a single .zip file. Diagnostic data is logically grouped among
the following files:
stap.txt
tasks.txt
system.txt
evtlog.txt or evtlog2008.txt
reg.txt
Similar to guard_diag, the output of the diag.bat utility contains Guardium and
system operational logs, configuration details, and platform-specific information.
The following diagnostic information is collected by diag.bat:
Content of guard_tap.ini
The Guardium S-TAP installation log
All running tasks
List of all installed kernel drivers
The diagnostic information that is collected by the diag.bat utility is most useful
when it is collected during the problem occurrence or reproduction. When an
issue cannot be reproduced, it is best to run the diag.bat utility immediately after
the issue occurs.
Note: The diag.bat output is required by support for all S-TAP related issues.
Providing this output ahead of time also helps to speed up support’s
investigation.
Note: The Guardium archive function creates signed, encrypted files that
cannot be altered. Do not change the names of the generated archive files.
The archive and restore operations depend on the file names that are created
during the archiving process.
There are times when there is a need to review historical data. When such a
need arises, archive files can be restored on the appliance to restore data.
Note: On Guardium version 8.2 and later, the incremental archive strategy is
used only for the archive that is taken from a collector. Full archive is always
taken from an aggregator for static data to simplify the archive restore
process.
Restored audit data can be viewed as the regular audit data by using interactive
or audit process reports.
Note: The Guardium archive function creates signed, encrypted files that
cannot be tampered with. Do not change the names of the generated archive
files. The archive and restore operations depend on the file names that are
created during the archiving process.
The archive and export activities use an operating system encryption algorithm
to create encrypted data files. To restore the archived data, the restore system
must have the same encryption algorithm that the archive system used.
The archived files can be restored by retrieving them through the archive catalog.
The Guardium catalog tracks where every archive file is sent so that the archive
files can be retrieved and restored with minimal effort at any point in the future. A
separate catalog is maintained on each appliance. A new record is added to the
catalog whenever the appliance archives data or results.
To restore the data of April 27 and April 28, click Administration Console →
Data Management → Data Restore, as shown in Figure 7-61. Specify the date
range that you want to restore and, optionally, the host name of the appliance
where the archive files were taken, and click Search.
Select the files that you want and then click Restore.
Because the target appliance is not the one where the archive files were taken,
the archive catalog does not have the archive file entries. To add records
manually to the catalog, click Administration Console → Data Management →
Catalog Archive, specify the date range, and then click Search.
Review the returned results, as shown in Figure 7-63 on page 307. If the archive
files that you want are not in the list, click Add to add entries manually.
In the Add location panel (see Figure 7-64), enter the required information and
click Save to add the new entry to catalog.
To export catalog entries from the appliance where the archive files were taken,
click Administration Console → Data Management → Catalog Export, select
file entries that you want to import, and then click Export, as shown in
Figure 7-65.
The tool generates a file that can be imported to the appliance where data is
restored. To import the file, click Catalog Import, upload the catalog entry file,
and select the green check box to import the previously exported entries, as
shown in Figure 7-66 on page 309.
After the entries are added to catalog, use the data restore procedure that is
described in “Scenario 1: Restoring a few days of recent data” on page 305 to
restore the archive files.
Note: The archive files from collectors or from Guardium system before
Version 9.0 are archived by using incremental archive strategy. When you are
restoring such archive files, you must always start from the first of the month
and work your way up to the days you need. For example, if you want to
restore data from 13, 14, and 15 of May from a collector, start with May 1, 2, 3,
and so on until May 15.
If you restore archived files that were taken from an aggregator running V8 or
later, you can restore only the days that you want; that is, May 13, 14, and 15.
Note: We can use only one backup file in such a restore scenario because
restoring from a backup file overwrites all of the existing data.
The results archive that is taken through the Audit Process Builder includes the
following information:
Audit tasks results from the following sources:
– Reports
– Assessment tests
– Entity audit trail
– Privacy sets
– Classification processes
View and sign-off trails
Accommodated comments from workflow processes
The results sets are purged from the system according to the workflow process
definition.
To archive the results set for a particular audit process, select Archive Results in
Audit Process Definition of the Audit Process Builder, as shown in Figure 7-67.
The results sets can be restored only into the investigation center. You can set up
an investigation center by creating a special investigation user account on a
Guardium appliance.
For more information about setting up and using an investigation center, see the
Investigation Center topic in the online help of the Guardium GUI, which is
available at this website:
http://pic.dhe.ibm.com/infocenter/igsec/v1/index.jsp?topic=%2Fcom.ibm.guardium.software.app.install.doc%2FtopicsV90%2Fsoftware_appliance_installation_guide.html
Backup can be started from the user interface (UI) or from a command-line
interface (CLI).
For more information about data archive and restore processes, see 7.4,
“Restoring audit data for forensic analysis” on page 303.
or
>store storage-system TSM <backup|archive> on
Before you restore from Tivoli Storage Manager, the dsm.sys configuration file
must be uploaded to the Guardium appliance by using the CLI command.
Before you restore from EMC Centera, a PEA configuration file must be
uploaded to the Guardium appliance through the Data Archive panel.
The following items are not backed up and must be installed manually to
complete the disaster recovery process:
License: The license is not installed by the backup restore; therefore, it must be
installed manually.
SSL certificate (optional): The SSL certificate is not backed up; therefore, it must
be installed manually.
In this section, we describe the detailed disaster recovery steps for each appliance
type and usage scenario.
As an alternative to restoring a data backup, you might consider re-aggregating
data from the collectors, depending on the number of collectors and the retention
period requirements on the collectors and the aggregator.
We describe the best practices for Guardium Version 8.2 to Version 9.1p100
upgrade process for 32-bit and 64-bit versions. We also describe the differences
in the approaches for each environment.
Neither Red Hat nor MySQL provides an upgrade path to move to their newer
software versions. This limitation dictates the following differences in the upgrade
approaches that are used for the 32-bit versus 64-bit platforms:
On 32-bit platform:
– For V9.0 and V9.0p02 appliances, Guardium content is delivered through
Guardium Patch Update (GPU) patch to provide an upgrade path to the
new V9.1p100 (32-bit) version.
– For V8.2 appliances, Guardium provides the V8.2 to V9.1p100 (32-bit)
upgrade bundle patch, which is applicable on V8.2 appliances regardless
of the patching level. The bundle patch allows a direct upgrade to the
V9.1p100 (32-bit) version level.
On 64-bit platform:
– For V9.0p50 64-bit appliance, Guardium content is delivered through
V9.1p100 (64-bit) GPU patch to provide an upgrade path.
– For new appliances or appliances before V9.0p50, content is delivered
through V9.1p100 (64-bit) product ISO, which requires a rebuild of the
appliances. No upgrade path is available for new 64-bit system installation.
– Any system backup that is generated on V8.2 or later version can be
restored on V9.1p100 32-bit and 64-bit system.
Figure 9-1 shows the compatibility between different versions of V8.2 and V9.0.
9.2.2 Aggregation
A 64-bit Guardium appliance cannot aggregate to any 32-bit Guardium
aggregator. It can aggregate only to a 64-bit aggregator, as shown in
Figure 9-2.
For customers who are upgrading to 32-bit Guardium systems, the limitations that
are shown in Figure 9-3 are not relevant. This limitation is also not relevant to
customers who do not use a Central Manager and Aggregation combination
appliance. However, for customers who use a Central Manager and Aggregation
combination appliance and are planning to upgrade to a 64-bit Guardium system,
this limitation dictates the upgrade strategy for the entire corresponding enterprise
environment. To maintain uninterrupted communication between appliances on
different versions, customers are required to proceed with a two-step upgrade
approach in which all of the appliances must be upgraded to the V9.1p100 32-bit
level before proceeding with the 64-bit implementation.
Figure 9-4 on page 327 shows another view of mixed environment compatibility
limitations for Central Manager and Aggregator functionality.
Choosing the right upgrade strategy is one of the most important decisions to
ensure a smooth and successful upgrade. The strategy often must be adjusted to
fit the operational flow of your organization and to work around multiple known
and unpredictable constraints. We also strongly suggest choosing the strategy
that allows the quickest way to complete the upgrade of all of the appliances
across the entire estate.
Note: Hybrid stage refers to the transition period during the upgrade where
customers have a hybrid Version 8.2 and Version 9.x Guardium solution.
9.4.3 Strategies
The general upgrade strategies for a Guardium enterprise solution can be
represented by two models, Horizontal and Vertical, or some mixture of the two.
Horizontal model
With the horizontal model, you upgrade all of the appliances of the same
type before moving on to the next appliance type. Upgrade all of the Central
Managers first, then upgrade all of the aggregators, then all of the collectors, and
finish with the agent upgrades, as shown in Figure 9-6 on page 330.
Vertical model
Figure 9-7 on page 331 shows the vertical upgrade approach. This method
requires the upgrade of a single Central Manager (1 in the figure), followed by an
aggregator that is managed by this Central Manager, and then by all the
collectors that export to this aggregator. You then move on to the next aggregator
and its collectors until all of the managed units are upgraded.
When all of the managed units of the Central Manager (1 in the figure) are
upgraded, the Guardium administrator should proceed with upgrade of Central
Manager (2 in the figure) and its managed units following the same upgrade
order.
This approach also minimizes the time that aggregators and collectors are
required to be in hybrid mode. Staying in hybrid mode might introduce
aggregation performance effects that are related to the continuous dynamic
conversion of the data into the newer version of the database structures.
Two-Step approach
The Two-Step procedure approach includes the following steps:
1. Upgrade all of the appliances to V9.1p100 32-bit patch level by using the
suggested vertical model.
2. Rebuild the Central Manager. Then, follow the vertical model to rebuild all of
the appliances to the V9.1p100 64-bit level by using the V9.1p100 64-bit ISO
image.
Note: For information about post-9.1p100 patches that might be required before
the restore db-from-prev-version command is run, see the Flashes and Alerts
section of the IBM Guardium Customer Support website at:
http://www-947.ibm.com/support/entry/portal/alerts/software/information_management/infosphere_guardium?productContext=-168397159
Direct approach
The direct approach (as shown in Figure 9-9) provides a logistical way to work
around the V9.1p100 compatibility constraints with earlier versions. In fact, the
first action of the direct approach is the same as in the two-step approach: it
requires upgrading the Central Manager to the V9.1p100 32-bit level. However,
at this point, the rest of the managed unit appliances can be rebuilt directly to the
V9.1p100 64-bit level.
Note: To avoid loss of data collection, you can redirect S-TAPs to different
collectors. However, both of these collectors must export to the same
aggregator.
In this section, we also describe the following most common upgrade methods:
Upgrade appliances by using the Guardium V8.2 to V9.1p100 (32-bit) upgrade
bundle.
Rebuild appliances with the V9.1p100 (64-bit) ISO image and restore the V8.2 or
V9.0 backup.
Note: Do not restart the appliance manually until the installation of all patches
is complete.
2. Install the upgrade bundle for V8.2 to V9.1p100 (32-bit) upgrade. The fix pack
uses the following name:
InfoSphere_Guardium_v8.2_to_9.1p100_Upgrade_Bundle_<Timestamp>
For the appliances that are already on V9.0, upgrading to V9.1p100 requires the
installation of the following patches:
V9.0 Health Check for GPU and Upgrade
V9.0p50 GPU patch (32-bit)
The V9.0p50 GPU patch is applicable on the appliances with any V9.0 patch
level.
Both patches are available for download through the following IBM Fix Central
website:
http://www-933.ibm.com/support/fixcentral/
For the 32-bit architecture, this method implies rebuilding by using the original
V9.0 ISO image, followed by the installation of the V9.1p100 GPU upgrade patch.
This is the only method to upgrade to a Guardium version with the 64-bit
architecture. Guardium provides a direct V9.1p100 64-bit product ISO image.
Authorized customers can find the 9.1p100 64-bit ISO image on Passport
Advantage.
Note: For data restore and recovery purposes, make sure to generate a
system backup before you begin rebuilding the appliance.
Note: Another post-9.1p100 patch might be required before you run the
restore command. For more information, see the IBM Guardium Customer
Support page (Flashes and Alerts section), which is available at this website:
http://www-947.ibm.com/support/entry/portal/product/information_management/infosphere_guardium?productContext=-168397159
Note: All of the reports and queries that require pre-upgrade data must be
maintained on old appliances for the time that is equivalent to the purge
period.
However, this method requires at least a temporary expansion of the Guardium
environment with newly configured physical or virtual appliances, which requires
more maintenance effort during the transition period.
Note: Starting with Version 8.x, Guardium supports live (boot-less) KTAP
upgrade, which does not require a reboot of the database server after the
installation of a new version of S-TAP is complete. The live update mechanism is
controlled through the GUI with the KTAP_LIVE_UPDATE parameter or by the
BUNDLE-STAP/KTAP installers by using the guard-stap-update utility.
Starting with V9.0p50, pre-upgrade Health Check is also a prerequisite for any
GPU patch installation.
Alternatively, you can also access Guardium CLI and run the show system patch
installed command to check status, as shown in Figure 9-13.
In both cases, the successful completion status is indicated by the “DONE: Patch
installation succeeded” message.
Health check logs are in the diag/current folder, with a file naming
convention such as health_check_<timestamp>. Figure 9-15 shows a sample log
list.
Figure 9-16 shows the content of the Health Check log for a successful upgrade.
This indicates that all checks passed the tests and the appliance is ready for
upgrade.
Figure 9-19 shows a sample patch installation status from the CLI.
You also can access the upgrade installation log by using the CLI file server
command, as shown in Figure 9-20.
Error status often requires a detailed review of the log’s content to determine the
root cause of failure. You can access the patch upgrade log file through the file
server, as shown in Figure 9-24.
If an upgrade fails, the end of the log file indicates the failure, as shown in
Figure 9-26.
Upgrading MustGather
In many cases, upgrade issues require the attention of support personnel. As is
the standard procedure, support engineers require that the basic diagnostic
information that is related to the issue be attached to the PMR. The information
that is gathered includes basic version, patch level, system information, and
installation log files.
The output of MustGather is archived into a single .tar file for the convenience
of downloading the file from the appliance and uploading it to the
corresponding PMR.
In this section, we describe how to integrate database change reconciliation with
the overall change management process and how to reduce the time that this
reconciliation requires.
Figure 10-2 shows the concepts for Enterprise Integrator or data upload.
Figure 10-2 Three-step process to understand the key concept of data upload
Example 10-1 Sample script to add a table to upload into the Guardium appliance
-- Create ChangeRequest table to simulate troubleticketing system
-- The specific table you will need depends on your ticketing system.
-- Please consult the Change Ticketing system vendor for details
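-- A CREATE TABLE statement is not shown here. A minimal definition that
-- accepts the following INSERT statements might look like this; the column
-- data types are assumptions for illustration only.
CREATE TABLE ChangeRequest (
    ChangeID    VARCHAR2(10),
    NAME        VARCHAR2(64),
    REQDATE     VARCHAR2(10),
    EXPECTED    VARCHAR2(10),
    DESCRIPTION VARCHAR2(256),
    AFFECTED    VARCHAR2(64),
    APPROVED    CHAR(1),
    COMPLETED   VARCHAR2(10)
);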
Insert into
ChangeRequest(ChangeID,NAME,REQDATE,EXPECTED,DESCRIPTION,AFFECTED,APPROVED,COMPLETED)
VALUES('1279','BILL SMITH','05-21-10','05-23-10','Modify Schema to include new product sales',
'REVENUES','Y','05-23-10');
-- The values in this second insert are illustrative placeholders for ticket 1280.
Insert into
ChangeRequest(ChangeID,NAME,REQDATE,EXPECTED,DESCRIPTION,AFFECTED,APPROVED,COMPLETED)
VALUES('1280','BILL SMITH','05-22-10','05-30-10','Add index for new product sales',
'REVENUES','Y','05-30-10');
Insert into
ChangeRequest(ChangeID,NAME,REQDATE,EXPECTED,DESCRIPTION,AFFECTED,APPROVED,COMPLETED)
VALUES('1281','BILL SMITH','05-23-10','06-03-10','Rollup calculations',
'REVENUES','Y','06-03-10');
2. In the Custom Tables tab, select Upload Definition, as shown in Figure 10-4
on page 353.
The goal is to retrieve the table definition by connecting to the external
database. The result is similar to the output of the SQL DESCRIBE statement
in Oracle.
3. In the Import Table Structure tab (see Figure 10-5 on page 354), enter the
following information:
– Entity description
This is the entity name in the query that is used later in the process. This
entity allows you to refer to the external ticket table information when you
join this with the DBA’s SQL activity.
– Table Name
This is a name for the internal Guardium table to be used in the report.
– SQL Statement
This is the SQL statement that is used to retrieve the table structure. When
the table structure is retrieved, you can report on any element within the
table. This SQL statement references the database table that was created
in the prerequisite script. In production implementations, this can be a table
within a staging database or one that is taken directly from the ticketing
system. (A minimal example follows this list.)
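In its simplest form, and assuming the ChangeRequest table from Example 10-1 (the exact statement depends on your ticketing system), the statement can be a plain select, which Guardium uses at this step to derive the table structure:
-- Retrieve the structure (and later the content) of the external ticket table.
select * from ChangeRequest;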
Select Add Datasource.
It is always a good idea to test the connection to make sure that the
credentials are correct, and to make sure that you have IP connectivity to the
database server.
Note: You can use Guardium database activity monitoring to monitor the
activity when the connection is made. The SQL request originates from the
Appliance (10.10.9.248) to the external database (10.10.9.56) in this
example.
If the SQL statement runs successfully, you see the table structure
information (ExternalTickets) that was retrieved from the external database
and used to create an internal table on the Guardium appliance, as shown in
Figure 10-10 on page 358.
We use the ID column to link with the Guardium Application Event Value
String for the unified report. Change the Group Type in the upper right column
so that it matches the Application Event Value String. You can now modify any
of the column display names if you want to change any of this information. In
our example, we leave these alone to keep them as they were imported.
Select Apply, and then Back.
This table structure is good for our purposes. We completed the process to
define the table structure. The next step is to upload the data within the external
table.
The other selection in the Maintain Custom Table menu is Manage Table Index.
Click Insert to open Table Index Definition. The pop-up window suggests
columns in the table to add to indexes that are based on columns that are used
on custom domains as Join conditions. Select the columns and click Save.
Indexes are created (or re-created).
The table engine types for custom tables and entitlements (InnoDB and MyISAM)
appear for all predefined custom databases because the data that is stored in
the Guardium internal database is MySQL-based at the time of this writing.
Complete the following steps to upload the data from the external table:
1. Select Upload Data from the Custom Tables tab, as shown in Figure 10-13
on page 361.
When you select Upload Data, the Guardium appliance connects to the external
database table and retrieves the content of this table after all of the
configuration information is provided. The information that is retrieved from
the database is then stored within Guardium. This information can be
presented in a separate report or combined with other information to produce
more meaningful reports.
To connect to the database, you must provide a data source, which has the
appropriate credentials to log in to the database server.
Select Add Datasource, and the Datasource Finder window opens, as shown
in Figure 10-14. We want to add the data source
oracle56-joe_Oracle(Custom Domain), which was defined as shown in
Figure 10-7 on page 355. Select this data source and then click Add.
2. Select Check/Repair (as shown in Figure 10-15 on page 362) to see whether
the SQL statement is valid when it is run on the external Oracle database.
You should see the “Operation ended successfully” message if the new table
structure is valid.
Select Apply.
Note: You have many options for how you want to clean up the external
database table. One option is to use a SQL statement to delete the
contents of the external table by selecting DML command after upload.
This option gives you tremendous flexibility. You can also overwrite all the
previously imported Guardium data when you upload by selecting
Overwrite. Finally, you can schedule this process to happen automatically
by selecting Modify Schedule.
You should see the “Operation ended successfully” message when the table
information is populated. This action also shows how many entries were in
the table. In our example, there were 12 entries in the ChangeRequest table.
Select Back.
Data is now uploaded to the Guardium Appliance.
The following special domains are found at the bottom of the window:
– [Custom] Access: This domain captures database traffic.
– [Custom] Exceptions: This domain captures SQL Errors, Failed Logins,
and so on.
– [Custom] Policy Violations: This domain contains security violations that
were triggered by the Policy Rules.
By linking the uploaded information with these custom domains, you can unify
external reporting information to make the reports more meaningful to your
stakeholders (auditors, information security, DBAs, and so on).
We link information with the [Custom] Access domain.
The goal is to create a domain of information for a report with the newly
uploaded data within Guardium.
We create two domains. The first domain consists of only the imported ticket
information. The second domain consists of the imported ticket information
and the monitoring structure for the DBA activity when they place their marker
within the SQL transactions.
The ExternalTickets entity was moved to the right under Domain entities, as
shown in Figure 10-19 on page 366.
You successfully created the custom domain that includes the external ticket
information only. Now you can create the second domain of information to report
on. This consists of the imported ticket information and the monitoring structure
for the DBA activity when they place their marker within the SQL transactions.
Figure 10-21 Clone of the [Custom] Access domain: change domain name
If you are familiar with Guardium, the Domain entities on the right side are
entities that are shown in the Query builder where you create your reporting.
3. Click Apply. You see the successful message when the Access Domain is
cloned successfully.
Note: It is critical to have the outer join in this linkage so that if the DBA
does not enter a change ticket or enters an incorrect ticket, these items
show up in this report.
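To see why the outer join matters, the following conceptual sketch of the linkage can help. This is not the SQL that Guardium generates; the app_events table and its event_value_str column are hypothetical stand-ins for the Application Events entity:
-- Rows of DBA activity are kept even when no matching ticket exists in the
-- imported ChangeRequest data, so missing or mistyped ticket numbers still
-- appear in the unified report (and can be highlighted, as shown later).
select a.event_value_str as ticket_entered,
       t.ChangeID,
       t.Description
from   app_events a
left outer join ChangeRequest t
       on t.ChangeID = a.event_value_str;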
You successfully created two domains of information. Now you must create a
query for each domain. Then, you can put these reports on the portal.
4. Enter the following information in the New Query - Overall Details tab, as
shown in Figure 10-28:
– Query Name: ExternalTickets
– Main Entity: ExternalTickets
6. Select Add to My New Reports. When this is completed, you receive the
“Report added to My New Reports pane” message that confirms that the new
report ExternalTickets is added to My New Reports pane in the GUI. Select
Back.
By adding the report to the Portlet, you can now view the uploaded
information.
Repeat the following steps to define the query for the second domain, External
Tickets and DBA Activity:
1. Select Custom query builder from the Custom Report tab.
2. Search the External Tickets and DBA Activity domain that we created, as
shown in Figure 10-30 on page 374.
3. Select Search, then select New from the Query Finder tab, as shown in
Figure 10-31.
5. Select the appropriate entities and add the attributes to the query, as shown in
Figure 10-33 on page 376. This report filters for only DDL changes (Create,
Alter, and Drop). We use the following entities:
– Access Period: Timestamp
– ExternalTickets: Description and ID
– Application Events: Event Value Str and Event User Name
– Client/Server: Client IP, Server IP, and DB User Name
– SQL: SQL
Click Save → Generate Tabular → Add to My New Reports → Done.
This report shows the external ticket information and the DBA SQL activity.
You successfully added two new reports to the “My New Reports” pane.
Figure 10-34 Customize the date range to show information in the report
Figure 10-35 Updated the time frame to show data in the report
Figure 10-36 shows that the external tickets were imported correctly.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
Table created.
'GUARDAPPEVENT:RELEASE
----------------------
GuardAppEvent:Released
SQL>
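The session output above ends with the GuardAppEvent:Released marker. As a sketch of how a DBA session can register a ticket through the Guardium application events API before making a change (the event type, ticket value, user name, and the ALTER TABLE statement are illustrative, not part of the book's example), the session might contain statements such as the following:
-- Register the change ticket for this session as a Guardium application event.
select 'GuardAppEvent:Start', 'GuardAppEventType:ChangeTicket',
       'GuardAppEventStrValue:1279', 'GuardAppEventUserName:BILL SMITH'
from dual;

-- Perform the authorized change (illustrative DDL).
alter table revenues add (product_code varchar2(20));

-- Close the application event when the work is done.
select 'GuardAppEvent:Released' from dual;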
Figure 10-37 on page 379 shows the captured audit data that is unified with
the ticket information in a single report. Notice that the ID and Description
fields are from the ticketing system, so the auditor has an easy reference to
the authorized ticket that is associated with this SQL activity.
Figure 10-37 The DBA activity that is unified with the Ticket ID information
5. Select the External Tickets and DBA Activity report from the drop-down list
of the Report Title field, as shown in Figure 10-39. This report should be at
the top of this window because we started the name of this report with a “-” so
that it appears at the top.
Figure 10-39 Find and modify the External tickets and DBA Activity report
7. Modify the column headings in the report from Event Value Str to Ticket
Entered (this is what the DBA entered), as shown in Figure 10-41.
10.In the Report Color Mapping window (see Figure 10-44 on page 383), enter
Ticket Entered in the Column field and select the color red. Click Add to add
this entry into the background color section.
Select the CHANGEID column and select the color yellow. Click Add to add this
entry into the background color section.
If someone does not enter a ticket number in their SQL session, the entry is
highlighted in red.
If someone enters a ticket number that is not within the Change Ticket system
that was imported to Guardium, the entry is highlighted in yellow.
Figure 10-45 Complete the customization of the report by saving the parameters
2. Try the same procedure, but without any Guardium API. The entry without any
ticket is highlighted in red, as shown in Figure 10-48.
Figure 10-48 DBA activity that is highlighted in red without any approved ticket
There are other ways to automate this process to make it easier for the DBA to
enter in ticket information. The login.sql script (as shown in Example 10-3) is
one way to help enter information that can be used for this purpose.
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit
Production
DCOL
-------------------
2013-05-07 22:50:18
Table created.
SQL>
The script automatically prompts for the ticket information and another field
GuardAppEventType, which can be used in the reporting to reconcile these
database changes. This is shown next.
The result is that you are prompted automatically to enter a ticket number for your
session. This greatly automates the ticket integration process for DBAs.
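As an illustration only (a hypothetical sketch; see Example 10-3 for the actual script), a login.sql along the following lines can display the connect time and prompt for the ticket number at connect time. The substitution-variable name and prompt text are assumptions:
-- Show the connect time, similar to the DCOL output shown earlier.
select to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') as dcol from dual;

-- Prompt the DBA for the change ticket and register it as a Guardium
-- application event for this session.
accept ticket_id char prompt 'Enter the change ticket number for this session: '
select 'GuardAppEvent:Start', 'GuardAppEventType:ChangeTicket',
       'GuardAppEventStrValue:&ticket_id' from dual;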
Figure 10-49 shows the result from an audit report.
Figure 10-49 Resulting audit report for the login.sql ticket automation
For more information about different components of this process, see the
following YouTube videos:
Connection Profiling Part 1 of 3: A demonstration of how connection profiling
works in Guardium V9 GPU 50:
http://youtu.be/yRoRkAExVz0
Connection Profiling Part 2 of 3: A how to guide about configuring connection
profiling:
http://youtu.be/bm6nnATDzeU
Connection Profiling Part 3 of 3: This shows the audit process approval on
how to authorize new connections to the database, which includes the
application owner’s involvement in the process:
http://youtu.be/NwndWdCmAic
From a security best practices perspective, it is a good idea to block anything that
is not authorized until someone can review the details for the new connection.
For example, in Figure 10-50 on page 387, the application server IP address of
10.10.9.240 that uses the database user “apps” is an authorized connection to
the database server. Any other connection (that is, not from 10.10.9.240) should
be blocked. This is a simple but effective approach to securing the database.
The Guardium security policy that is required for the configuration that is shown
in Figure 10-50 is shown in Figure 10-51 on page 388. You can create a group
that is named “connection profiling list”, for example, that has all of the details of
the authorized connections.
If you want to add a new connection to the database server, you add the new
connection details into the Connection Profiling List group. Similarly, if there is a
connection that is not authorized, you remove this connection from the group. For
more information, see the YouTube video Connection Profiling Part 2 of 3, which
is available at this website:
http://youtu.be/bm6nnATDzeU
In many cases, too much information provided to the application owners
overwhelms them, and they cannot have a productive conversation with the
information security group. Providing a coarser level of information, such as a
connections report, allows the information security team to have a productive
conversation with the application owners and bridge the IT-to-business gap by
showing them the risk that is associated with some of the new connections.
In many cases, predefined service accounts have access to the database server
so that they can fix issues after going through a change control process. In other
cases, a new connection can be seen, which might be someone probing the
boundaries of their privileges and access control. These are the types of
connections that should be blocked if there is no business need for them to see
the information. This type of implementation can also prevent zero-day attacks
because you proactively block unauthorized connections that might come from a
worm or virus that is trying to spread within your environment.
For more information about this conversation between information security, DBA,
and the business owners, see the YouTube video, Connection Profiling Part 3 of
3, which is available at this website:
http://youtu.be/NwndWdCmAic
This video shows the audit process approval cycle on how to authorize a new
connection.
The worlds of security, auditing, and data touch many aspects of information
technology. Customers are storing and analyzing a tremendous amount of data.
Some studies suggest that we now generate as much data in two days as we
did from the dawn of man through 2003. It is hard to comprehend the effect of
that statement because almost everything generates data today. It is no wonder
that identifying sensitive information in this vast amount of data, and understanding
how people access the data, becomes a huge challenge.
This is why it is important to integrate with other products to help give visibility
and synergy from a security and operational perspective. Figure 11-1 on
page 392 shows a sampling of Guardium integration points with other IBM
products, from the Information Management to Security portfolio, to help
customers secure and audit their environment.
Figure 11-1 Guardium integration points with other IBM products, including databases (DB2 for LUW, i, and z; Informix; IMS), data discovery and classification (InfoSphere Discovery, Business Glossary), static data masking (Optim Data Masking), SIEM (QRadar), LDAP (Security Directory Server), event management and audit archiving (Tivoli Netcool), software distribution (Tivoli Provisioning Manager), endpoint configuration assessment and patch management (Tivoli Endpoint Manager), database tools (Change Data Capture, Query Monitor, Optim Test Data Manager, Optim Capture Replay), and transaction applications (CICS, InfoSphere DataStage)
The following questions can be asked to gauge your level of comfort and maturity
of understanding within this environment:
Is there any sensitive information that is stored in these nodes?
Who is accessing this information?
How many MapReduce jobs were run against this data?
How are privileged users defined, and what controls are in place to assure
the corporation that they are not abusing their privileges?
In the world of Big Data, one of the most critical items to be concerned about is
securing the information that you store in your Hadoop clusters. Figure 11-2 on
page 394 shows a simplistic view of the Big Data comparison.
In database security and auditing, you can monitor the SQL access. For
example, you can monitor select statements to understand who is accessing
sensitive data. In Big Data, you monitor HBASE GET commands or HDFS cat
commands. Grant statements in SQL are roughly similar to chmod and chown
commands within HDFS.
Figure 11-3 on page 395 shows that Guardium can audit a BigInsights
environment. A MapReduce job was submitted to the name node in the Hadoop
cluster. S-TAP is loaded on the name node to copy this information and send it to
the Guardium appliance in real time. Based on your security policy on the
Guardium appliance, you can proactively alert your Security Information and
Event Manager (SIEM) that an unauthorized user submitted a MapReduce job.
Figure 11-3 Auditing a BigInsights environment: clients submit MapReduce jobs to the Hadoop cluster, and the S-TAP on the name node sends the activity to the InfoSphere Guardium collector, which raises a sensitive data alert
With Version 9 of Guardium, there are predefined reports to help monitor and
audit this environment, as shown in Figure 11-4. The Unauthorized MapReduce
jobs report within InfoSphere Guardium helps monitor who is using the Cluster.
At a high level, the basic DAM functionality includes the following abilities:
Discover new databases in the network, as shown in Figure 11-5. Guardium
can scan the network to discover database servers in the network. This
function helps you identify what resources can have sensitive data that should
be protected.
Classify what sensitive data is within these databases, as shown in Figure 11-6
on page 399.
Add the database tables that contain this sensitive information to the security
policy to audit and monitor who has access to this information.
Send real-time alerts that are based on the policies that are defined.
After you discover unknown databases in the network, you can review these
databases to identify the location of your sensitive information. Figure 11-6
shows several ways to locate your sensitive information, from a catalog search,
by permissions, or searching the actual data in the database.
These types of roles identify where the sensitive information is within the
application and the tables that are inside the database. It is difficult to keep this
information constantly updated and to share it with the database administrators
or security officers who must put the appropriate controls around this data.
The ability to search for data in various ways to validate that these applications
are monitored correctly is a key element in reducing risk for the organization. The
best security model includes checks and balances, and this should be no
different when it comes to the most valuable data assets in the organization,
such as intellectual property, customer data, credit card data, and personally
identifiable information (PII). This is where you might want to put more security
controls around the sensitive information.
Figure 11-7 Blocking unauthorized access: the security policy is checked on the appliance, and the policy violation drops the connection and terminates the session
In Figure 11-7, an outsourced DBA named Joe uses sqlplus to gain access
to credit card information. A proactive security policy prevents this access
because viewing information that is inside the database is beyond his job
responsibilities. In this example, you can also add another element to
quarantine this individual for a predefined period. This method can prevent the
user from accessing anything within the database until security can validate that
their intention is not malicious. The ability to block access, even for privileged
accounts such as system in Oracle, db2inst in DB2, or SA in SQL Server, is a
powerful tool to help ensure that the proper security policies are in place to
protect your data. This can be done with a single security policy across your
heterogeneous environment, without changing your application or database
configuration.
Figure 11-8 Dynamic data masking: unauthorized users issue SQL through the application servers, the S-TAP enforces cross-DBMS policies that redact the result set, and the actual data that is stored in the database (DB2, MySQL, Oracle, Sybase, SQL Server, and so on) is protected without database or application changes
In Figure 11-8, the unauthorized user tries to access credit card information, but
the policy that is enforced with S-TAP masks the result set so that only a partial
PAN is displayed. This masking feature is an important part of the overall security
policy that can be implemented to protect your sensitive data.
Figure 11-9 on page 403 shows the three areas of the assessment process. The
vulnerability assessment architecture contains three components: database
layer assessment tests, operating system assessment tests, and behavioral
activity assessment tests.
The first tier of the assessment is at the database layer, where database
permissions, roles, configuration parameters, and the database version are
checked. One example of a permission is granting the DBA role to someone.
This is a powerful permission and should not be granted lightly because that
person has access to all of the information that is inside the database. Another
common configuration item is the database version. In many large organizations,
it is important to understand what version and configuration of the database is
running in production. The vulnerability assessment module allows you to verify
this information.
The second level of vulnerability assessment occurs at the operating system tier.
The database is like any other application that runs on the operating system, and
there are parameters at the operating system level that control the security of the
database application.
The third area of vulnerability assessment is concerned with the behavior or user
activity of the database server. In this area, it is important to understand usage
patterns to identify any potential area of compromise or misuse of the database
system, as shown in the following examples:
Some customers configure their systems so that only a single IP address can
log in to the database server with DBA privileges. If a database administrator
logs in to the system from many different IP addresses, this is a concern
because it suggests that the account might be shared by many individuals or
that the security policy is compromised.
Database activity that uses privileged accounts for most of the application
work violates the concept of least privilege, which states that you should have
only enough privilege to perform your task, without privileges that exceed your
job responsibility.
An excessive number of SQL errors is another key behavioral test. It can
indicate that the application is running poorly or might be compromised
by a SQL injection type of attack.
Some other advanced functionality includes entitlement reports that identify who
has what permissions and roles within the database. Many auditors look to
identify who is a privileged user in the database. The entitlement report allows
you to easily determine this information.
Figure 11-10 on page 405 shows the heterogeneous support (DB2, Informix, MS
SQL Server, MySQL, Netezza, Oracle, PostgreSQL, Sybase, and Teradata) for
entitlement reports within Guardium. These reports can be automatically
distributed for audit review by using the Audit Process facility to validate
appropriate permissions within the database.
Figure 11-12 on page 407 shows an example of the predefined groups for
identifying archive candidates.
For more information, see this website:
http://pic.dhe.ibm.com/infocenter/igsec/v1/topic/com.ibm.guardium91.doc/how_to/topics/how-to_guide_overview.html
Figure 11-13 Predefined data warehouse reports to help identify archive candidates
Figure 11-14 Integration with IBM InfoSphere Optim Data Growth Solution: Guardium can suggest archive candidates, Optim archives and retrieves historical data from the production database, and Optim sends access requests to Guardium (Universal Access to Application Data)
The optim-audit role has a predefined report about who accessed the archive
logs. If the user is granted access to this role, these predefined reports are
available in the user’s portal.
For Optim Archive to send this information to Guardium, you must enable the
auditing within the Optim Archive application by completing the following steps:
1. Select Audit Selection… from the General tab of the product options.
2. In the Audit Facility window, select Hosted by Guardium or Hosted by
Optim/Guardium from the Audit Status drop-down menu.
3. Click Guardium Settings…
4. In the Advanced Setting (Guardium setting) window, provide the IP address and
DNS name of the Guardium appliance to which the audit information is sent and stored.
During this process of taking a subset of your production data, you can apply
masking policies to desensitize the data. This process is sometimes referred to
as static data masking (SDM) because you statically mask the data when you
extract it from your production database server. This is important so that you do
not have actual sensitive data in your test and application development
environments, where the security policies might not be as strict as in your
production environment.
IBM Optim Test Data Management Solution and Guardium can share these
masking policies. If an unauthorized user accesses sensitive data on your
production server, you can perform dynamic data masking (DDM) to redact the
result set and protect this information. Guardium can be used for dynamic data
masking so that this unauthorized access is masked dynamically, similar to
your test and development environment. You can configure this feature by
exporting and importing the policy between Guardium and IBM Optim with
eXtensible Access Control Markup Language (XACML), which is an
industry-standard access control policy language.
Masking policies assume that you understand where your sensitive data is. It can
be difficult to track this information. This is why Guardium and InfoSphere
Discovery can help automate the process of finding your sensitive information.
InfoSphere Discovery can then use this information to build the business object,
which might be required to understand the data model for archiving database
objects.
Figure 11-21 on page 415 shows that InfoSphere Discovery can exchange the
sensitive data location with Guardium so that appropriate security and audit
policies can be applied according to corporate security standards.
Now that we understand where our sensitive data is, it is important to identify
who is using it. In many cases, this can be a challenge, especially in a three-tier
environment.
To answer the question, you need more information because the only piece of
information you have is that Apps (the database user) performed some
transactions to the database. So, you do not know whether it was Joe or Bob.
This is a common scenario in which a single database user shares this
connection to the database. Guardium addresses this problem by using the
following methods:
Custom identification procedures
GuardAppEvents and GuardAppUser
Set client user
WebSphere application user information
CICS® application user
In this example, the stored procedure is called and Guardium extracts the actual
user from position 1 of the stored procedure execution. This is identified as
Application Username Position:1 in Figure 11-23. The result of configuring custom
identification procedures is that you can now uniquely and deterministically
identify who performed the transactions.
In Figure 11-24 on page 418, under the Application User field in the audit report,
“joe” performed some transactions and “bob” performed other transactions to the
database through the pooled application user “apps”. This is determined when
the stored procedure AppEndUser is run and the first parameter, ‘joe’ or ‘bob’,
is extracted.
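As an illustration, the application issues a call similar to the following one at the start of each unit of work. The procedure name AppEndUser is taken from this example, but its exact signature is an assumption for this sketch.

-- Illustrative only: Guardium is configured to read the application user
-- from parameter position 1 of this procedure call.
CALL AppEndUser('joe');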
Figure 11-25 GuardAppEvents can be used to provide extra context in a pooled database user connection
In the report that is shown in Figure 11-25, the following points are important:
These transactions belong to the same database session ID, 1059 in this
case. This is typical of a pooled database user connection.
The DB User Name is “APPS”, which is the only database user that is defined
for this transaction.
Each of the database transactions can now be identified uniquely in the Event
User Name field for “Joe” and “Bob”.
You can add context through the GuardAppEventType,
GuardAppEventStrValue, and GuardAppEventNumValue. Some customers
add client IP, client host name, and other relevant information that is specific
to their application.
The following syntax is for IBM DB2:
SELECT 'GuardAppEvent:START', 'GuardAppEventUsername:Joe',
'GuardAppEventType:yourStringHere1',
'GuardAppEventStrValue:yourStringHere2',
'GuardAppEventNumValue:4321' FROM SYSIBM.SYSDUMMY1
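When the unit of work completes, a corresponding statement can close the event. The following sketch uses the GuardAppEvent:Released keyword, which is an assumption here; verify the exact keyword for your Guardium version.

-- Sketch only: signal the end of the application event for this session.
SELECT 'GuardAppEvent:Released' FROM SYSIBM.SYSDUMMY1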
GuardAppUser is similar, but you cannot add contextual information as you can
with GuardAppEvents. The following syntax is for GuardAppUser DB2:
Select 'GuardAppUser:Joe' From Sysibm.Sysdummy1
Figure 11-26 shows a GuardAppUser DB2 example to help identify unique users
in a pooled connection environment.
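As a sketch only, an equivalent statement for Oracle selects the same keyword from the DUAL dummy table; this assumes that the keyword syntax is identical across database types.

-- Sketch only: identify the end user of a pooled Oracle connection.
Select 'GuardAppUser:Joe' From Dual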
Figure 11-27 also shows that the Application User information is sent by the
application during the new connection to the database. Whether this extra
context is provided depends on how the application is written.
Depending on the SAP version, SAP also sends more information to help identify
the SAP application user, as shown in Figure 11-28 on page 422. In that example,
the SAP application user DDIC performs an SU01 T-Code to add a user. This
information can be audited by Guardium without any application changes.
For more information, see this website:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1208monitordbactivity/index.html
If you are connecting to a DB2 database with WebSphere, you also might want to
provide DB2 Client Info context. In this example, you can add code after you
open the connection to the database server, as shown in Figure 11-29.
Figure 11-29 DB2 Client Info parameters to provide extra application context
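As a related sketch (this is not the WebSphere approach that Figure 11-29 shows), DB2 for Linux, UNIX, and Windows also provides the SYSPROC.WLM_SET_CLIENT_INFO procedure, which sets the same client information fields from SQL. The example assumes DB2 9.7 or later and uses illustrative values.

-- Sketch only: set the client user ID, workstation name, and application
-- name for the current connection so that they appear as client information
-- in the audit data.
CALL SYSPROC.WLM_SET_CLIENT_INFO('joe', 'appserver01', 'OrderEntry', NULL, NULL)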
Figure 11-30 CICS configuration to send more information to identify the application user
In some cases, custom applications in the CICS environment might use Cobol to
interact with DB2. These programs typically use static SQL rather than dynamic
SQL. For these types of applications, you can also use GuardAppEvent with static
SQL and bind variables. The key is that this information is sent to the database.
The bind variables are mapped as follows (in this sequence), as shown in
Figure 11-31 on page 425 and in the sketch that follows this list:
Event User Name: JoeD
Event Value Str: RECONCILE
Event Type: ChangeRequest
Event Value Num: 1281
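The following sketch illustrates what such a static statement with parameter markers might look like; the exact host-variable handling in the Cobol program is an assumption.

-- Sketch only: the application binds the Guardium keywords as host variables
-- in the same sequence as the report fields that are listed above.
SELECT 'GuardAppEvent:START', ?, ?, ?, ? FROM SYSIBM.SYSDUMMY1
-- Bound values, in sequence:
--   'GuardAppEventUsername:JoeD'
--   'GuardAppEventStrValue:RECONCILE'
--   'GuardAppEventType:ChangeRequest'
--   'GuardAppEventNumValue:1281'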
In Figure 11-31, the extra application information is sent to the database, and
Guardium extracts these bind variables into the Application Events fields (Event
Username, Event Value Str, Event Type, Event Value Num). This information is
carried through the session when the next SQL statement is run (Select * from
CreditCard where CardID=? and Name like ?), as shown in Figure 11-32. The
Application Events are carried throughout the same session (session ID 8).
Figure 11-32 GuardAppEvents with static SQL and bind variables Session ID 8
Providing more application information can help identify the unique users that
provided the transaction. You also can have WebSphere propagate its user
identity to CICS and then propagate this information to DB2. For more
information about the WebSphere to CICS identity propagation, see this website:
http://pic.dhe.ibm.com/infocenter/cicsts/v4r1/index.jsp?topic=%2Fcom.ib
m.cics.ts.doc%2Fdfht5%2Ftopics%2Fidprop_intro.html
Real-time alerting is the process of identifying a security policy violation within the
database or Big Data environment that needs immediate security attention.
These violations are defined within the policy on the Guardium system. After the
policies are defined, the Guardium collector can send a real-time alert in the Log
Event Extended Format (LEEF) through a syslog connection to the QRadar
system, as shown in Figure 11-36.
Figure 11-36 Host-based probes (S-TAPs) on the application, database, and Big Data servers send audit records to the Guardium appliance, which forwards real-time alerts to QRadar
A SIEM system provides real-time analysis of security events that are generated
by various devices on your network. Figure 11-37 on page 430 shows some of
these types of devices and the information that is generated through a Guardium
solution.
The challenge with a SIEM is to take millions of security events and reduce them
to actionable incidents. In general, you do not want to forward all of the database
and Big Data activities to the SIEM because doing so generates too much noise.
However, you want to send a subset of information that violates a security policy,
as shown in the following examples:
SQL errors on a production database server
Five failed login attempts to the database within 5 minutes
Unauthorized users who are accessing credit card information that is stored
in the Big Data or database server
Figure 11-38 Real-time security policy violations that are sent from Guardium to QRadar
Figure 11-39 Guardium sends security events in the LEEF format for QRadar to parse correctly
Another aspect of security is the risk that is associated with devices that have
known vulnerabilities. If a device has a known vulnerability, it might be used to
obtain access to that device. After access to that device is obtained, the hacker
can steal the data or hop to another device in the network to reach their goal.
There are industry best practices for managing vulnerabilities. For
more information, see the following Common Vulnerabilities and Exposures
(CVE) website:
http://cve.mitre.org/
These results are important for understanding which security tests failed and
which succeeded. For example, the Security Assessment test that is shown in
Figure 11-40 on page 434 has a risk score of 35%. In the Result Summary
section, there is a high-level summary of which tests passed or failed, depending
on the category of the assessment test.
Also included is a section for recommendations that you can use to prioritize the
remediation of failed tests. After you have this data, you have the option of
exporting this to the QRadar system. In this particular example, we are only
concerned about the failed CVE tests (the tests that include known vulnerabilities
according to industry security best practices) that are sent to QRadar. We export
this in the AXIS format, and QRadar associates these failed tests with the
database asset.
In this example, the database asset has 128 failed CVE tests. This allows us to
identify which CVE tests we want to remediate and to track the asset over time.
Ideally, on the next security assessment run, we have fewer than 128 failed CVE
tests because some of them are remediated.
If you want to send the entire security assessment results to QRadar, you use the
SCAP format. This format includes passed and failed tests where the CVE
information is a subset of the overall Security Assessment tests. All of these tests
can be automated by distributing these tests with the workflow system of
Guardium called the Audit Process.
Guardium has an SNMP MIB that can be incorporated with Netcool®. You
configure the SNMP parameters by clicking Administration console → Alerter,
as shown in Figure 11-41.
Figure 11-41 SNMP parameters that are configured in the administration console
To poll the Guardium device or for Guardium to send an SNMP trap, you must
configure the passwords. The passwords for SNMP are defined within the SNMP
community strings, as shown in Figure 11-41.
There are two complementary methods for a user to receive SNMP information
from a Guardium appliance: Traps and Polling.
Traps
Traps are unsolicited alerts that are generated by an appliance and sent to an
SNMP manager, such as Netcool. In the following subsections, we describe how
to use SNMP to poll the Guardium Appliance.
Polling
In a polling scenario, an SNMP management system or a user queries the
Guardium appliance by using standard SNMP commands. The SNMP
management system can then send alerts that are based on user-defined
thresholds.
The Guardium appliance provides standard metrics (by using the UCD-SNMP-MIB
and HOST-RESOURCES-MIB MIBs) to monitor the health of the machine, and a
set of custom metrics (by using extensions to the UCD-SNMP-MIB) that provide
information that is specific to the Guardium appliance.
Standard metrics
Displaying data that is relevant to any server, these metrics measure key
performance statistics, such as memory usage, disk usage, and CPU usage.
A full list of Guardium SNMP OIDs is available in the Monitoring via SNMP
section of the Guardium Administration Help Book Guide, which is available at
this website:
http://pic.dhe.ibm.com/infocenter/igsec/v1/topic/com.ibm.guardium91.doc
/administer/topics/monitoring_via_snmp.html
Use the following command to retrieve information about one metric by using the
numeric object identifier (OID):
# snmpget -v 1 -c guardiumsnmp supp8.guardium.com .1.3.6.1.4.1.2021.9.1.7.1
UCD-SNMP-MIB::dskAvail.1 = INTEGER: 472296
Removing the .1 index at the end of the OID gives you the status of all of the
available disks (use snmpwalk instead of snmpget to retrieve multiple metrics),
as shown in the following example:
# snmpwalk -v 1 -c guardiumsnmp supp8.guardium.com dskAvail
UCD-SNMP-MIB::dskAvail.1 = INTEGER: 472296
UCD-SNMP-MIB::dskAvail.2 = INTEGER: 60494636
Finally, querying on dsk provides all of the disk information in this subsection of
the UCD-SNMP-MIB, as shown in the following example:
# snmpwalk -v 1 -c guardiumsnmp supp8.guardium.com dsk
UCD-SNMP-MIB::dskIndex.1 = INTEGER: 1
UCD-SNMP-MIB::dskIndex.2 = INTEGER: 2
UCD-SNMP-MIB::dskPath.1 = STRING: /
UCD-SNMP-MIB::dskPath.2 = STRING: /var
UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/sda5…
As you can see, there is much information that is available to monitor the health
of the Guardium appliance.
Figure 11-43 on page 439 shows the Tivoli Access Manager Enterprise Single
Sign-On wallet with the new information for 10.10.9.248, which is the Guardium
appliance.
The next time the user accesses the Guardium appliance through the web
browser, their credentials are automatically presented in the login form, as shown
in Figure 11-44.
The Single Sign On method is an excellent way to store all of the user name and
password information that is required for applications in an enterprise
environment.
Figure 11-45 Tivoli Directory integration with Guardium to help provide database access control
In this process, new users can be added or deleted from Tivoli Directory. These
users are imported into an authorized group on the Guardium system regularly,
as shown in step 2 of Figure 11-45.
Some customers might use Active Directory (AD) for this definition. In this case,
SamAccountName is a common LDAP attribute that is used to import the Active
Directory users from the Authorized Database Users group, as shown in
Figure 11-46 on page 441. Users Joe and Joed were imported because they
were members of the Authorized Database Users group within AD.
The configuration of what to import from the LDAP server is flexible and can
manage different LDAP attributes to satisfy a customer’s unique requirements.
Figure 11-47 Policy definition to allow authorized LDAP Users access to the database
This example shows how to use predefined information that is in the LDAP server
to control access to the database. You can also use information that is defined in
the LDAP server to enhance your audit reports. Some common examples are to
add employee department, manager name, location, and other attributes into
these audit reports.
Figure 11-48 shows how Guardium uses the Security Content Automation
Protocol (SCAP) to provide the vulnerability assessment information of the
database servers and consolidate that information within Tivoli Endpoint
Manager. This is a powerful mechanism for consolidating and understanding the
risk of your database servers within the scope of your endpoint devices.
Figure 11-48 IBM Endpoint Manager integration with Guardium through SCAP
Tivoli Storage Manager helps centralize and automate data protection to help
reduce the risks that are associated with data loss. This highly scalable software
helps you manage more data with less infrastructure and simplified
administration. You can save money, improve service levels, and comply with
data retention regulations. Guardium can use Tivoli Storage Manager to archive
audit data within the same framework as the rest of your backups by using Tivoli
Storage Manager, as shown in Figure 11-49.
Figure 11-49 Archiving Guardium audit data with Tivoli Storage Manager
The publications that are listed in this section are considered particularly suitable
for a more detailed discussion of the topics that are covered in this book.
Online resources
The following websites also are relevant as further information sources:
IBM InfoSphere Guardium V9.1 Information Center:
http://pic.dhe.ibm.com/infocenter/igsec/v1/index.jsp
Processor Value Unit [PVU] licensing for Distributed Software:
http://www-01.ibm.com/software/lotus/passportadvantage/pvu_licensing
_for_customers.html