DB2 Version 9
for Linux, UNIX, and Windows
Administration Guide: Implementation
SC10-4221-00
Before using this information and the product it supports, be sure to read the general information under Notices.
Edition Notice
This document contains proprietary information of IBM. It is provided under a license agreement and is protected
by copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU
(426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1993, 2006. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

About this book  ix
Who should use this book  x
How this book is structured  x

Part 1. Implementing Your Design  1

Chapter 1. Before creating a database  3
Working with instances  4
   Starting a DB2 instance (Linux, UNIX)  4
   Starting a DB2 instance (Windows)  4
   Attaching to and detaching from a non-default instance of the database manager  5
   Grouping objects by schema  6
   Enabling inter-partition query parallelism  7
   Enabling intra-partition parallelism for queries  7
   Enabling intra-partition parallelism for utilities  8
   Enabling large page support in a 64-bit environment (AIX)  12
   Stopping an instance (Linux, UNIX)  13
   Stopping an instance (Windows)  14
Working with multiple DB2 copies  15
   Multiple DB2 copies roadmap  15
   Multiple instances of the database manager  16
   Multiple DB2 copies on the same computer (Windows)  17
   Changing the Default DB2 copy after installation (Windows)  21
   Client connectivity using multiple DB2 copies (Windows)  22
   Setting the DAS when running multiple DB2 copies (Windows)  24
   Setting the default instance when using multiple DB2 copies (Windows)  25
   Managing DB2 copies (Windows)  26
   Running multiple instances concurrently (Windows)  27
   Removing DB2 copies (Linux, UNIX, and Windows)  28
Working with partitioned databases  29
   Management of database server capacity  29
   Multiple logical partitions  30
   Fast communications manager (FCM) communications  32
Preparing to create a database  33
   Designing logical and physical database characteristics  34
   Instance creation  34
   Instance management  36
   Setting the DB2 environment automatically on UNIX  43
   Setting the DB2 environment manually on UNIX  44
   Automatic client rerouting  44
   Automatic storage  54
   License management  64
   Registry and environment variables  65
   Configuration files and parameters  80
   Database history file  87

Chapter 2. Creating and using the DB2 Administration Server (DAS)  91
DB2 Administration Server  91
Creating a DB2 administration server (DAS)  93
Starting and stopping the DB2 administration server (DAS)  94
Listing the DB2 administration server (DAS)  95
Configuring the DB2 administration server (DAS)  95
Tools catalog database and DB2 administration server (DAS) scheduler setup and configuration  96
Notification and contact list setup and configuration  100
DB2 administration server (DAS) Java virtual machine setup  101
Security considerations for the DB2 administration server (DAS) on Windows  102
Updating the DB2 administration server (DAS) on UNIX  102
Removing the DB2 administration server (DAS)  103
Setting up DB2 administration server (DAS) with Enterprise Server Edition (ESE) systems  104
DB2 administration server (DAS) configuration on Enterprise Server Edition (ESE) systems  106
Discovery of administration servers, instances, and databases  107
Discovering and hiding server instances and databases  108
Setting discovery parameters  109
Setting up the DB2 administration server (DAS) to use the Configuration Assistant and the Control Center  110
Updating a DB2 administration server (DAS) configuration for discovery  110
DB2 administration server (DAS) first failure data capture (FFDC)  111

Chapter 3. Creating a database  113
Creating a database  113
Initial database partition groups  115
Creating and managing database partitions and database partition groups  115
   Creating database partition groups  115
   Managing database partitions  116
   Adding and dropping database partitions  119
   Redistributing data in a database partition group  128
   Error recovery when adding database partitions  128
   Issuing commands to multiple database partitions  130
   Using Windows database partition servers  143
Creating table spaces  147
   Table spaces  148

Setting a command statement termination character  434
Setting up access to DB2 contextual help and documentation  435
Setting startup and default options for the DB2 administration tools  436
Changing the fonts for menus and text  437
Setting DB2 UDB OS/390 and z/OS utility execution options  437
DB2 for z/OS health monitor  441
Enabling or disabling notification using the Health Center Status Beacon  448
Setting the default scheduling scheme  449
Setting Command Editor options  449
Setting IMS options  450
Visual Explain  451
   Visual Explain overview  451
   Visual Explain concepts  452
   Dynamically explaining an SQL or an XQuery statement  464
   Creating an access plan using the Command Editor  465
   Explain tables  466
   Guidelines for creating indexes  467
   Out-of-date access plans  467
   Retrieving the access plan when using LONGDATACOMPAT  468
   Using RUNSTATS  468
   Viewing SQL or XQuery statement details and statistics  469
   Viewing a graphical representation of an access plan  473
   Viewing explainable statements for a package  474
   Viewing the history of previously explained query statements  476
   Visual Explain support for earlier and later releases  478

Part 2. Database Security  479

Chapter 8. Controlling database access  481
Security issues when installing the DB2 database manager  481
Acquiring Windows users' group information using an access token  483
Details on security based on operating system  485
   Windows platform security considerations for users  485
   Windows local system account support  485
   Extended Windows security using DB2ADMNS and DB2USERS groups  486
   UNIX platform security considerations for users  489
   Location of the instance directory  489
   Security plug-ins  490
Authentication methods for your server  490
Authentication considerations for remote clients  495
Partitioned database authentication considerations  496
Kerberos authentication details  496
Authorization, privileges, and object ownership  501
Details on privileges, authorities, and authorization  506
   System administration authority (SYSADM)  506
   System control authority (SYSCTRL)  507
   System maintenance authority (SYSMAINT)  508
   Security administration authority (SECADM)  508
   Database administration authority (DBADM)  509
   System monitor authority (SYSMON)  510
   LOAD authority  511
   Database authorities  511
   Authorization ID privileges  513
   Implicit schema authority (IMPLICIT_SCHEMA) considerations  513
   Schema privileges  514
   Table space privileges  515
   Table and view privileges  515
   Package privileges  517
   Index privileges  518
   Sequence privileges  518
   Routine privileges  518
Controlling access to database objects  519
   Details on controlling access to database objects  519
   Granting privileges  519
   Revoking privileges  521
   Managing implicit authorizations by creating and dropping objects  522
   Establishing ownership of a package  523
   Indirect privileges through a package  523
   Indirect privileges through a package containing nicknames  524
   Controlling access to data with views  525
   Monitoring access to data using the audit facility  527
   Data encryption  527
   Granting database authorities to new groups  529
   Granting database authorities to new users  529
   Granting privileges to new groups  530
   Granting privileges to new users  534
Label-based access control (LBAC)  538
   Label-based access control (LBAC) overview  538
   LBAC security policies  540
   LBAC security label components  541
   LBAC security labels  547
   Format for security label values  549
   How LBAC security labels are compared  550
   LBAC rule sets  551
   LBAC rule exemptions  556
   Built-in functions for dealing with LBAC security labels  557
   Protection of data using LBAC  558
   Reading of LBAC protected data  560
   Inserting of LBAC protected data  563
   Updating of LBAC protected data  565
   Deleting or dropping of LBAC protected data  569
   Removal of LBAC protection from data  572
Lightweight directory access protocol (LDAP) directory services  573
   Lightweight Directory Access Protocol (LDAP) overview  573
   Supported LDAP client and server configurations  575
   Support for Active Directory  575

Support for global groups (on Windows)  677
Using a backup domain controller with DB2 database systems  677
User authentication with DB2 for Windows  678
   User name and group name restrictions (Windows)  678
   Groups and user authentication on Windows  679
   Trust relationships between domains on Windows  679
   DB2 database system and Windows security service  680
   Installing DB2 on a backup domain controller  680
   Authentication with groups and domain security (Windows)  681
   Authentication using an ordered domain list  682
   Domain security support (Windows)  683

Appendix D. Using the Windows Performance Monitor  685
Windows performance monitor introduction  685
Registering DB2 with the Windows performance monitor  685
Enabling remote access to DB2 performance information  686
Displaying DB2 database and DB2 Connect performance values  687
Windows performance objects  687
Accessing remote DB2 database performance information  688
Resetting DB2 performance values  688

Appendix E. DB2 Database technical information  691
Overview of the DB2 technical information  691
   Documentation feedback  691
DB2 technical library in hardcopy or PDF format  692
Ordering printed DB2 books  694
Displaying SQL state help from the command line processor  695
Accessing different versions of the DB2 Information Center  696
Displaying topics in your preferred language in the DB2 Information Center  696
Updating the DB2 Information Center installed on your computer or intranet server  697
DB2 tutorials  699
DB2 troubleshooting information  699
Terms and Conditions  700

Appendix F. Notices  701
Trademarks  703

Index  705

Contacting IBM  719
Many of the tasks described in this book can be performed using different
interfaces:
v The command line processor, which allows you to access and manipulate
databases from a command-line interface. From this interface, you can also
execute SQL and XQuery statements and DB2 utility functions. Most examples
in this book illustrate the use of this interface. For more information about using
the command line processor, see the Command Reference.
v The application programming interface, which allows you to execute DB2
utility functions within an application program. For more information about
using the application programming interface, see the Administrative API
Reference.
v The Control Center, which allows you to use a graphical user interface to
manage and administer your data and database components. You can invoke the
Control Center using the db2cc command on a Linux® or Windows® command
line, or using the Start menu on Windows platforms. The Control Center
presents your database components as a hierarchy of objects in an object tree,
which includes your systems, instances, databases, tables, views, triggers, and
indexes. From the tree you can perform actions on your database objects, such as
creating new tables, reorganizing data, configuring and tuning databases, and
backing up and restoring databases, database partitions, and table spaces. In
many cases, wizards and launchpads are available to help you perform these
tasks more quickly and easily.
The Control Center is available in three views:
– Basic. This view provides you with the core DB2 functions. From this view
you can work with all the databases to which you have been granted access,
including their related objects such as tables and stored procedures. It
provides you with the essentials for working with your data.
– Advanced. This view provides you with all of the objects and actions available
in the Control Center. Use this view if you are working in an enterprise
environment and you want to connect to DB2 UDB Version 8 for z/OS or
DB2 Version 9 for z/OS (DB2 for z/OS®) or IMS™.
– Custom. This view provides you with the ability to tailor the Control Center
to your needs. You select the objects and actions that you want to appear in
your view.
For help on using the Control Center, select Getting started from the Help
pull-down on the Control Center window.
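For example, the following command line processor session connects to a database, issues a query, and ends the connection (a minimal sketch; the database name SAMPLE is an assumption):
   db2 connect to sample
   db2 "SELECT COUNT(*) FROM SYSCAT.TABLES"
   db2 connect reset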
Database Security
v Chapter 8, “Controlling database access,” describes how you can control access
to your database’s resources.
v Chapter 9, “Auditing DB2 database activities,” describes how you can detect and
monitor unwanted or unanticipated access to data.
Appendixes
v Appendix A, “Conforming to the naming rules,” presents the rules to follow
when naming databases and objects.
v Appendix B, “Using Windows Management Instrumentation (WMI) support,”
provides information about how DB2 can be managed using Windows
Management Instrumentation (WMI).
v Appendix C, “Using Windows security,” describes how DB2 works with
Windows security.
v Appendix D, “Using the Windows Performance Monitor,” describes how to use
the Windows Performance Monitor to collect DB2 performance data.
In this and other chapters, the Control Center method for completing tasks is
highlighted by placing it within a box. This is followed immediately by a
comparable method using the command line, and if applicable, using an API. In
some cases, there may be tasks showing only one method. When working with the
Control Center, recall that you can use the help to obtain more detail than the
overview information found in this manual.
For information on SQL and XQuery statements, refer to the SQL Reference manual.
For information on command line processor commands, refer to the Command
Reference manual. For information on APIs, refer to the Administrative API Reference
manual. For information on the Control Center and other administration tools,
refer to Chapter 7.
This chapter focuses on the information you should know before you create a
database with all of its objects. There are several prerequisite concepts and topics
as well as several tasks you must perform before creating a database.
The chapter following this one contains brief discussions of the various objects that
may be part of the implementation of your database design. Chapter 6 presents
topics you must consider before you alter a database and then explains how to
alter or drop database objects.
For those areas where DB2 Database interacts with the operating system, some of
the topics in this and the following chapters may present operating system-specific
differences. You may be able to take advantage of native operating system
capabilities or differences beyond those offered by DB2 Database. Refer to the
Quick Beginnings manual and operating system documentation for precise
differences.
Prerequisites:
Before you can start an instance from the command line, you must run the
startup script for that instance:
   . INSTHOME/sqllib/db2profile (for Bourne or Korn shell)
   source INSTHOME/sqllib/db2cshrc (for C shell)
where INSTHOME is the home directory of the instance you want to use.
Procedure:
1. Expand the object tree until you see the Instances folder.
2. Right-click the instance that you want to start, and select start from the pop-up menu.
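To start the instance using the command line (a minimal sketch), enter:
   db2start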
Note: When you run commands to start or stop an instance’s database manager,
the DB2 database manager applies the command to the current instance. For
more information, see Setting the current instance environment variables.
Related tasks:
v “Setting the current instance environment variables” on page 67
v “Starting a DB2 instance (Windows)” on page 4
v “Stopping an instance (Linux, UNIX)” on page 13
Prerequisites:
Procedure:
1. Expand the object tree until you see the Instances folder.
2. Right-click the instance that you want to start, and select start from the pop-up menu.
Note: When you run commands to start or stop an instance’s database manager,
the DB2 database manager applies the command to the current instance. For
more information, see Setting the current instance environment variables.
The db2start command will launch the DB2 database instance as a Windows
service. The DB2 database instance on Windows can still be run as a process by
specifying the /D switch when invoking db2start. The DB2 database instance can
also be started as a service using the Control Panel or the NET START command.
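For example (a sketch; the actual service name depends on your DB2 copy and instance, so check the Services control panel for it):
   db2start /D
runs the instance as a process rather than as a service, while
   NET START <DB2 service name>
starts the instance as a service.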
Related tasks:
v “Setting the current instance environment variables” on page 67
v “Starting a DB2 instance (Linux, UNIX)” on page 4
v “Stopping an instance (Windows)” on page 14
v “Stopping an instance (Linux, UNIX)” on page 13
Prerequisites:
Procedure:
To attach to another instance of the database manager using the Control Center:
1. Expand the object tree until you see the Instances folder.
2. Click the instance to which you want to attach.
3. Right-click the selected instance name, and select Attach from the pop-up menu.
4. In the Attach-DB2 window, type your user ID and password, and click OK.
For example, to attach to an instance called testdb2 that was previously cataloged
in the node directory:
db2 attach to testdb2
After performing maintenance activities for the testdb2 instance, you can then
DETACH from that instance by running the following command:
db2 detach
To detach from an instance from a client application, call the sqledtin API.
Related reference:
v “ATTACH command” in Command Reference
v “DETACH command” in Command Reference
Explicit use of the schema occurs when you use the high-order part of a two-part
object name when referring to that object in a statement. For example, USER A
issues a CREATE TABLE statement in schema C as follows:
CREATE TABLE C.X (COL1 INT)
Implicit use of the schema occurs when you do not use the high-order part of a
two-part object name. When this happens, the CURRENT SCHEMA special register
is used to identify the schema name used to complete the high-order part of the
object name. The initial value of CURRENT SCHEMA is the authorization ID of
the current session user. If you want to change this during the current session, you
can use the SET SCHEMA statement to set the special register to another schema
name.
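For example (a sketch; the schema name PAYROLL is an assumption):
   SET SCHEMA PAYROLL
   CREATE TABLE X (COL1 INT)
Because the CREATE TABLE statement is unqualified, the table is created as PAYROLL.X.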
Some objects are created within certain schemas and stored in the system catalog
tables when the database is created.
In dynamic SQL and XQuery statements, a schema qualified object name implicitly
uses the CURRENT SCHEMA special register value as the qualifier for unqualified
object name references.
Before creating your own objects, you need to consider whether you want to create
them in your own schema or in a different schema that logically groups the
objects. If you are creating objects that will be shared, using a different schema
name can be very beneficial.
Related concepts:
v “System catalog tables” on page 175
Related tasks:
v “Creating a schema” on page 168
Related reference:
v “CURRENT SCHEMA special register” in SQL Reference, Volume 1
v “SET SCHEMA statement” in SQL Reference, Volume 2
Related concepts:
v “Partitioned database environments” in Administration Guide: Planning
v “Database partition group design” in Administration Guide: Planning
v “Database partition and processor environments” in Administration Guide:
Planning
v “Adding database partitions in a partitioned database environment” on page 123
Related tasks:
v “Redistributing data across database partitions” in Performance Guide
v “Enabling database partitioning in a database” on page 9
v “Enabling intra-partition parallelism for queries” on page 7
You could also use the GET DATABASE CONFIGURATION and the GET
DATABASE MANAGER CONFIGURATION commands to find out the values of
individual entries in a specific database, or in the database manager configuration
file. To modify individual entries for a specific database or in the database
manager configuration file, use the UPDATE DATABASE CONFIGURATION and
the UPDATE DATABASE MANAGER CONFIGURATION commands
respectively.
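For example (a sketch; the database name sales is an assumption):
   db2 get database manager configuration
   db2 update database manager configuration using INTRA_PARALLEL YES
   db2 get database configuration for sales
   db2 update database configuration for sales using DFT_DEGREE ANY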
Configuration parameters that affect intra-partition parallelism include the
max_querydegree and intra_parallel database manager parameters, and the dft_degree
database parameter.
In order for intra-partition query parallelism to occur, you must modify one or
more database configuration parameters, database manager configuration
parameters, precompile or bind options, or a special register.
intra_parallel
Database manager configuration parameter that specifies whether the
database manager can use intra-partition parallelism. The default is not to
use intra-partition parallelism.
max_querydegree
Database manager configuration parameter that specifies the maximum
degree of intra-partition parallelism that is used for any SQL statement
running on this instance. An SQL statement will not use more than the
number given by this parameter when running parallel operations within a
database partition. The intra_parallel configuration parameter must also be
set to “YES” before the value of max_querydegree is used. The default value
for this configuration parameter is -1, which means that the system uses
the degree of parallelism determined by the optimizer; otherwise, the
user-specified value is used.
dft_degree
Database configuration parameter that provides the default for the
DEGREE bind option and the CURRENT DEGREE special register. The
default value is 1. A value of ANY means the system uses the degree of
parallelism determined by the optimizer.
DEGREE
Precompile or bind option for static SQL.
CURRENT DEGREE
Special register for dynamic SQL.
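For example, the following sequence enables intra-partition parallelism for the instance and sets a degree for dynamic SQL (a sketch; the degree values and the database name sales are assumptions):
   db2 update dbm cfg using INTRA_PARALLEL YES MAX_QUERYDEGREE 4
   db2stop
   db2start
   db2 connect to sales
   db2 "SET CURRENT DEGREE = '4'"
The instance is restarted because a change to intra_parallel does not take effect until the instance is stopped and started again.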
Related concepts:
v “Parallel processing for applications” in Performance Guide
v “Parallel processing information” in Performance Guide
Related tasks:
v “Configuring DB2 with configuration parameters” in Performance Guide
Related reference:
v “dft_degree - Default degree configuration parameter” in Performance Guide
v “intra_parallel - Enable intra-partition parallelism configuration parameter” in
Performance Guide
v “max_querydegree - Maximum query degree of parallelism configuration
parameter” in Performance Guide
v “BIND command” in Command Reference
v “PRECOMPILE command” in Command Reference
v “CURRENT DEGREE special register” in SQL Reference, Volume 1
Before creating a multi-partition database, you must select which database partition
will be the catalog partition for the database. You can then create the database
directly from that database partition, or from a remote client that is attached to
that database partition. The database partition to which you attach and execute the
CREATE DATABASE command becomes the catalog partition for that particular
database.
The catalog partition is the database partition on which all system catalog tables
are stored. All access to system tables must go through this database partition. All
federated database objects (for example, wrappers, servers, and nicknames) are
stored in the system catalog tables at this database partition.
If possible, you should create each database in a separate instance. If this is not
possible (that is, you must create more than one database per instance), you should
spread the catalog partitions among the available database partitions. Doing this
reduces contention for catalog information at a single database partition.
Note: You should regularly do a backup of the catalog partition and avoid putting
user data on it (whenever possible), because other data increases the time
required for the backup.
When you create a database, it is automatically created across all the database
partitions defined in the db2nodes.cfg file.
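For example, to make database partition 0 the catalog partition for a new database (a sketch; the database name mydb is an assumption; the DB2NODE environment variable selects the database partition used by subsequent commands):
   export DB2NODE=0
   db2 terminate
   db2 create database mydb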
When the first database in the system is created, a system database directory is
formed. It is appended with information about any other databases that you create.
When working on UNIX®, the system database directory is sqldbdir and is located
in the sqllib directory under your home directory, or under the directory where
DB2 database was installed. When working on UNIX, this directory must reside on
a shared file system (for example, NFS on UNIX platforms) because there is only
one system database directory for all the database partitions that make up the
partitioned database environment. When working on Windows, the system
database directory is located in the instance directory.
Related tasks:
v “Configuring DB2 with configuration parameters” in Performance Guide
Related reference:
v “CREATE DATABASE command” in Command Reference
v “sqlecrea API - Create database” in Administrative API Reference
Related concepts:
v “Load overview” in Data Movement Utilities Guide and Reference
v “Load in a partitioned database environment - overview” in Data Movement
Utilities Guide and Reference
Related reference:
v “BACKUP DATABASE command” in Command Reference
Related reference:
v “RESTORE DATABASE command” in Command Reference
Prerequisites:
You are working in an AIX 5.x or later 64-bit environment. You must have root
authority to work with the AIX operating system commands.
Procedure:
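A minimal sketch of a typical configuration sequence, run as root, assuming a 16 MB large page size and 256 large page regions (both values are assumptions that must be sized to your physical memory):
   vmo -r -o lgpg_size=16777216 -o lgpg_regions=256
   bosboot -ad /dev/ipldevice
Reboot the computer, then enable the DB2 registry variable and restart the instance:
   db2set DB2_LARGE_PAGE_MEM=DB
   db2stop
   db2start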
Related concepts:
v “Database managed space” in Administration Guide: Planning
v “System managed space” in Administration Guide: Planning
v “Table space design” in Administration Guide: Planning
Prerequisites:
Restrictions:
The db2stop command can only be run at the server. No database connections are
allowed when running this command; however, if there are any instance
attachments, they are forced off before the instance is stopped.
Note: If command line processor sessions are attached to an instance, you must
run the terminate command to end each session before running the db2stop
command. The db2stop command stops the instance defined by the
DB2INSTANCE environment variable.
Procedure:
1. Expand the object tree until you find the Instances folder.
2. Click each instance you want to stop.
3. Right-click any of the selected instances, and select stop from the pop-up menu.
4. On the Confirm stop window, click OK.
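To stop the instance using the command line (a minimal sketch; end any attached CLP sessions first):
   db2 terminate
   db2stop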
You can use the db2stop command to stop, or drop, individual database partitions
within a partitioned database environment. When working in a partitioned
database environment and you are attempting to drop a logical partition using
db2stop drop nodenum 0
(where 0 is the number of the database partition being dropped), you must ensure
that no users are attempting to access the database. If they are, you will receive
error message SQL6030N.
Note: When you run commands to start or stop an instance’s database manager,
the DB2 database manager applies the command to the current instance. For
more information, see Setting the current instance environment variables.
Related tasks:
v “Setting the current instance environment variables” on page 67
Related reference:
v “db2stop - Stop DB2 command” in Command Reference
v “TERMINATE command” in Command Reference
Prerequisites:
Restrictions:
The db2stop command can only be run at the server. No database connections are
allowed when running this command; however, if there are any instance
attachments, they are forced off before the DB2 database service is stopped.
Procedure:
1. Expand the object tree until you find the Instances folder.
2. Click each instance you want to stop.
3. Right-click any of the selected instances, and select Stop from the pop-up menu.
4. On the Confirm Stop window, click OK.
Recall that when you are using the DB2 database manager in a partitioned
database environment, each database partition server is started as a service. Each
service must be stopped.
Note: When you run commands to start or stop an instance’s database manager,
the DB2 database manager applies the command to the current instance. For
more information, see Setting the current instance environment variables.
Related tasks:
v “Setting the current instance environment variables” on page 67
Related reference:
v “db2stop - Stop DB2 command” in Command Reference
You might want to have multiple instances to create the following environments:
v Separate your development environment from your production environment.
v Separately tune each environment for the specific applications it will service.
v Protect sensitive information from administrators. For example, you might want
to have your payroll database protected on its own instance so that owners of
other instances will not be able to see payroll data.
DB2 database program files are physically stored in one location on a particular
computer. Each instance that is created points back to this location so that the
program files are not duplicated for each instance created. Several related
databases can be located within a single instance.
Instances are cataloged as either local or remote in the node directory. Your default
instance is defined by the DB2INSTANCE environment variable. You can ATTACH
to other instances to perform maintenance and utility tasks that can only be done
at an instance level, such as creating a database, forcing off applications,
monitoring a database, or updating the database manager configuration. When you
attempt to attach to an instance that is not in your default instance, the node
directory is used to determine how to communicate with that instance.
Related concepts:
v “Multiple instances on a Linux or UNIX operating system” on page 36
v “Multiple instances on a Windows operating system” on page 37
Related tasks:
v “Creating additional instances” on page 38
Related reference:
v “ATTACH command” in Command Reference
v “Multiple DB2 copies roadmap” on page 15
A DB2 copy refers to the group of DB2 products that are installed at the same
location; a copy can contain one or more different DB2 products.
Differences when multiple DB2 copies are installed on the same computer:
v DB2 Version 8 can coexist with DB2 Version 9, with restrictions described below.
Note: You can have only one copy of the DB2 Information Center installed on
the same system at the same release level. Specifically, you can have a
Version 8 Information Center and a Version 9 Information Center, but you
cannot have one Information Center at Version 9 Fix Pack 1 and another at
Version 9 Fix Pack 2 on the same machine. You can, however, configure the
DB2 database server to access these Information Centers remotely.
v Only the IBM DB2 .NET Data Provider from the Default copy is registered in the
Global Assembly Cache. If Version 8 is installed with Version 9, the IBM DB2
.NET 2.0 Provider from Version 9 is also registered in the Global Assembly
Cache. Version 8 does not have a 2.0 .NET provider.
v Each DB2 copy must have unique instance names. For a silent install with
NO_CONFIG=YES, the default instance will not be created. However, when you
create the instance after the installation, it must be unique. The name of the
default instance will be the <DB2 copy name>, if it is fewer than 8 characters. If it
is more than 8 characters, or if an instance of the same name already exists, a
unique name for the instance is generated to ensure uniqueness. This is done by
replacing any characters that are not valid for the instance name with
underscores and generating the last 2 characters. For performance reasons, the
DB2 Control Center should be used from only one DB2 copy at a time on
a machine.
Restrictions:
For Microsoft COM+ applications, it is recommended that you use and distribute
the IBM DB2 Driver for ODBC and CLI with your application instead of the DB2
Runtime Client, because only one DB2 Runtime Client can be used for COM+
applications at a time. The IBM DB2 Driver for ODBC and CLI does not have this
restriction. Microsoft COM+ applications accessing DB2 data sources are only
supported with the default DB2 copy. Concurrent support of COM+ applications
accessing different DB2 copies is not supported. If you have DB2 UDB Version 8
installed, you can only use DB2 UDB Version 8 to run these applications. If you
have DB2 Version 9 or higher installed, you can change the default DB2 copy using
the Default DB2 Copy Selection wizard, but you cannot use different DB2 copies
concurrently.
Version 8 coexistence
DB2 Version 8 and DB2 Version 9 can coexist with the restriction that DB2
Version 8 is set as the Default DB2 copy. This cannot be changed unless
you uninstall Version 8.
On the server, there can be only one DAS version and it administers
instances as follows:
v If the DAS is on Version 9, then it can administer Version 8 and Version
9 instances.
v If the DAS is on Version 8, then it can administer only Version 8
instances. You can migrate your Version 8 DAS, or drop it and create a
new Version 9 DAS to administer the Version 8 and Version 9 instances.
This is required only if you want to use the Control Center to administer
the instances.
Version 8 and Version 9 coexistence and the DB2 .NET Data Provider
In DB2 Version 9, the DB2 .NET Data Provider has System.Transactions
support; however, this support is only available for the default DB2 copy.
There can be only one version of the plug-ins registered on the same computer at
the same time. The version of the plug-ins that is active will be the version that is
shipped with the default DB2 copy.
Licensing:
NT Services:
You can use the db2SelectDB2Copy API to select the DB2 copy that you want
your application to use. This API does not require any DLLs. It is statically linked
into your application. You can delay the loading of DB2 libraries and call this API
first before calling any other DB2 APIs. Note that the function cannot be called
more than once for any given process; that is, you cannot switch a process from
one DB2 copy to another.
Each physical partition must use the same DB2 copy name on all computers.
Related concepts:
v “DB2 .NET Data Provider” in Developing ADO.NET and OLE DB Applications
v “What's new for V9.1: Client and connectivity enhancements summary” in
What’s New
v “Introduction to Windows Management Instrumentation (WMI)” on page 671
Related tasks:
v “Creating a DB2 administration server (DAS)” on page 93
v “Changing the Default DB2 copy after installation (Windows)” on page 21
v “Configuring the DB2 administration server (DAS)” on page 95
v “Setting the DAS when running multiple DB2 copies (Windows)” on page 24
v “Migrating the DB2 Administration Server (DAS)” in Migration Guide
Related reference:
v “dasupdt - Update DAS command” in Command Reference
Prerequisites:
Multiple DB2 copies (Version 9 or later) are installed on the same computer.
Restrictions:
Version 8 and Version 9 DB2 copies can coexist on the same machine, however
Version 8 must be the default copy. You cannot change the Version 8 default copy,
nor can you run the Default Copy Switcher command, db2swtch, unless you
uninstall Version 8. If you run the db2swtch command when Version 8 exists on
the system, you will get a message indicating that you cannot change the default
DB2 copy because Version 8 is found on the system.
However, you can work with the Version 9 copy by either running the
db2envar.bat command or by opening the command window from the Start menu
for the copy that you want to work with.
Procedure:
To change the Default DB2 copy using the Default DB2 Selection wizard:
1. Open the Default DB2 Selection wizard: From the Start Menu, select Programs–>IBM
DB2–><DB2 copy name>–>Default Copy Switcher. The Default DB2 Selection wizard
opens.
2. On the Default DB2 Copy page, select the copy that you want to make the default so
that it is highlighted, and click Next to make it the default copy.
3. On the summary page, the wizard indicates the result of the operation.
4. Invoke the dasupdt - Update DAS command to move the DB2 Administration Server
(DAS) to the new default copy.
This procedure switches the current Default DB2 copy to the selected DB2 copy and makes
the necessary changes to the registry. To access and use the new Default DB2 copy, after
you have moved the DAS to the new default copy, open a new command window. You can
still access the original Default DB2 copy by using the shortcuts in the Start menu for the
original Default DB2 copy.
This procedure unregisters the current Default DB2 copy and registers the specified
DB2 copy as the default copy. It also makes the necessary changes to the registry,
to the environment variables, to the ODBC and OLE DB drivers, to the WMI
registration, and to various other objects, and moves the DAS to specified Default
DB2 copy. To access and use the new Default DB2 copy, open a new command
window.
The db2swtch command can be run from any DB2 copy, Version 9 or greater. For
more information on this command, see db2swtch - Switch default DB2 copy
command.
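For example, from a command window (a sketch; DB2COPY2 is an assumed copy name):
   db2swtch -l
   db2swtch -d DB2COPY2
   dasupdt
The first command lists the DB2 copies installed on the computer, the second makes DB2COPY2 the default DB2 copy, and dasupdt (run from the new default copy) moves the DAS to it.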
Related concepts:
v “Multiple DB2 copies on the same computer (Windows)” on page 17
Related tasks:
v “Setting the DAS when running multiple DB2 copies (Windows)” on page 24
v “Removing DB2 copies (Linux, UNIX, and Windows)” on page 28
v “Migrating a DB2 server (Windows)” in Migration Guide
Related reference:
v “Multiple DB2 copies roadmap” on page 15
v “dasmigr - Migrate the DB2 administration server command” in Command
Reference
v “dasupdt - Update DAS command” in Command Reference
v “db2envar.bat command” in Command Reference
v “db2swtch - Switch default DB2 copy command” in Command Reference
Note: Only one copy of DB2 can be used within the same process for each of the
following modes of connecting to databases.
Restrictions:
Procedure:
OLE DB
To use a DB2 copy other than the default, in the connection string, specify
the IBMDADB2 provider name for this DB2 copy, which will be of the format:
IBMDADB2.$DB2_COPY_NAME. Some applications might not have the ability to
change the connection strings without recompiling, therefore these
applications will only work with the Default DB2 copy. If an application
uses the default program id, ibmdadb2, or the default clsid, it will always
use the Default DB2 copy.
Note: If you continue to use the IBMDADB2 provider name, then you will
only be able to access data sources from the default DB2 copy.
ODBC
The ODBC driver contains the DB2 copy Name as part of the driver name.
The default ODBC driver, IBM DB2 ODBC DRIVER, is set to the Default
DB2 copy. The name of the driver for each installation is "IBM DB2 ODBC
DRIVER - <DB2 Copy Name>".
Note:
v Only one DB2 copy can be used by the same ODBC application.
v Even when you set up a Data source with the default ODBC
driver, it will be configured to access the DB2 copy that was the
default at the time the Data source was cataloged.
v If you move or migrate instances from one DB2 copy to another,
you will need to reconfigure the associated Data sources.
DB2 .NET Data Provider
The DB2 .NET Data Provider is not accessed by DB2 copy name.
Instead, depending on the version of the provider that the application
requires, the application locates that version and uses it through the
standard .NET methods.
JDBC/SQLJ
JDBC uses the current version of the driver in the classpath. The Type 2
JDBC driver uses the native DLL. By default, the classpath is configured
to point to the default DB2 copy. Running db2envar.bat from the DB2 copy
you want to use will update your PATH and CLASSPATH settings for this
copy.
MMC Snap-in
The MMC Snap-in launches the DB2 Control Center for the Default DB2
copy.
WMI
WMI does not support multiple DB2 copies. You can register only one
copy of WMI at a time. To register WMI, follow this process:
v Unregister the WMI Schema extensions.
v Unregister the COM object.
v Register the new COM object.
v Use MOFCOMP to extend the WMI schema.
WMI is not registered during DB2 installation. You still need to complete
the registration steps listed above. WMI is a selectable feature in DB2 products, in
PE and above. It is not selected by default, nor is it included in the typical install.
CLI applications
CLI applications that dynamically load the DB2 client libraries should use
the LoadLibraryEx API with the LOAD_WITH_ALTERED_SEARCH_PATH
option, instead of the LoadLibrary option. If you do not use LoadLibraryEx
with the LOAD_WITH_ALTERED_SEARCH_PATH option, you will need to
specify db2app.dll in the Path by running db2envar.bat from the bin
directory of the DB2 copy that you want the application to use.
Related concepts:
v “Multiple DB2 copies on the same computer (Windows)” on page 17
Related tasks:
v “Changing the Default DB2 copy after installation (Windows)” on page 21
v “Setting the DAS when running multiple DB2 copies (Windows)” on page 24
Related reference:
v “Multiple DB2 copies roadmap” on page 15
On the server, there can be only one DAS version and it administers instances as
follows:
v If the DAS is on Version 9, then it can administer Version 8 and Version 9
instances.
v If the DAS is on Version 8, then it can administer only Version 8 instances. You
can migrate your Version 8 DAS, or drop it and create a new Version 9 DAS to
administer the Version 8 and Version 9 instances. This is required only if you
want to use the Control Center to administer the instances.
Restrictions:
Only one DAS can be created on a given computer at any given time, regardless of
the number of DB2 copies that are installed on the same computer. This DAS will be
used by all the DB2 copies that are on the same computer. In Version 9 or later, the
DAS can belong to any DB2 copy that is currently installed.
Procedure:
To move the DAS from one DB2 Version 9 copy to another DB2 Version 9 copy, use
the dasupdt - Update DAS command.
You can also use this command when you need to move the DB2 Administration
Server (DAS) to a new Default DB2 copy in the same version.
Related concepts:
v “DB2 administration server (DAS) configuration on Enterprise Server Edition
(ESE) systems” on page 106
v “Multiple DB2 copies on the same computer (Windows)” on page 17
v “Security considerations for the DB2 administration server (DAS) on Windows”
on page 102
Related tasks:
v “Changing the Default DB2 copy after installation (Windows)” on page 21
v “Configuring the DB2 administration server (DAS)” on page 95
v “Creating a DB2 administration server (DAS)” on page 93
v “Listing the DB2 administration server (DAS)” on page 95
v “Removing the DB2 administration server (DAS)” on page 103
v “Setting up DB2 administration server (DAS) with Enterprise Server Edition
(ESE) systems” on page 104
v “Starting and stopping the DB2 administration server (DAS)” on page 94
v “Tools catalog database and DB2 administration server (DAS) scheduler setup
and configuration” on page 96
Related reference:
v “dasmigr - Migrate the DB2 administration server command” in Command
Reference
v “dasupdt - Update DAS command” in Command Reference
v “Multiple DB2 copies roadmap” on page 15
Note: DB2INSTDEF is the default instance variable that is specific to the current
DB2 copy in use (that is, every DB2 copy has its own DB2INSTDEF).
DB2INSTANCE is set to the current instance you are using.
v If DB2INSTANCE is not set for a particular DB2 copy, then the value of
DB2INSTDEF is used for that DB2 copy.
v DB2INSTANCE is only valid for instances under the DB2 copy that you
are using. However, if you switch copies by running the db2envar.bat
command, DB2INSTANCE will initially be updated to the value of
DB2INSTDEF for the DB2 copy that you switched to.
Procedure:
To set the default instance, you can set the DB2INSTDEF profile registry variable
using the db2set command. When you access a different DB2 copy, you do not
have to change the value of DB2INSTANCE.
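For example, to make an instance the default for the current DB2 copy (a sketch; the instance name DB2INST2 is an assumption):
   db2set -g DB2INSTDEF=DB2INST2
The -g option sets the variable at the global level of the profile registry for that DB2 copy.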
Related concepts:
v “Environment variables and the profile registry” on page 65
v “Multiple DB2 copies on the same computer (Windows)” on page 17
Related tasks:
v “Client connectivity using multiple DB2 copies (Windows)” on page 22
v “Setting environment variables on Windows” on page 77
v “Setting the current instance environment variables” on page 67
v “Setting the DAS when running multiple DB2 copies (Windows)” on page 24
Related reference:
v “General registry variables” on page 70
v “Multiple DB2 copies roadmap” on page 15
v “db2set - DB2 profile registry command” in Command Reference
The installation provides the option to migrate DB2 Version 8 (in the same path) or
to install a new DB2 Version 9 Copy without modifying the DB2 Version 8
installation. If you select to migrate, your Version 8 installation will be removed. If
you select to install a new DB2 copy, you can later choose to migrate your
instances using the db2ckmig and db2imigr commands.
You can use the db2iupdt command to move a DB2 instance between different
Version 9 DB2 copies, and the db2imigr command to move a Version 8 instance to
Version 9. See Migrating a DB2 server (Windows) for complete details on how to
migrate to DB2 Version 9.
Note:
v Coexistence of DB2 Version 7 and DB2 Version 9 is not supported.
v Coexistence of a 32-bit DB2 and a 64-bit DB2 on the same Windows x64
computer is not supported.
It is not possible to migrate from a 32-bit x64 DB2 installation at Version 8
directly to a 64-bit installation at Version 9. Instead, you need to migrate to
Version 9 32-bit first, and then use the x64 DB2 installation to move to
64-bit; the 32-bit version will be removed. If you have more than one 32-bit
DB2 copy installed, you will need to move all of your instances to one DB2
copy first.
In summary, on Windows:
v If you have multiple DB2 Version 9 copies, the installation options are to install a
new copy or to work with an existing DB2 copy, which you can upgrade or to
which you can add new features. The migrate option will only show if you also
have a DB2 UDB Version 8 copy in addition to the DB2 Version 9 copies.
v If DB2 UDB Version 8 is installed, the installation options are to migrate the
existing Version 8 copy or to install a new DB2 copy.
v If DB2 Version 7 or earlier is installed, the installation displays a message to
indicate that migration to DB2 Version 9 is not supported. You can only install a
new DB2 copy after uninstalling Version 7. In other words, Version 7 and
Version 9 cannot coexist.
Related concepts:
v “Multiple DB2 copies on the same computer (Windows)” on page 17
Related tasks:
v “Migrating a DB2 server (Linux and UNIX)” in Migration Guide
v “Migrating a DB2 server (Windows)” in Migration Guide
v “Migrating DB2 32-bit servers to 64-bit systems (Windows)” in Migration Guide
v “Running multiple instances concurrently (Windows)” on page 27
Related reference:
v “db2ckmig - Database pre-migration tool command” in Command Reference
v “db2imigr - Migrate instance command” in Command Reference
v “db2iupdt - Update instances command” in Command Reference
v “Multiple DB2 copies roadmap” on page 15
Procedure:
To run multiple instances concurrently in the same DB2 copy, use either of the
following methods:
v Using the Control Center:
1. Expand the object tree until you find the Databases folder.
2. Right-click an instance, and select Start from the pop-up menu.
3. Repeat Step 2 until you have started all the instances that you want to run concurrently.
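v Using the command line (a minimal sketch; the instance names inst1 and inst2 are assumptions): set DB2INSTANCE to each instance in turn and start it:
   set DB2INSTANCE=inst1
   db2start
   set DB2INSTANCE=inst2
   db2start
Each db2start applies to the instance named by DB2INSTANCE at the time it is run.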
To run multiple instances concurrently in different DB2 copies, use either of the
following methods:
v Using the DB2 command window from the Start → Programs → IBM DB2 → <DB2
Copy Name> → Command Line Tools → DB2 Command Window: the command
window is already set up with the correct environment variables for the
particular DB2 copy chosen.
v Using db2envar.bat from a command window:
1. Open a command window.
2. Run the db2envar.bat file using the fully qualified path for the DB2 copy
that you want the application to use:
<DB2 Copy install dir>\bin\db2envar.bat
After you switch to a particular DB2 copy, use the method specified in the section
above, "To run multiple instances concurrently in the same DB2 copy", to start the
instances.
Related concepts:
v “Multiple instances of the database manager” on page 16
Related tasks:
v “Creating additional instances” on page 38
v “Managing DB2 copies (Windows)” on page 26
v “UNIX details when creating instances” on page 39
v “Windows details when creating instances” on page 40
Related reference:
v “Multiple DB2 copies roadmap” on page 15
v “db2envar.bat command” in Command Reference
v “db2start - Start DB2 command” in Command Reference
To uninstall DB2 copies on Linux and UNIX, use the db2_deinstall command from
the DB2 copy that you are using. This command uninstalls installed DB2 products
or features that are in the same install path as the db2_deinstall tool. Use the
db2ls command to see the list of installed DB2 products and features. If one or
more instances are currently associated with a DB2 copy, that DB2 copy cannot be
uninstalled.
To uninstall DB2 copies on Windows operating systems, use one of the following
methods:
v You can uninstall any DB2 copy by using the Windows Add/Remove Control
Panel applet. The Default DB2 copy will have the word (default) appended to
it.
v Run the db2unins command from the installed DB2 copy directory.
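For example, on Linux and UNIX (a sketch; the installation path is an assumption):
   db2ls
   /opt/ibm/db2/V9.1/install/db2_deinstall -a
The db2ls command lists the installed DB2 products and features; db2_deinstall -a removes all DB2 products in the copy from whose path it is run.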
Related concepts:
v “Multiple DB2 copies on the same computer (Windows)” on page 17
v “Multiple DB2 copies on the same computer (Linux and UNIX)” in Installation
and Configuration Supplement
Related reference:
v “Multiple DB2 copies roadmap” on page 15
v “db2_deinstall - Uninstall DB2 products or features command” in Command
Reference
v “db2ls - List installed DB2 products and features command” in Command
Reference
v “db2swtch - Switch default DB2 copy command” in Command Reference
v “db2unins - Uninstall DB2 database product command” in Command Reference
If these simple strategies do not add the capacity you need, consider the following
methods:
v Add processors.
If a single-partition database configuration with a single processor is used to its
maximum capacity, you might either add processors or add database partitions.
The advantage of adding processors is greater processing power. In an SMP
system, processors share memory and storage system resources. All of the
processors are in one system, so there are no additional overhead considerations
such as communication between systems and coordination of tasks between
systems. Utilities in DB2 such as load, backup, and restore can take advantage of
the additional processors. DB2 database supports this environment.
Note: Some operating systems, such as the Solaris operating system, can
dynamically turn processors online and offline.
When you scale your system by changing the environment, you should be aware
of the impact that such a change can have on your database procedures such as
loading data, backing up the database, and restoring the database.
When you add a new database partition, you cannot drop or create a database that
takes advantage of the new database partition until the procedure is complete, and
the new server is successfully integrated into the system.
Related concepts:
v “Adding database partitions in a partitioned database environment” on page 123
Configurations that use multiple logical partitions are useful when the system runs
queries on a computer that has symmetric multiprocessor (SMP) architecture. The
ability to configure multiple logical partitions on a computer is also useful if a
computer fails. If a computer fails (causing the database partition server or servers
on it to fail), you can restart the database partition server (or servers) on another
computer using the DB2START NODENUM command. This ensures that user data
remains available.
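For example, to restart database partition server 2 on another computer (a sketch; the partition number, hostname, and logical port are assumptions):
   db2start dbpartitionnum 2 restart hostname host2 port 1
(NODENUM is accepted as an older synonym for DBPARTITIONNUM.)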
Another benefit is that multiple logical partitions can exploit SMP hardware
configurations. In addition, because database partitions are smaller, you can obtain
better performance when performing such tasks as backing up and restoring
database partitions and table spaces, and creating indexes.
Related tasks:
v “Configuring multiple logical partitions” on page 31
Related reference:
v “db2start - Start DB2 command” in Command Reference
Note: For Windows, you must use db2ncrt to add a database partition if there is
no database in the system; or, DB2START ADDNODE command if there is
one or more databases. Within Windows, the db2nodes.cfg file should never
be manually edited.
v Restart a logical partition on another processor on which other logical partitions
(nodes) are already running. This allows you to override the hostname and port
number specified for the logical partition in db2nodes.cfg.
Use the fully-qualified name for the hostname. The /etc/hosts file also
should use the fully-qualified name. If the fully-qualified name is not used
in the db2nodes.cfg file and in the /etc/hosts file, you might receive error
message SQL30082N RC=3.
You must ensure that you define enough ports in the services file of the
etc directory for FCM communications.
Related concepts:
v “When to use multiple logical partitions” on page 30
Related tasks:
v “Changing node and database configuration files” on page 279
v “Creating a node configuration file” on page 81
Related reference:
v “db2ncrt - Add database partition server to an instance command” in Command
Reference
v “db2start - Start DB2 command” in Command Reference
If the services file of the etc directory is shared, you must ensure that the number
of ports allocated in the file is either greater than or equal to the largest number of
multiple database partitions in the instance. When allocating ports, also ensure that
you account for any processor that can be used as a backup.
If the services file of the etc directory is not shared, the same considerations
apply, with one additional consideration: you must ensure that the entries defined
for the DB2 database instance are the same in all services files of the etc directory
(though other entries that do not apply to your partitioned database environment
do not have to be the same).
If you have multiple database partitions on the same host in an instance, you must
define more than one port for the FCM to use. To do this, include two lines in the
services file of the etc directory to indicate the range of ports you are allocating.
The first line specifies the first port, while the second line indicates the end of the
block of ports. In the following example, five ports are allocated for the instance
sales. This means no processor in the instance has more than five database
partitions:
DB2_sales 9000/tcp
DB2_sales_END 9004/tcp
Note: You must specify END in uppercase only. Also you must ensure that you
include both underscore (_) characters.
Due to the way the FCM infrastructure uses TCP sockets and directs network
traffic, FCM users on AIX 5.x should set the kernel parameter "tcp_nodelayack" to
1.
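On AIX, this parameter can be set with the no command (a typical invocation;
root authority is required):
no -o tcp_nodelayack=1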
Related concepts:
v “Database partition and processor environments” in Administration Guide:
Planning
v “Aggregate registry variables” on page 75
v “The FCM buffer pool and memory requirements” in Performance Guide
Related reference:
v “MPP configuration variables” in Performance Guide
Instance creation
An instance is a logical database manager environment where you catalog
databases and set configuration parameters. Depending on your needs, you can
create more than one instance on the same physical server, providing a unique
database server environment for each instance. You can use multiple instances to
do the following:
v Use one instance for a development environment and another instance for a
production environment.
v Tune an instance for a particular environment.
v Restrict access to sensitive information.
v Control the assignment of SYSADM, SYSCTRL, and SYSMAINT authority for
each instance.
v Optimize the database manager configuration for each instance.
v Limit the impact of an instance failure. In the event of an instance failure, only
one instance is affected. Other instances can continue to function normally.
The instance directory stores all information that pertains to a database instance.
You cannot change the location of the instance directory once it is created. The
directory contains:
v The database manager configuration file
v The system database directory
v The node directory
v The node configuration file (db2nodes.cfg)
v Any other files that contain debugging information, such as the exception or
register dump or the call stack for the DB2 database processes.
Terminology:
Bit-width
The number of bits used to address virtual memory: 32-bit and 64-bit are
the most common. This term might be used to refer to the bit-width of an
instance, application code, or external routine code. A 32-bit application means
the same thing as a 32-bit width application.
32-bit DB2 instance
A DB2 instance that contains all 32-bit binaries including 32-bit shared
libraries and executables.
64-bit DB2 instance
A DB2 instance that contains 64-bit shared libraries and executables, and
also all 32-bit client application libraries (included for both client and
server), and 32-bit external routine support (included only on a server
instance).
As part of your installation procedure, you create an initial instance of DB2 called
“DB2”. On UNIX, the initial instance can be called anything you want within the
naming rules guidelines. The instance name is used to set up the directory
structure.
To support the immediate use of this instance, the following are set during
installation:
v The environment variable DB2INSTANCE is set to “DB2”.
v The DB2 registry variable DB2INSTDEF is set to “DB2”.
On UNIX, this default instance can be given any name that conforms to the
naming rules.
On Windows, the instance name is also used as the name of the DB2 service, so it
must not conflict with the name of any existing service. You must have the correct
authorization to create a service.
These settings establish “DB2” as the default instance. You can change the instance
that is used by default, but first you have to create an additional instance.
Before using DB2, the database environment for each user must be updated so that
it can access an instance and run the DB2 database programs. This applies to all
users (including administrative users).
On UNIX operating systems, sample script files are provided to help you set the
database environment. The files are: db2profile for Bourne or Korn shell, and
db2cshrc for C shell. These scripts are located in the sqllib subdirectory under the
home directory of the instance owner. The instance owner or any user belonging to
the instance’s SYSADM group can customize the script for all users of an instance.
Use sqllib/userprofile and sqllib/usercshrc to customize a script for each user.
Related concepts:
v “Multiple instances on a Linux or UNIX operating system” on page 36
v “Multiple instances on a Windows operating system” on page 37
v “About authorities” in Administration Guide: Planning
v “About configuration parameters” in Administration Guide: Planning
v “About databases” in Administration Guide: Planning
v “About the database manager” in Administration Guide: Planning
Related tasks:
v “Adding instances” on page 41
v “Auto-starting instances” on page 42
v “Creating additional instances” on page 38
v “Listing instances” on page 41
v “Running multiple instances concurrently (Windows)” on page 27
v “Setting the current instance environment variables” on page 67
v “UNIX details when creating instances” on page 39
v “Windows details when creating instances” on page 40
Instance management
This section contains additional concepts and tasks related to instance
management.
The instance owner and the group that is the System Administration (SYSADM)
group are associated with every instance. The instance owner and the SYSADM
group are assigned during the process of creating the instance. One user ID or
username can be used for only one instance. That user ID or username is also
referred to as the instance owner.
Each instance owner must have a unique home directory. All of the files necessary
to run the instance are created in the home directory of the instance owner’s user
ID or username.
The primary group of the instance owner is also important. This primary group
automatically becomes the system administration group for the instance and gains
SYSADM authority over the instance. Other user IDs or usernames that are
members of the primary group of the instance owner also gain this level of
authority. For this reason, you might want to assign the instance owner’s user ID
or username to a primary group that is reserved for the administration of
instances. (Also, ensure that you assign a primary group to the instance owner
user ID or username; otherwise, the system-default primary group is used.)
If you already have a group that you want to make the system administration
group for the instance, you can simply assign this group as the primary group
when you create the instance owner user ID or username. To give other users
administration authority on the instance, add them to the group that is assigned as
the system administration group.
To separate SYSADM authority between instances, ensure that each instance owner
user ID or username uses a different primary group. However, if you choose to
have a common SYSADM authority over multiple instances, you can use the same
primary group for multiple instances.
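For example, on a Linux or UNIX system you might create a dedicated
administration group and an instance owner that uses it as its primary group, and
then create the instance. (This is a sketch only: the group name, user names, and
installation path are illustrative, and the user-management commands vary by
platform.)
groupadd db2iadm1
useradd -g db2iadm1 -m db2inst1
/opt/IBM/db2/V9.1/instance/db2icrt -u db2fenc1 db2inst1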
Related tasks:
v “UNIX details when creating instances” on page 39
You can run multiple DB2 database instances concurrently, in the same DB2 copy
or in different DB2 copies.
v To work with an instance in the same DB2 copy, you need to set the
DB2INSTANCE environment variable to the name of the instance before issuing
commands against that instance, as shown in the example after this list.
To prevent one instance from accessing the database of another instance, the
database files for an instance are created under a directory that has the same
name as the instance name. For example, when creating a database on drive C:
for instance DB2, the database files are created inside a directory called C:\DB2.
Similarly, when creating a database on drive C: for instance TEST, the database
files are created inside a directory called C:\TEST.
v To work with an instance in different DB2 copies, use either of the following
methods:
– Using the DB2 command window from the Start → Programs → IBM DB2 →
<DB2 Copy Name> → Command Line Tools → DB2 Command Window: the
command window is already set up with the correct environment variables
for the particular DB2 copy chosen.
– Using db2envar.bat from a command window:
1. Open a command window.
2. Run the db2envar.bat file using the fully qualified path for the DB2 copy
that you want the application to use:
<DB2 Copy install dir>\bin\db2envar.bat
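For example, to switch to an instance named TEST within the current DB2 copy
and confirm the change, enter the following in a DB2 command window (the
instance name is illustrative):
set DB2INSTANCE=TEST
db2 get instance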
Related concepts:
v “High availability” in Data Recovery and High Availability Guide and Reference
Related tasks:
v “Windows details when creating instances” on page 40
Prerequisites:
If you belong to the Administrative group on Windows, or you have root authority
on UNIX platforms, you can add additional DB2 database instances. The computer
where you add the instance becomes the instance-owning computer (node zero).
Ensure that you add instances on a computer where a DB2 administration server
resides.
Procedure:
When using the db2icrt command to add another DB2 instance, you should
provide the login name of the instance owner and optionally specify the
authentication type of the instance.
You can change the location of the instance directory from DB2PATH using the
DB2INSTPROF environment variable. You require write-access for the instance
directory. If you want the directories created in a path other than DB2PATH, you
have to set DB2INSTPROF before entering the db2icrt command.
For DB2 Enterprise Server Edition, you also need to declare that you are adding a
new instance that is a partitioned database system. In addition, when working
with an ESE instance having more than one database partition, and working with
Fast Communication Manager (FCM), you can have multiple connections between
database partitions by defining more TCP/IP ports when creating the instance. For
example, for Windows operating systems, use the db2icrt command with the -r
<port range> parameter. The port range is shown as follows:
-r:<base_port,end_port>
where the base_port is the first port that can be used by FCM, and the end_port is
the last port in a range of port numbers that can be used by FCM.
Related concepts:
v “Authentication considerations for remote clients” on page 495
v “Authentication methods for your server” on page 490
Related reference:
v “db2icrt - Create instance command” in Command Reference
Examples:
v To add an instance for a DB2 server, you can use the following command:
db2icrt -u db2fenc1 db2inst1
v If you installed the DB2 Connect™ Enterprise Server Edition only, you can use
the instance name as the Fenced ID also:
db2icrt -u db2inst1 db2inst1
v To add an instance for a DB2 client, you can use the following command:
db2icrt db2inst1 -s client -u fencedID
DB2 client instances are created when you want a workstation to connect to other
database servers and you have no need for a local database on that workstation.
Related reference:
v “db2icrt - Create instance command” in Command Reference
The following example could be used on DB2 Enterprise Server Edition for
Windows (the instance name, service account, and FCM port range shown are
illustrative):
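db2icrt db2mpp -s ese -u MYDOMAIN\db2admin,password -r:9010,9013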
Note: If you change the service account; that is, if you no longer use the default
service created when the first instance was created during product
installation, then you must grant the domain/user account name used to
create the instance the following advanced rights:
v Act as part of the operating system
v Create a token object
v Increase quota
v Log on as a service
v Replace a process level token
v Lock pages in memory
The instance requires these user rights to access the shared drive,
authenticate the user account, and run DB2 as a Windows service. The
“Lock pages in memory” right is needed for Address Windowing Extensions
(AWE) support.
Related reference:
v “db2icrt - Create instance command” in Command Reference
Adding instances
Once you have created an additional instance, you will need to add a record of
that instance within the Control Center to be able to work with that instance from
the Control Center.
Procedure:
1. Expand the object tree until you find the Instances folder of the system that you want.
2. Right-click the Instances folder, and select Add from the pop-up menu.
3. Complete the information, and click Apply.
Related concepts:
v “Instance creation” on page 34
Related tasks:
v “Listing instances” on page 41
Listing instances
Use the Control Center or the db2ilist command to get a list of instances, as
follows:
v On Version 8 or earlier, all the instances on the system are listed.
v On Version 9 or later, only the instances in the DB2 copy from which the
db2ilist command is invoked are listed.
1. Expand the object tree until you see the Instances folder.
2. Right-click the Instances folder, and select Add from the pop-up menu.
3. On the Add Instance window, click Refresh.
4. Click the drop-down arrow to see a list of database instances.
To determine which instance applies to the current session (on supported Windows
platforms) use:
set db2instance
Related reference:
v “db2ilist - List instances command” in Command Reference
Auto-starting instances
Procedure:
On Windows operating systems, the DB2 database instance that is created during
installation is set to start automatically by default. An instance created using
db2icrt is set to start manually. To change the start type, you need to go to the
Services panel and
change the property of the DB2 service there.
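On Linux and UNIX operating systems, the db2iauto command serves the same
purpose. For example, to enable auto-starting for an instance named db2inst1 (the
instance name is illustrative):
db2iauto -on db2inst1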
Related concepts:
v “Instance creation” on page 34
Related reference:
v “db2iauto - Auto-start instance command” in Command Reference
Prerequisites:
Procedure:
1. Open the Quiesce window: Expand the object tree until you find the instance that you
want to quiesce. Right-click the instance and select Quiesce from the pop-up menu. The
Quiesce window opens.
2. Specify whether you want to allow a user or a group to access the instance. If you are
allowing a user to attach to the instance, use the User controls to specify a specific user.
If you are allowing a group to attach to the instance, use the Group controls to specify
a specific group.
When you click OK, the Quiesce window closes and the instance is quiesced. Only the
specified user or group will be able to attach to the quiesced instance until either the
instance is unquiesced or stopped.
1. Expand the object tree until you find the instance that you want to unquiesce.
2. Right-click the instance and select Unquiesce from the pop-up menu. The instance will
be unquiesced immediately.
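You can also quiesce and unquiesce an instance from the command line. For
example, assuming an instance named db2inst1 and a user named bob (both
names are illustrative):
db2 quiesce instance db2inst1 user bob immediate
db2 unquiesce instance db2inst1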
Related reference:
v “QUIESCE command” in Command Reference
v “UNQUIESCE command” in Command Reference
Procedure:
Add one of the following statements to the .profile or .login script files:
v For users who share one version of the script, add:
. INSTHOME/sqllib/db2profile (for Bourne or Korn shell)
source INSTHOME/sqllib/db2cshrc (for C shell)
where INSTHOME is the home directory of the instance that you want to use.
v For users who have a customized version of the script in their home directory,
add:
. USERHOME/db2profile (for Bourne or Korn shell)
source USERHOME/db2cshrc (for C shell)
To choose which instance you want to use, enter one of the following statements at
a command prompt. The period (.) and the space are required.
v For users who share one version of the script, enter:
. INSTHOME/sqllib/db2profile (for Bourne or Korn shell)
source INSTHOME/sqllib/db2cshrc (for C shell)
where INSTHOME is the home directory of the instance that you want to use.
v For users who have a customized version of the script in their home directory,
enter:
. USERHOME/db2profile (for Bourne or Korn shell)
source USERHOME/db2cshrc (in C shell)
If you want to work with more than one instance at the same time, run the script
for each instance that you want to use in separate windows. For example, assume
that you have two instances called test and prod, and their home directories are
/u/test and /u/prod.
In window 1:
v In Bourne or Korn shell, enter:
. /u/test/sqllib/db2profile
v In C shell, enter:
source /u/test/sqllib/db2cshrc
In window 2:
v In Bourne or Korn shell, enter:
. /u/prod/sqllib/db2profile
v In C shell, enter:
source /u/prod/sqllib/db2cshrc
Use window 1 to work with the test instance and window 2 to work with the
prod instance.
Note: Enter the which db2 command to ensure that your search path has been set
up correctly. This command returns the absolute path of the CLP executable.
Verify that it is located under the instance’s sqllib directory.
Related tasks:
v “Setting the DB2 environment automatically on UNIX” on page 43
The automatic client reroute feature could be used within the following
configurable environments:
1. Enterprise Server Edition (ESE) with the database partitioning feature (DPF)
2. DataPropagator™ (DPROPR)-style replication
3. High availability cluster multiprocessing (HACMP™)
4. High availability disaster recovery (HADR).
Automatic client reroute works in conjunction with HADR to allow a client
application to continue its work with minimal interruption after a failover of
the database being accessed.
In the case of the DB2 Connect server, because there is no requirement for the
synchronization of local databases, you only need to ensure that both the original
and alternate DB2 Connect servers have the target host or iSeries™ database
catalogued in such a way that it is accessible using an identical database alias.
For example, assume a database is located at the database partition called “N1”
(with a hostname of XXX and a port number YYY). The database administrator
would like to set the alternate server location to be at the hostname = AAA with a
port number of 123. Here is the command the database administrator would run at
database partition N1 (on the server instance):
db2 update alternate server for database db2 using hostname AAA port 123
After you have specified the alternate server location on a particular database at
the server instance, the alternate server location information is returned to the
client as part of the connection process. If communication between the client and
the server is lost for any reason, the DB2 client code will attempt to re-establish
the connection by using the alternate server information. The DB2 client will
attempt to re-connect with the original server and the alternate server, alternating
the attempts between the two servers. The timing of these attempts varies from
very rapid attempts at first to gradually lengthening intervals between the
attempts.
Consider the following two items involving alternate server connectivity with DB2
Connect server:
v The first consideration involves using DB2 Connect server for providing access
to a host or iSeries database on behalf of both remote and local clients. In such
situations, confusion can arise regarding alternate server connectivity
information in a system database directory entry. To minimize this confusion,
consider cataloging two entries in the system database directory to represent the
same host or iSeries database. Catalog one entry for remote clients and catalog
another for local clients.
v Secondly, the alternate server information that is returned from a target server is
kept only in cache. If the DB2 process is terminated, the cache information,
therefore the alternate server information, is lost.
Related concepts:
v “Automatic client reroute limitations” on page 47
Related reference:
v “Automatic client reroute examples” on page 49
v “Automatic client reroute roadmap” on page 45
Note: If the client is using CLI, JCC Type 2 or Type 4 drivers, after the
connection is re-established, then for those SQL and XQuery statements
that have been prepared against the original server, they are implicitly
re-prepared with the new server. However, for embedded SQL routines
(for example, SQC or SQX applications), they will not be re-prepared.
v Do not run high availability disaster recovery (HADR) commands on client
reroute-enabled database aliases. HADR commands are implemented to identify
the target database using database aliases. Consequently, if the target database
has an alternative database defined, it is difficult for HADR commands to
determine the database on which the command is actually operating. While a
client might need to connect using a client reroute-enabled alias, HADR
commands should be issued against a database alias that has no alternate server
defined, so that the target database is unambiguous.
An alternate way to implement automatic client rerouting is to use a DNS entry
that specifies an alternate IP address. The idea is to specify a second
IP address (an alternate server location) in the DNS entry; the client would not
know about an alternate server, but at connect time DB2 database system would
alternate between the IP addresses for the DNS entry.
Related tasks:
v “Specifying a server for automatic client reroute” on page 49
Related reference:
v “Automatic client reroute roadmap” on page 45
v “UPDATE ALTERNATE SERVER FOR DATABASE command” in Command
Reference
v “UPDATE ALTERNATE SERVER FOR LDAP DATABASE command” in
Command Reference
Procedure:
To define a new or alternate server, use the UPDATE ALTERNATE SERVER FOR
DATABASE or UPDATE ALTERNATE SERVER FOR LDAP
DATABASE command. These commands update the alternate server information
for a database alias in the system database directory.
Related concepts:
v “Automatic client reroute description and setup” on page 45
Related reference:
v “UPDATE ALTERNATE SERVER FOR DATABASE command” in Command
Reference
v “UPDATE ALTERNATE SERVER FOR LDAP DATABASE command” in
Command Reference
v “Automatic client reroute roadmap” on page 45
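The following fragment of an embedded SQL application shows how an
application can use checkpoints to re-execute failed work after the connection is
re-established (automatic client reroute returns SQLCODE -30108 in this case; the
beginning of the error-checking routine is not shown here):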
if (sqlca->sqlcode == -30108)
{
// connection is re-established, re-execute the failed transaction
if (checkpoint == 0)
{
goto checkpt0;
}
else if (checkpoint == 1)
{
goto checkpt1;
}
else if (checkpoint == 2)
{
goto checkpt2;
}
....
exit;
}
}
}
main()
{
connect to mydb;
check_sqlca("connect failed", &sqlca);
checkpt0:
EXEC SQL set current schema XXX;
check_sqlca("set current schema XXX failed", &sqlca);
if (sqlca.sqlcode == 0)
{
checkpoint = 1;
}
checkpt1:
EXEC SQL set current schema YYY;
check_sqlca("set current schema YYY failed", &sqlca);
if (sqlca.sqlcode == 0)
{
checkpoint = 2;
}
...
}
At the server “hornet” (hostname hornet, with a given port number), a database
“mydb” is created. Furthermore, the database “mydb” is also created at the
alternate server (hostname “montero” with port number 456). You will also need to
update the alternate server for database “mydb” at server “hornet” as follows:
db2 update alternate server for database mydb using hostname montero port 456
In the sample application above, and without having the automatic client reroute
feature set up, if there is a communication error in the create table t1 statement,
the application will be terminated. With the automatic client reroute feature set up,
the DB2 database system will try to establish the connection to host “hornet” (with
port 456) again. If it is still not working, the DB2 database system will try the
alternate server location (host “montero” with port 456). Assuming there is no
communication error on the connection to the alternate server location, the
application can then continue to run subsequent statements (and to re-run the
failed transaction).
At the server “hornet” (hostname hornet, with a given port number), the primary
database “mydb” is created. A standby database is also created at host “montero”
with port 456. Information on how to set up HADR for both a primary and standby
database is found in Data Recovery and High Availability Guide and Reference. You
will also need to update the alternate server for database “mydb” as follows:
db2 update alternate server for database mydb using hostname montero port 456
In the sample application above, and without having the automatic client reroute
feature set up, if there is a communication error in the create table t1 statement,
the application will be terminated. With the automatic client reroute feature set up,
the DB2 database system will try to establish the connection to host “hornet” (with
port 456) again. If it is still not working, the DB2 database system will try the
alternate server location (host “montero” with port 456). Assuming there is no
communication error on the connection to the alternate server location, the
application can then continue to run subsequent statements (and to re-run the
failed transaction).
Related concepts:
v “Automatic client reroute description and setup” on page 45
Related tasks:
v “Specifying a server for automatic client reroute” on page 49
Related reference:
v “Automatic client reroute roadmap” on page 45
Note:
Users of Type 4 connectivity with the DB2 Universal JDBC Driver should
use the following two datasource properties to configure automatic client
rerouting:
v maxRetriesForClientReroute: Use this property to limit the number of
retries if the primary connection to the server fails. This property is only
used if the retryIntervalForClientReroute property is also set.
v retryIntervalForClientReroute: Use this property to specify the amount of
time (in seconds) to sleep before retrying again. This property is only
used if the maxRetriesForClientReroute property is also set.
Related reference:
v “Automatic client reroute roadmap” on page 45
If client reroute is enabled, you need to set the connection timeout value to a value
that is equal to or greater than the maximum time it takes to connect to the server.
Otherwise, the connection might time out and be rerouted to the alternate server by
client reroute. For example, if on a normal day it takes about 10 seconds to connect
to the server, and on a busy day it takes about 20 seconds, the connection timeout
value should be set to at least 20 seconds.
Related concepts:
v “Client reroute” in Administration Guide: Planning
Distributor considerations
When a client to server connection fails, the client’s requests for reconnection are
distributed to a defined set of systems by a distributor or dispatcher, such as
WebSphere® EdgeServer.
Client -> distributor technology -> (DB2 Connect Server 1 or DB2 Connect
Server 2) -> DB2 z/OS
where:
v The distributor technology component has a TCP/IP host name of DThostname
v The DB2 Connect Server 1 has a TCP/IP host name of GWYhostname1
v The DB2 Connect Server 2 has a TCP/IP host name of GWYhostname2
v The DB2 z/OS server has a TCP/IP host name of zOShostname
For example, assume the distributor chooses GWYhostname2. This produces the
following environment:
Client -> distributor technology -> DB2 Connect Server 2 -> DB2 z/OS
The distributor does not retry any of the connections if there is any communication
failure. If you want to enable the automatic client reroute feature for a database in
such an environment, the alternative server for the associated database or
databases in the DB2 Connect server (DB2 Connect Server 1 or DB2 Connect Server
2) should be set up to be the distributor (DThostname). Then, if DB2 Connect
Server 1 locks up for any reason, automatic client rerouting is triggered and a
client connection is retried with the distributor as both the primary and the
alternate server. This option allows you to combine and maintain the distributor
capabilities with the DB2 automatic client reroute feature. Setting the alternate
server to a host other than the distributor host name still provides the clients with
the automatic client reroute feature. However, the clients will establish direct
connections to the defined alternate server and bypass the distributor technology,
which eliminates the distributor and the value that it brings.
The automatic client reroute feature intercepts the following SQL codes:
v sqlcode -20157
v sqlcode -1768 (reason code = 7)
Related reference:
v “Automatic client reroute roadmap” on page 45
The best way to avoid this is to set up an application to retrieve the alternate
server information. By using the javax.sql.DataSource interface, alternate server
parameters can be picked up by the JCC application and kept in non-volatile
storage on the client machine. The storage can be done using the JNDI API. If, for
instance, a local file system is specified as the non-volatile storage, JNDI will create
a .bindings file which will contain the required alternate server information. After
the current JVM is shut down, the information will then persist in that file until a
new JVM is created. The new JVM will attempt to connect to the server. If the
alternate server information has been updated, this will be updated on the client
machine without requiring your intervention. If the original server is unavailable,
however, the .bindings file will be read and a new connection attempt will be made at the
location of the alternate server. LDAP can also be used to provide non-volatile
storage for the alternate server information. Using volatile storage is not
recommended, as a client machine failure could result in the loss of alternate
server data stored in memory.
Related concepts:
v “Automatic client reroute description and setup” on page 45
v “Automatic client reroute limitations” on page 47
Related reference:
v “Automatic client reroute roadmap” on page 45
Automatic storage
This section contains information about automatic storage databases and table
spaces.
DB2 creates an automatic storage database by default. The command line processor
(CLP) provides a way to disable automatic storage during database creation by
explicitly using the AUTOMATIC STORAGE NO clause.
The following are some examples of automatic storage being disabled explicitly:
CREATE DATABASE ASNODB1 AUTOMATIC STORAGE NO
CREATE DATABASE ASNODB2 AUTOMATIC STORAGE NO ON X:
The following are some examples of automatic storage being enabled either
explicitly or implicitly:
CREATE DATABASE DB1
CREATE DATABASE DB2 AUTOMATIC STORAGE YES ON X:
CREATE DATABASE DB3 ON /data/path1, /data/path2
CREATE DATABASE DB4 ON D:\StoragePath DBPATH ON C:
Based on the syntax used, the DB2 database manager extracts the following two
pieces of information that pertain to storage locations:
v The database path (which is where DB2 stores various control files for the
database)
– If the DBPATH ON clause is specified, this clause indicates the database path.
– If the DBPATH ON clause is not specified, the first path listed in the ON
clause indicates the database path (in addition to it being a storage path).
– If neither the DBPATH ON nor the ON clauses are specified, the dftdbpath
database manager configuration parameter is used to determine the database
path.
v The storage paths (where DB2 creates automatic storage table space containers)
– If the ON clause is specified, all of the listed paths are storage paths.
– If the ON clause is not specified, there will be a single storage path that is set
to the value of the dftdbpath database manager configuration parameter.
For the examples shown previously, the following table summarizes the storage
paths used:
Table 3. Automatic storage databases and storage paths
CREATE DATABASE command                              Database path   Storage paths
CREATE DATABASE DB1 AUTOMATIC STORAGE YES            <dftdbpath>     <dftdbpath>
CREATE DATABASE DB2 AUTOMATIC STORAGE YES ON X:      X:              X:
CREATE DATABASE DB3 ON /data/path1, /data/path2      /data/path1     /data/path1, /data/path2
CREATE DATABASE DB4 ON D:\StoragePath DBPATH ON C:   C:              D:\StoragePath
The storage paths provided must exist and be accessible. In a partitioned database
environment, the same storage paths will be used on each database partition and
they must exist and be accessible on each of those database partitions. There is no
way to specify a unique set of storage paths for a particular database partition
unless database partition expressions are used as part of the storage path name.
For example:
CREATE DATABASE TESTDB ON "/path1ForNode $N", "/path2ForNode $N"
When free space is calculated for a storage path for a given database partition, the
database manager will check for the existence of the following directories or mount
points within the storage path and will use the first one that is found:
<storage path>/<instance name>/NODE####/<database name>
<storage path>/<instance name>/NODE####
<storage path>/<instance name>
<storage path>
where <storage path> is a storage path associated with the database, <instance
name> is the name of the instance, and #### is the database partition number (for
example, NODE0000).
In doing this, file systems can be mounted at a point beneath the storage path and
the database manager will recognize that the actual amount of free space available
for table space containers might not be the same amount that is associated with the
storage path directory itself.
Consider the example where two logical database partitions exist on one physical
computer and there is a single storage path: /db2data
When creating containers on the storage path and determining free space, the
database manager will know not to retrieve free space information for /db2data,
but instead retrieve it for the corresponding /db2data/<instance>/NODE####
directory.
Three default table spaces (a catalog table space, a system temporary table space,
and a user table space) are created whenever a database is created. If there
are no explicit table space definitions provided as part of the CREATE DATABASE
command, the table spaces are created as automatic storage table spaces.
After the database has been created, new storage paths can be added to the
database using the ADD STORAGE clause of the ALTER DATABASE statement.
For example:
ALTER DATABASE ADD STORAGE ON '/data/path3', '/data/path4'
Related concepts:
v “Automatic storage table spaces” on page 58
v “How containers are added and extended in DMS table spaces” in Administration
Guide: Planning
v “Table space maps” in Administration Guide: Planning
Related tasks:
v “Adding an automatic storage path” on page 64
Related reference:
v “Restore database implications” on page 59
v “Restrictions when using automatic storage” on page 62
v “ALTER DATABASE PARTITION GROUP statement” in SQL Reference, Volume 2
v “Monitoring storage paths” on page 62
v “ADD DBPARTITIONNUM command” in Command Reference
v “CREATE DATABASE command” in Command Reference
v “RESTORE DATABASE command” in Command Reference
Related concepts:
v “Automatic storage databases” on page 54
Here are some example statements that create automatic storage table spaces:
CREATE TABLESPACE TS1
CREATE TABLESPACE TS2 MANAGED BY AUTOMATIC STORAGE
CREATE TEMPORARY TABLESPACE TEMPTS
CREATE USER TEMPORARY TABLESPACE USRTMP MANAGED BY AUTOMATIC STORAGE
CREATE LONG TABLESPACE LONGTS
Although automatic storage table spaces appear to be a different table space type,
they are really just an extension of the existing SMS and DMS types. If the table
space being created is a REGULAR or LARGE table space, it is created as a DMS
table space with file containers. If the table space being created is a USER or
SYSTEM TEMPORARY table space, it is created as an SMS table space with
directory containers.
Note: This behavior might change in future versions of the DB2 database manager.
The names associated with these containers have the following format:
<storage path>/<instance>/NODE####/T#######/C#######.<EXT>
where:
<storage path> A storage path associated with the database
<instance> The instance under which the database was created
NODE#### The database partition number (NODE0000 for
example)
T####### The table space ID (for example, T0000003)
C####### The container ID (for example, C0000012)
<EXT> An extension based on the type of data being
stored.
Related concepts:
v “Automatic storage databases” on page 54
v “Temporary automatic storage table spaces” on page 57
Related tasks:
v “Viewing health alert objects” on page 447
Related reference:
v “Regular and large automatic storage table spaces” on page 63
v “Restrictions when using automatic storage” on page 62
Like the CREATE DATABASE command, the DB2 database manager extracts the
following two pieces of information that pertain to storage locations:
v The database path (which is where the DB2 database manager stores various
control files for the database)
– If the TO clause or the DBPATH ON clause is specified, the clause indicates
the database path.
– If the ON clause is used but the DBPATH ON clause is not specified with it,
the first path listed in the ON clause is used as the database path (in addition
to it being a storage path).
– If none of the TO, ON, or DBPATH ON clauses are specified, the dftdbpath
database manager configuration parameter determines the database path.
Note: If a database with the same name exists on disk, the database path is
ignored, and the database is placed into the same location as the existing
database.
v The storage paths (where DB2 creates automatic storage table space containers)
– If the ON clause is specified, all of the paths listed are considered storage
paths, and these paths are used instead of the ones stored within the backup
image.
– If the ON clause is not specified, no change is made to the storage paths (the
storage paths stored within the backup image are maintained).
For those cases where storage paths have been redefined as part of the restore
operation, the table spaces that are defined to use automatic storage are
automatically redirected to the new paths. However, you cannot explicitly redirect
containers associated with automatic storage table spaces using the SET
TABLESPACE CONTAINERS command; this action is not permitted.
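For example, to restore a database and redefine its storage paths in the same
operation (the database name and paths are illustrative):
db2 restore database mydb from /backups on /data/path1, /data/path2 dbpath on /data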
Use the -s option of the db2ckbkp command to show whether or not automatic
storage is enabled for a database within a backup image. The storage paths
associated with the database are displayed if automatic storage is enabled.
Related tasks:
v “Adding an automatic storage path” on page 64
Related reference:
v “Regular and large automatic storage table spaces” on page 63
v “Restrictions when using automatic storage” on page 62
If the bufferpool monitor switch is on, the following elements are also set:
File system ID = 12345
File system free space (bytes) = 20000000000
File system used space (bytes) = 40000000000000
File system total space (bytes) = 40020000000000
This data is displayed on a per-path basis: per path on a single database partition
system, and per path for each database partition in a partitioned database
environment.
Related concepts:
v “Automatic storage table spaces” on page 58
v “Automatic storage databases” on page 54
v “Temporary automatic storage table spaces” on page 57
Related reference:
v “Restore database implications” on page 59
v “Regular and large automatic storage table spaces” on page 63
v “Restrictions when using automatic storage” on page 62
Related concepts:
v “Automatic storage table spaces” on page 58
v “Automatic storage databases” on page 54
v “Temporary automatic storage table spaces” on page 57
Related reference:
v “Regular and large automatic storage table spaces” on page 63
When a regular or large automatic storage table space is created, an initial size can
be specified as part of the CREATE TABLESPACE statement. For example:
CREATE TABLESPACE TS1 INITIALSIZE 100 M
If the initial size is not specified, DB2 uses a default value of 32 megabytes.
To create a table space with a given size, the DB2 database manager creates file
containers within the storage paths. If there is an uneven distribution of space
among the paths, containers might be created with different sizes. As a result, it is
important that all of the storage paths have a similar amount of free space on
them.
If automatic resizing is enabled for the table space, as space is used within it, the
DB2 database manager automatically extends existing containers and adds new
ones (using stripe sets). Whether containers are extended or added, no rebalance
ever takes place.
Related tasks:
v “Viewing health alert objects” on page 447
Related reference:
v “Monitoring storage paths” on page 62
v “Restore database implications” on page 59
v “Restrictions when using automatic storage” on page 62
When you add a storage path for a multiple-partition database environment, the
exact storage path must be replicated on each database partition. A path and its
associated folders must be created on each database partition. For this reason, the
new folder icon is unavailable when adding a storage path. If a specified path does
not exist on every database partition, the statement is rolled back.
Restrictions:
A database is enabled for automatic storage when it is created. You cannot enable
automatic storage for a database that was not originally defined as an automatic
storage database.
Procedure:
1. Expand the object tree until you see the Table Spaces folder of the database to which
you want to add a storage path. Right-click the Table Spaces folder and select Manage
Storage → Add Automatic Storage from the pop-up menu. The Add Storage window
opens.
2. Click Add. The Add Storage Path window opens.
3. Specify the storage path.
To add a storage path to an existing database using the command line, use the
ALTER DATABASE statement.
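For example, to add a storage path from the CLP (the path is illustrative):
db2 "ALTER DATABASE ADD STORAGE ON '/data/path3'"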
Related concepts:
v “Automatic storage databases” on page 54
Related reference:
v “ALTER DATABASE statement” in SQL Reference, Volume 2
License management
The management of licenses for your DB2 products is done primarily through the
License Center within the Control Center of the online interface to the product.
When the Control Center cannot be used, the db2licm License Management Tool
command performs basic license functions. With this command, you are able to
add, remove, list, and modify licenses and policies installed on your local system.
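For example, to list the license information for the products installed on your
local system:
db2licm -l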
Related concepts:
v “Control Center overview” on page 376
v “License Center overview” on page 411
Related reference:
v “db2licm - License management tool command” in Command Reference
Prior to the introduction of the DB2 database profile registry, changing your
environment variables on Windows workstations (for example) required you to
change an environment variable and restart the computer. Now, your environment is controlled,
with a few exceptions, by registry variables stored in the DB2 profile registries.
Users on UNIX operating systems with system administration (SYSADM) authority
for a given instance can update registry values for that instance. Windows users do
not need SYSADM authority to update registry variables. Use the db2set command
to update registry variables without restarting; this information is stored
immediately in the profile registries. The DB2 registry applies the updated
information to DB2 server instances and DB2 applications started after the changes
are made.
When updating the registry, changes do not affect the currently running DB2
applications or users. Applications started following the update use the new
values.
Note: The DB2 environment variables DB2INSTANCE and DB2NODE might
not be stored in the DB2 profile registries. On some operating systems the
set command must be used in order to update these environment variables.
These changes are in effect until the next time the system is restarted. On
UNIX platforms, the export command might be used instead of the set
command.
Using the profile registry allows for centralized control of the environment
variables. Different levels of support are now provided through the different
profiles. Remote administration of the environment variables is also available when
using the DB2 Administration Server.
DB2 configures the operating environment by checking for registry values and
environment variables and resolving them in the following order:
1. Environment variables set with the set command. (Or the export command on
UNIX platforms.)
2. Registry values set with the instance node level profile (using the db2set -i
<instance name> <nodenum> command).
3. Registry values set with the instance level profile (using the db2set -i
command).
4. Registry values set with the global level profile (using the db2set -g command).
There are a couple of UNIX and Windows differences when working with a
partitioned database environment. These differences are shown in the following
example. Assume that an instance spans three computers named “red”, “white”,
and “blue”. On UNIX platforms, if a user on “red” enters:
db2set -i FOO=BAR
or
db2set FOO=BAR ('-i' is implied)
the value of FOO will be visible to all nodes of the current instance (that is, “red”,
“white”, and “blue”).
On UNIX platforms, the instance level profile registry is stored in a text file inside
the sqllib directory. In partitioned database environments, the sqllib directory is
located on the filesystem shared by all physical database partitions.
On Windows platforms, if the user performs the same command from “red”, the
value of FOO will only be visible on “red” of the current instance. The DB2
database manager stores the instance level profile registry inside the Windows
registry. There is no sharing across physical database partitions. To set the registry
variables on all the physical computers, use the “rah” command as follows:
rah db2set -i FOO=BAR
Using the example shown above, and assuming that “red” is the owning computer,
then one would set DB2REMOTEPREG on “white” and “blue” computers to share
the registry variables on “red” by doing the following:
(on red) do nothing
(on white and blue) db2set DB2REMOTEPREG=\\red
When the DB2 database manager reads the registry variables on Windows, it first
reads the DB2REMOTEPREG value. If DB2REMOTEPREG is set, it then opens the
registry on the remote computer whose computer name is specified in the
DB2REMOTEPREG variable. Subsequent reading and updating of the registry
variables will be redirected to the specified remote computer.
Accessing the remote registry requires that the Remote Registry Service is running
on the target computer. Also, the user logon account and all DB2 service logon
accounts must have sufficient access to the remote registry. Therefore, to use
DB2REMOTEPREG, you should operate in a Windows domain environment so
that the required registry access can be granted to the domain account.
There are Microsoft Cluster Server (MSCS) considerations. You should not use
DB2REMOTEPREG in an MSCS environment. When running in an MSCS
configuration where all computers belong to the same MSCS cluster, the registry
variables are maintained in the cluster registry. Therefore, they are already shared
between all computers in the same MSCS cluster and there is no need to use
DB2REMOTEPREG in this case.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related tasks:
v “Declaring, showing, changing, resetting, and deleting registry and environment
variables” on page 68
When you run commands to start or stop an instance’s database manager, DB2
applies the command to the current instance. DB2 determines the current instance
as follows: if the DB2INSTANCE environment variable is set, its value identifies
the current instance; otherwise, the value of the DB2INSTDEF registry variable is
used.
Related tasks:
v “Declaring, showing, changing, resetting, and deleting registry and environment
variables” on page 68
The db2set command supports the local declaration of the registry and
environment variables.
Procedure:
To list all defined registry variables for the current or default instance, use:
db2set
To show the value of a registry variable in the current or default instance, use:
db2set registry_variable_name
If you use the Lightweight Directory Access Protocol (LDAP), you can set registry
variables in LDAP using:
v To set registry variables at the user level within LDAP, use:
db2set -ul
v To set registry variables at the global level within LDAP, use:
db2set -gl user_name
When running in an LDAP environment, you can set a DB2 registry variable value
so that its scope is global to all servers and all users that belong to a directory
partition or to a Windows domain. Currently, there are only two DB2 registry
variables that can be set at the LDAP global level:
DB2LDAP_KEEP_CONNECTION and DB2LDAP_SEARCH_SCOPE.
For example, to set the search scope value at the global level in LDAP, use:
db2set -gl db2ldap_search_scope=value
To reset a registry variable for an instance back to the default found in the Global
Profile Registry, use:
db2set -r registry_variable_name
To delete a variable’s value at a specified level, you can use the same command
syntax to set the variable but specify nothing for the variable value. For example,
to delete the variable’s setting at the database partition level, enter:
db2set registry_variable_name= -i instance_name database_partition_number
To delete a variable’s value and to restrict its use, if it is defined at a higher profile
level, enter:
db2set registry_variable_name= -null instance_name
This command deletes the setting for the variable you specify and prevents
higher-level profiles (in this case, the DB2 global-level profile) from changing this
variable's value. However, the variable you specify could still be set by a
lower-level profile (in this case, the DB2 database partition-level profile).
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related tasks:
v “Searching the LDAP servers” on page 585
v “Setting DB2 registry variables at the user level in the LDAP environment” on
page 587
v “Setting environment variables on UNIX systems” on page 79
v “Setting environment variables on Windows” on page 77
Note: Because Windows reports the ANSI code page rather than a Unicode
code page in the Windows regional settings, a Windows application
will not behave as a Unicode client by default. To override this
behavior, set the DB2CODEPAGE registry variable to 1208 (the
Unicode code page) to cause the application to behave as a Unicode
application.
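For example:
db2set DB2CODEPAGE=1208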
DB2_COLLECT_TS_REC_INFO
v Operating system: All
v Default=ON, Values: ON or OFF
v This variable specifies whether DB2 will process all log files when
rolling forward a table space, regardless of whether the log files contain
log records that affect the table space. To skip the log files known not to
contain any log records affecting the table space, set this variable to
"ON". DB2_COLLECT_TS_REC_INFO must be set before the log files are
created and used so that the information required for skipping log files
is collected.
DB2_CONNRETRIES_INTERVAL
v Operating system: All
v Default= not set, Values: an integer number of seconds
v This variable specifies the sleep time between consecutive connection
retries, in seconds, for the automatic client reroute feature. You can use
this variable in conjunction with DB2_MAX_CLIENT_CONNRETRIES to
configure the retry behavior for automatic client reroute.
If DB2_MAX_CLIENT_CONNRETRIES is set, but
DB2_CONNRETRIES_INTERVAL is not,
DB2_CONNRETRIES_INTERVAL defaults to 30. If
DB2_MAX_CLIENT_CONNRETRIES is not set, but
DB2_CONNRETRIES_INTERVAL is set,
DB2_MAX_CLIENT_CONNRETRIES defaults to 10. If neither
DB2_MAX_CLIENT_CONNRETRIES nor
DB2_CONNRETRIES_INTERVAL is set, the automatic client reroute
feature reverts to its default behavior of retrying the connection to a
database repeatedly for up to 10 minutes.
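For example, to have the client retry a failed connection five times with three
seconds between attempts (the values are illustrative):
db2set DB2_MAX_CLIENT_CONNRETRIES=5
db2set DB2_CONNRETRIES_INTERVAL=3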
DB2CONSOLECP
v Operating system: Windows
v Default= null, Values: all valid code page values
v Specifies the codepage for displaying DB2 message text. When specified,
this value overrides the operating system codepage setting.
DB2COUNTRY
v Operating system: Windows
v Default=null, Values: all valid numeric country, territory, or region codes
Note: On Linux platforms, the default core file size limit is set to 0
(that is, ulimit -c). With this setting, core files are not generated.
To allow core files to be created on Linux platforms, set the value
to unlimited.
Core files contain the entire process image of the terminating DB2
process. Consideration should be given to the available file system space
as core files can be quite large. The size is dependent on the DB2
configuration and the state of the process at the time the problem occurs.
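For example, to allow core files to be created, remove the limit in the shell
from which the instance is started:
ulimit -c unlimited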
DB2_FORCE_APP_ON_MAX_LOG
v Operating system: All
v Default: TRUE, Values: TRUE, FALSE
v Specifies what happens when the MAX_LOG configuration parameter
value is exceeded. If set to TRUE, the application is forced off the
database and the unit of work is rolled back.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
When you have set DB2_WORKLOAD=SAP, the user table space SYSTOOLSPACE
and the user temporary table space SYSTOOLSTMPSPACE are not automatically
created. These table spaces are used for tables created automatically by the
following wizards, utilities, or functions:
v Automatic maintenance
v Design advisor
v Control Center database information panel
v SYSINSTALLOBJECTS stored procedure, if the table space input parameter is
not specified
v GET_DBSIZE_INFO stored procedure
After completing at least one of these choices, create a user temporary table space
(also on the catalog node only, if using the Database Partitioning Feature (DPF)).
For example (a representative statement; adjust the container definition as
needed):
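CREATE USER TEMPORARY TABLESPACE SYSTOOLSTMPSPACE IN IBMCATGROUP
MANAGED BY SYSTEM USING ('SYSTOOLSTMPSPACE')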
Once the table space SYSTOOLSPACE and the temporary table space
SYSTOOLSTMPSPACE are created, you can use the wizards, utilities, or functions
mentioned earlier.
When you explicitly set a registry variable which is then overridden by using an
aggregate registry variable, a warning is issued. This warning tells you that the
explicit value is maintained. If the aggregate registry variable is used first and then
you specify an explicit registry variable, a warning is not given.
None of the registry variables that are configured through setting an aggregate
registry variable are shown unless you explicitly make that request for each
variable. When you query the aggregate registry variable, only the value assigned
to that variable is shown. Most of your users should not care about the values for
each individual variable.
The following example shows the interaction between using the aggregate registry
variable and explicitly setting a registry variable. For example, you might have set
the DB2_WORKLOAD aggregate registry variable to SAP and have overridden the
DB2_SKIPDELETED registry variable to NO. By entering db2set, you would
receive the following results:
DB2_WORKLOAD=SAP
DB2_SKIPDELETED=NO
In another situation, you might have set DB2ENVLIST, set the DB2_WORKLOAD
aggregate registry variable to SAP, and overridden the DB2_SKIPDELETED
registry variable to NO. (This assumes that the DB2_SKIPDELETED registry
variable is part of the group making up the SAP environment.) In addition, those
registry variables that were configured automatically through setting the aggregate
registry variable will show the name of the aggregate displayed within square
brackets, adjacent to its value. The DB2_SKIPDELETED registry variable will show
a “NO” value and will show “[O]” displayed adjacent to its value.
You might need to see the values for each registry variable that is a member of the
DB2_WORKLOAD aggregate registry variable. Before setting the
DB2_WORKLOAD aggregate registry variable to SAP, and assuming that no
registry variables that are included in the group are explicitly defined, you might
want to see the values that would be used if you configured the
DB2_WORKLOAD aggregate registry variable to SAP. To find the values that
would be used if DB2_WORKLOAD=SAP, run db2set -gd DB2_WORKLOAD=SAP. This
returns a list of registry variables and their values.
Related concepts:
v “Environment variables and the profile registry” on page 65
v “DB2 registry and environment variables” in Performance Guide
Related tasks:
v “Declaring, showing, changing, resetting, and deleting registry and environment
variables” on page 68
v “Setting DB2 registry variables at the user level in the LDAP environment” on
page 587
Related reference:
v “General registry variables” on page 70
DB2 Enterprise Server Edition servers on Windows have two system environment
variables, DB2INSTANCE and DB2NODE, that can only be set outside the profile
registry. You are not required to set DB2INSTANCE. The DB2 profile registry
variable DB2INSTDEF might be set in the global level profile to specify the
instance name to use if DB2INSTANCE is not defined.
Procedure:
To determine the settings of an environment variable, use the echo command. For
example, to check the value of the DB2PATH environment variable, enter:
echo %db2path%
You can set the DB2 environment variables DB2INSTANCE and DB2NODE as
follows (using DB2INSTANCE in this description):
Note: The environment variable DB2INSTANCE can also be set at the session
(process) level. For example, if you want to start a second DB2 instance
called TEST, issue the following commands in a command window:
set DB2INSTANCE=TEST
db2start
Note: The instance_name and the node_number are specific to the database
partition you are working with.
v There is no DB2 Instance Profile Registry required. For each of the DB2 instances
in the system, a key is created in the path:
\HKEY_LOCAL_MACHINE\SOFTWARE\IBM\DB2\PROFILES\instance_name
The list of instances can be obtained by counting the keys under the PROFILES
key.
Related concepts:
v “DB2 Administration Server” on page 91
Related tasks:
v “Setting environment variables on UNIX systems” on page 79
The scripts db2profile (for Bourne or Korn shell) and db2cshrc (for C shell)
are provided as examples to help you set up the database environment. You can
find these files in insthome/sqllib, where insthome is the home directory of the
instance owner.
Note: Except for PATH and DB2INSTANCE, all other supported variables must be
set in the DB2 profile registry. To set variables that are not supported by the
DB2 database manager, define them in your script files, userprofile and
usercshrc.
An instance owner or SYSADM user might customize these scripts for all users of
an instance. Alternatively, users can copy and customize a script, then invoke a
script directly or add it to their .profile or .login files.
Procedure:
To change the environment variable for the current session, issue commands
similar to the following:
v For Korn shell:
DB2INSTANCE=inst1
export DB2INSTANCE
v For Bourne shell:
DB2INSTANCE=inst1
export DB2INSTANCE
v For C shell:
setenv DB2INSTANCE inst1
In order for the DB2 profile registry to be administered properly, the following file
ownership rules must be followed on UNIX operating systems.
v The DB2 Instance Level Profile Registry file is located under:
INSTHOME/sqllib/profile.env
The access permissions and ownership of this file should be:
-rw-rw-r-- <db2inst1> <db2iadm1> profile.env
where <db2inst1> is the instance owner, and <db2iadm1> is the instance owner’s
group.
The INSTHOME is the home path of the instance owner.
v The DB2 Global Level Profile Registry is located under:
– <installation path>/default.env for all UNIX and Linux platforms.
The access permissions and ownership of this file should be:
-rw-rw-r-- root <group> default.env
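If these settings have been altered, you might restore them with commands along
these lines (run as root; the instance owner, group, and home path shown are
illustrative):
chown db2inst1:db2iadm1 /home/db2inst1/sqllib/profile.env
chmod 664 /home/db2inst1/sqllib/profile.env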
Related concepts:
v “DB2 Administration Server” on page 91
Related tasks:
v “Setting environment variables on Windows” on page 77
You should not manually change the parameters in the configuration file; use
only the supported interfaces to update them.
Performance Tip: Many of the configuration parameters come with default values,
but might need to be updated to achieve optimal performance for your database.
For multi-partition databases: When you have a database that is distributed across
more than one database partition, the configuration file should be the same on all
database partitions. Consistency is required since the query compiler compiles
distributed SQL statements based on information in the local node configuration
file and creates an access plan to satisfy the needs of the SQL statement.
Maintaining different configuration files on database partitions could lead to
different access plans, depending on which database partition the statement is
prepared on. Use db2_all to keep the configuration files synchronized across all
database partitions, as shown in the example that follows.
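For example, the following db2_all command (the database name and parameter
value are illustrative) applies the same update on all database partitions listed
in the node configuration file:
db2_all "db2 UPDATE DB CFG FOR mydb USING MAXAPPLS 200"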
Related tasks:
v “Configuring DB2 with configuration parameters” in Performance Guide
Note: You should not create files or directories under the sqllib subdirectory
other than those created by the DB2 database manager; doing so can lead to
the loss of data if an instance is deleted. There are two exceptions: if
your system supports stored procedures, put the stored procedure
applications in the function subdirectory under the sqllib subdirectory;
and if user-defined functions (UDFs) have been created, their executables
are allowed in the same function subdirectory.
The file contains one line for each database partition that belongs to an instance.
Each line has the following format:
dbpartitionnum hostname [logical-port [netname]]
The following example shows a possible node configuration file for an RS/6000®
SP™ system on which SP2EN1 has multiple TCP/IP interfaces, two logical
partitions, and uses SP2SW1 as the DB2 database interface. It also shows the
database partition numbers starting at 1 (rather than at 0), and a gap in the
dbpartitionnum sequence:
Table 9. Database partition number example table
dbpartitionnum  hostname              logical-port  netname
1               SP2EN1.mach1.xxx.com  0             SP2SW1
2               SP2EN1.mach1.xxx.com  1             SP2SW1
4               SP2EN2.mach1.xxx.com  0
5               SP2EN3.mach1.xxx.com
You can update the db2nodes.cfg file using an editor of your choice, except on
Windows, where you should use the db2ncrt, db2nchg, and db2ndrop commands
rather than editing the file directly. You must be careful, however, to protect
the integrity of the information in the file, because database partitioning
requires that the node configuration file be locked when you issue db2start and
unlocked after db2stop ends the database manager. The db2start command can
update the file, if necessary, while the file is locked. For example, you can
issue db2start with the RESTART option or the ADDNODE option.
Related concepts:
v “Guidelines for stored procedures” in Developing SQL and External Routines
Related reference:
v “CREATE DATABASE command” in Command Reference
v “db2nchg - Change database partition server configuration command” in
Command Reference
v “db2ncrt - Add database partition server to an instance command” in Command
Reference
v “db2ndrop - Drop database partition server from an instance command” in
Command Reference
v “db2start - Start DB2 command” in Command Reference
v “db2stop - Stop DB2 command” in Command Reference
v “DROP DATABASE command” in Command Reference
See Automatic features enabled by default for other DB2 features that are enabled
by default.
Prerequisites:
Procedure:
You can use available options for AUTOCONFIGURE to define values for several
configuration parameters and to determine the scope of the application of those
parameters. The scope can be NONE, meaning none of the values are applied; DB
ONLY, meaning only database configuration and buffer pool values are applied; or,
DB AND DBM, meaning all parameters and their values are applied.
Note: Even if the Configuration Advisor was automatically enabled for the
CREATE DATABASE request, you can still specify AUTOCONFIGURE <options>
if desired, in particular the APPLY DB AND DBM option, to apply the
recommended database and database manager configuration values. If the
Configuration Advisor was disabled for the CREATE DATABASE request, you
can run it manually afterwards with the given options.
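For example, the following command generates recommendations using two
illustrative hint values and applies both the database and database manager
values:
db2 autoconfigure using mem_percent 60 workload_type mixed apply db and dbm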
Related tasks:
v “Creating a database” on page 113
Related reference:
v “AUTOCONFIGURE command” in Command Reference
v “AUTOCONFIGURE command using the ADMIN_CMD procedure” in
Administrative SQL Routines and Views
The values suggested by the Configuration Advisor are relevant for only one
database per instance. If you want to use this advisor on more than one database,
each database should belong to a separate instance.
Prerequisites:
Procedure:
1. Expand the object tree until you find the database object for which you would like DB2
to provide configuration recommendations.
2. Right-click the database and select Configuration Advisor from the pop-up menu. The
Configuration Advisor opens.
Detailed information is provided through the online help facility within the Control Center.
Related concepts:
v “Automatic features enabled by default” in Administration Guide: Planning
Related tasks:
v “Creating a database” on page 113
Related reference:
v “Configuration Advisor sample output” on page 85
v “AUTOCONFIGURE command” in Command Reference
v “db2AutoConfig API - Access the Configuration Advisor” in Administrative API
Reference
A hint is a parameter value passed to the command. If you are unsure about a
hint value, you can omit it, and the default will be used. When using the
advisor, you can pass up to 10 hints, including MEM_PERCENT and WORKLOAD_TYPE.
If you agree with all of the recommendations, you can reissue the
AUTOCONFIGURE command, but specify that you want the recommended
values to be applied. Otherwise, you can update individual configuration
parameters using the UPDATE DATABASE MANAGER CONFIGURATION
command and the UPDATE DATABASE CONFIGURATION command.
Related tasks:
v “Generating recommendations for database configuration” on page 84
Related reference:
v “AUTOCONFIGURE command” in Command Reference
v “UPDATE DATABASE CONFIGURATION command” in Command Reference
v “UPDATE DATABASE MANAGER CONFIGURATION command” in Command
Reference
v “db2AutoConfig API - Access the Configuration Advisor” in Administrative API
Reference
Deletes and updates to the database history file can only be done through the
PRUNE or UPDATE HISTORY commands.
Prerequisites:
Restrictions:
Procedure:
To hide the syntax of the administrative view, you can create a view as follows:
CREATE VIEW LIST_HISTORY AS
SELECT * FROM TABLE(DB_HISTORY()) AS LIST_HISTORY
After creating this view, you can run queries against the view. For example:
SELECT * FROM LIST_HISTORY
or
SELECT dbpartitionnum FROM LIST_HISTORY
or
SELECT dbpartitionnum, start_time, seqnum, tabname, sqlstate
FROM LIST_HISTORY
Table 13 lists the columns and the column data types returned by the
LIST_HISTORY table function.
Table 13. Contents of the history table
Column name Data type
dbpartitionnum smallint
EID bigint
start_time char(14)
seqnum smallint
end_time varchar(14)
firstlog varchar(254)
lastlog varchar(254)
backup_id varchar(24)
tabschema varchar(128)
tabname varchar(128)
comment varchar(254)
cmd_text clob(2M)
num_tbsps integer
tbspnames clob(5M)
operation char(1)
operationtype char(1)
objecttype char(1)
Related reference:
v “DB_HISTORY administrative view – Retrieve history file information” in
Administrative SQL Routines and Views
[Figure: A common tool set administers DB2 instances on Windows and UNIX
systems, DB2 subsystems on OS/390 and z/OS (through UNIX System Services (USS)),
and iSeries systems; a DB2 administration server, with a scheduler, runs on each
administered system.]
DAS assists the Control Center and Configuration Assistant when working on the
following administration tasks:
v Enabling remote administration of DB2 database instances.
v Providing the facility for job management, including the ability to schedule the
running of both DB2 database manager and operating system command scripts.
These command scripts are user-defined.
v Defining the scheduling of jobs, viewing the results of completed jobs, and
performing other administrative tasks against jobs located either remotely or
locally to the DAS using the Task Center.
v Providing a means for discovering information about the configuration of DB2
instances, databases, and other DB2 administration servers, in conjunction with
DB2 Discovery.
You can only have one DAS in a database server. If one is already created, you
need to drop it by issuing db2admin drop. DAS is configured during installation to
start when the operating system is booted.
DAS is used to perform remote tasks on the server system and the host system on
behalf of a client request from the Control Center, the Configuration Assistant, or
any of the other available tools.
The DAS is available on all supported Windows and UNIX platforms as well as
the zSeries® (OS/390® and z/OS only) platforms. The DAS on zSeries is used to
support the Control Center, Development Center, and Replication Center in
administrative tasks.
The DB2 administration server on zSeries (OS/390 and z/OS only), will be
packaged and delivered as part of the DB2 Management clients feature of the DB2
system. Products that need DAS, like the Control Center, Replication Center, and
Development Center, require the installation of the DAS function. For information
on the availability of DAS on your operating system, contact your IBM
representative.
The DAS on Windows and UNIX includes a scheduler to run tasks (such as DB2
database and operating system command scripts) defined using the Task Center.
Task information, such as the commands to be run, the schedule, the notification
and completion actions associated with the task, and the run results, is stored
in a set of tables and views in a DB2 database called the Tools Catalog.
created as part of the setup. It can also be created and activated through the
Control Center, or through the CLP using the CREATE TOOLS CATALOG
command.
Although a scheduler is not provided on zSeries (OS/390 and z/OS only), you can
use the Build JCL and Create JCL functions provided in the Control Center to
generate JCL that is saved in partitioned datasets to be run using your system
scheduler.
Related concepts:
v “DB2 administration server (DAS) configuration on Enterprise Server Edition
(ESE) systems” on page 106
v “DB2 administration server (DAS) first failure data capture (FFDC)” on page 111
v “Discovery of administration servers, instances, and databases” on page 107
v “Security considerations for the DB2 administration server (DAS) on Windows”
on page 102
Related tasks:
v “Configuring the DB2 administration server (DAS)” on page 95
v “Creating a DB2 administration server (DAS)” on page 93
v “DB2 administration server (DAS) Java virtual machine setup” on page 101
v “Discovering and hiding server instances and databases” on page 108
v “Listing the DB2 administration server (DAS)” on page 95
v “Notification and contact list setup and configuration” on page 100
Prerequisites:
To create a DAS, you must have root authority on UNIX platforms or, on
Windows, use an account that has the correct authorization to create a service.
Restrictions:
You can only have one DAS in a database server. If one is already created, you
need to drop it by issuing db2admin drop.
Procedure:
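The procedure is platform-specific; as a sketch (the user IDs shown are
hypothetical), the DAS is created with the dascrt command on UNIX and with the
db2admin command on Windows:
dascrt -u db2as                                    (UNIX, run as root)
db2admin create /user:db2admin1 /password:userpw   (Windows)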
Related tasks:
v “Removing the DB2 administration server (DAS)” on page 103
Related reference:
v “db2admin - DB2 administration server command” in Command Reference
To manually start or stop the DAS on Windows, you must first log on to the
computer using an account or user ID that belongs to the Administrators, Server
Operators, or Power Users group. To manually start or stop the DAS on UNIX, the
account or user ID must be part of the dasadm_group. The dasadm_group is
specified in the DAS configuration parameters.
Procedure:
To start or stop the DAS on Windows use the db2admin start or db2admin stop
commands.
When working with the DB2 database manager for any of the UNIX operating
systems, you must do the following:
v To start the DAS:
1. Log in as the DAS owner.
2. Run the startup script using one of the following:
. DASHOME/das/dasprofile (for Bourne or Korn shell)
source DASHOME/das/dascshrc (for C shell)
where DASHOME is the home directory of the DAS owner.
3. Start the DAS using the db2admin command as follows:
db2admin start
Note: The DAS is automatically started after each system restart. The default
startup behavior of the DAS can be altered using the dasauto command.
v To stop the DAS:
1. Log in as an account or user ID that is part of the dasadm_group.
2. Stop the DAS using the db2admin command as follows:
db2admin stop
Related reference:
v “db2admin - DB2 administration server command” in Command Reference
v “dasadm_group - DAS administration authority group name configuration
parameter” in Performance Guide
The db2admin command is also used to start or stop the DAS, create a new user
and password, drop a DAS, and establish or modify the user account associated
with the DAS.
Related reference:
v “db2admin - DB2 administration server command” in Command Reference
To see the current values for the DB2 administration server configuration
parameters relevant to the DAS, enter:
db2 get admin cfg
This will show you the current values that were given as defaults during the
installation of the product or those that were given during previous updates to the
configuration parameters.
In order to update the DAS configuration file using the Command Line Processor
(CLP) and the UPDATE ADMIN CONFIG command, you must use the CLP from
an instance that is at the same installed level as the DAS. To update individual
entries in the DAS configuration file, enter:
db2 update admin cfg using ...
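For example, to enable the scheduler (sched_enable is one of the DAS
configuration parameters referenced later in this chapter):
db2 update admin cfg using sched_enable on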
In some cases, changes to the DAS configuration file become effective only after
they are loaded into memory (that is, when a db2admin stop is followed by a
db2admin start; or, in the case of a Windows platform, stopping and starting the
service). In other cases, the configuration parameters are configurable online (that
is, you do not have to restart the DAS for them to take effect).
Related tasks:
v “Configuring DB2 with configuration parameters” in Performance Guide
Prerequisites:
Procedure:
The goal is to set up and configure the tools catalog database and the DAS
scheduler.
The DB2 administration server Configuration process tells the Scheduler the
location of the tools catalog database, and whether or not the Scheduler should be
enabled. By default, when a tools catalog database is created, its corresponding
DAS configuration is updated. That is, the Scheduler is configured and ready to
use the new tools catalog; there is no need to restart the DAS.
The tools catalog database can be created on a server that is local or remote from
the Scheduler system. If the tools catalog is created on a remote server, it must be
cataloged at the scheduler tools catalog database instance (TOOLSCAT_INST). In
addition, the scheduler user ID must be set by using the command db2admin
setschedid, so that the scheduler can connect and authenticate with the remote
catalog. The full syntax for the db2admin command is found in the Command
Reference.
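For example, a tools catalog could be created from the CLP as follows (the
catalog schema name systools and database name toolsdb are illustrative):
db2 create tools catalog systools create new database toolsdb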
The DAS scheduler requires a Java™ virtual machine (JVM) to access the tools
catalog information. The JVM information is specified using the jdk_path DB2
administration server configuration parameter of the DAS.
The Control Center and Task Center access the tools catalog database directly from
the client. The tools catalog database therefore needs to be cataloged at the client
before the Control Center can make use of it. The Control Center provides the
means to automatically retrieve information about the tools catalog database and
create the necessary directory entries in the local node directory and database
directory. The only communication protocol supported for this automatic
cataloging is TCP/IP.
For example, if you have a job scheduled to run every Saturday, and the scheduler
is turned off on Friday and then restarted on Monday, the job scheduled for
Saturday is now a job that is scheduled in the past. If exec_exp_task is set to “Yes”,
your Saturday job runs when the scheduler is restarted.
Note: If the DAS is going to be created by db2admin create, make sure you use
the /USER and /PASSWORD options. The USER account is used by the
scheduler process. Without it, the scheduler will not be started properly.
The USER account should have SYSADM authority on the tools catalog
instance.
Related reference:
v “exec_exp_task - Execute expired tasks configuration parameter” in Performance
Guide
v “jdk_path - Software Developer's Kit for Java installation path DAS
configuration parameter” in Performance Guide
v “sched_enable - Scheduler mode configuration parameter” in Performance Guide
v “smtp_server - SMTP server configuration parameter” in Performance Guide
v “svcename - TCP/IP service name configuration parameter” in Performance Guide
v “toolscat_db - Tools catalog database configuration parameter” in Performance
Guide
v “toolscat_inst - Tools catalog database instance configuration parameter” in
Performance Guide
v “toolscat_schema - Tools catalog database schema configuration parameter” in
Performance Guide
Procedure:
There are two DAS configuration parameters used to enable notifications by the
scheduler or the health monitor.
The DAS configuration parameter smtp_server is used to identify the Simple Mail
Transfer Protocol (SMTP) server used by the scheduler to send e-mail and pager
notifications as part of task execution completion actions as defined through the
Task Center, or by the health monitor to send alert notifications using e-mail or
pager.
The DAS configuration parameter contact_host specifies the location where the
contact information used by the scheduler and health monitor for notification is
stored. The location is defined to be a DB2 administration server’s TCP/IP
hostname. Allowing contact_host to be located on a remote DAS provides support
for sharing a contact list across multiple DB2 administration servers. This should
be set for partitioned database environments to ensure that a common contact list
is used by all database partitions.
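For example, both parameters can be set through the CLP (the host names below
are placeholders):
db2 update admin cfg using smtp_server smtphost.example.com
db2 update admin cfg using contact_host dashost.example.com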
Related reference:
v “contact_host - Location of contact list configuration parameter” in Performance
Guide
v “smtp_server - SMTP server configuration parameter” in Performance Guide
The jdk_path configuration parameter specifies the directory under which the IBM
Software Developer’s Kit (SDK) for Java to be used for running DB2 administration
server functions is installed. The environment variables used by the Java
interpreter are computed from the value of this parameter.
The scheduler requires a Java virtual machine (JVM) in order to use the tools
catalog database. The JVM must be set up before the scheduler can be started
successfully.
There is no default value for this parameter when working with UNIX platforms.
You should specify a value for this parameter when you install the IBM Software
Developer’s Kit (SDK) for Java.
The IBM Software Developer’s Kit (SDK) for Java on Windows is installed under
%DB2PATH%\java\jdk (which is the default value for this parameter on Windows
platforms). This should already be specified to the DAS. You can verify the value
for jdk_path using:
db2 get admin cfg
This command displays the values of the DB2 administration server configuration
file where jdk_path is one of the configuration parameters. The parameter can be
set, if necessary, using:
db2 update admin cfg using jdk_path 'C:\Program Files\IBM\SQLLIB'
This assumes that the DB2 database manager is installed under 'C:\Program
Files\IBM\SQLLIB'.
The IBM Software Developer’s Kit (SDK) for Java on AIX is installed under
/usr/java130. The parameter can be set, if necessary, using:
db2 update admin cfg using jdk_path /usr/java130
Note: If you are creating or using a tools catalog against a 64-bit instance on one
of the platforms that supports both 32- and 64-bit instances (AIX, Sun, or
HP-UX) use the jdk_64_path configuration parameter instead of the
jdk_path parameter. This configuration parameter specifies the directory
under which the 64-bit version of the IBM Software Developer's Kit (SDK) for
Java is installed.
Related reference:
v “jdk_path - Software Developer's Kit for Java installation path DAS
configuration parameter” in Performance Guide
v “GET ADMIN CONFIGURATION command” in Command Reference
v “UPDATE ADMIN CONFIGURATION command” in Command Reference
After creating the DAS, you can set or change the logon account using the
db2admin command as follows:
db2admin setid <username> <password>
where <username> and <password> are the username and password of an account
that has local Administrator authority. Before running this command, you must log
on to a computer using an account or user ID that has local Administrator
authority.
Note:
v Recall that passwords are case-sensitive; you must enter the password exactly
as it was defined, with the same mixture of uppercase and lowercase letters.
v On Windows, you should not use the Services utility in the Control Panel
to change the logon account for the DAS since some of the required access
rights will not be set for the logon account. Always use the db2admin
command to set or change the logon account for the DB2 administration
server (DAS).
Related reference:
v “db2admin - DB2 administration server command” in Command Reference
You must first log on to the computer with superuser authority, usually as “root”.
Note: On Windows, updating the DAS is part of the installation process. There are
no user actions required.
Examples:
The DAS is running Version 8.1.2 code in the Version 8 install path. If FixPak 3 is
installed in the Version 8 install path, the following command, invoked from the
Version 8 install path, will update the DAS to FixPak 3:
dasupdt
The DAS is running Version 8.1.2 code in an alternate install path. If FixPak 1 is
installed in another alternate install path, the following command, invoked from
the FixPak 1 alternate install path, will update the DAS to FixPak 1, running from
the FixPak 1 alternate install path:
dasupdt -D
Related concepts:
v “DB2 Administration Server” on page 91
v “Security considerations for the DB2 administration server (DAS) on Windows”
on page 102
3. Stop the DAS using the db2admin command as follows:
db2admin stop
4. Back up (if needed) all the files in the das subdirectory under the home
directory of the DAS.
5. Log off.
6. Log in as root and remove the DAS using the dasdrop command as follows:
dasdrop
Note: The dasdrop command removes the das subdirectory under the home
directory of the DB2 administration server (DAS).
Related reference:
v “dasdrop - Remove a DB2 administration server command” in Command
Reference
v “db2admin - DB2 administration server command” in Command Reference
The directions given here apply only to a multi-partition database in an ESE
environment. If you are running only a single-partition database on an ESE
system, these directions do not apply to your environment.
Procedure:
There are two aspects to configuration: what is required for the DB2
administration server (DAS), and what is recommended for the target,
administered DB2 database instance.
Example Environment
product/version: DB2 UDB ESE V8.1
install path: install_path
TCP services file: services
DB2 Instance:
name: db2inst
owner ID: db2inst
instance path: instance_path
nodes: 3 nodes, db2nodes.cfg:
v 0 hostA 0 hostAswitch
v 1 hostA 1 hostAswitch
v 2 hostB 0 hostBswitch
DB name: db2instDB
DAS:
name: db2as00
owner/user ID: db2as
instance path: das_path
install/run host: hostA
internode communications port: 16000 (unused port for hostA and hostB)
Note: Substitute site-specific values for the fields shown above. For example, the
following table contains example pathnames for some sample supported
ESE platforms:
Table 14. Example Pathnames for Supported ESE Platforms
Paths              DB2 ESE for AIX       DB2 ESE for Solaris    DB2 ESE for Windows
install_path       /usr/opt/<v_r_ID>     /opt/IBM/db2/<v_r_ID>  C:\sqllib
instance_path      /home/db2inst/sqllib  /home/db2inst/sqllib   C:\profiles\db2inst
das_path           /home/db2as/das       /home/db2as/das        C:\profiles\db2as
tcp_services_file  /etc/services         /etc/services          C:\winnt\system32\drivers\etc\services
In the table, <v_r_ID> is the platform-specific version and release identifier. For
example, in DB2 UDB ESE for AIX Version 8, the <v_r_ID> is db2_08_01.
When installing DB2 Enterprise Server Edition, the setup program creates a DAS
on the instance-owning computer. The database partition server resides on the
same computer as the DAS and is the connection point for the instance. That is,
this database partition server is the coordinator partition for requests issued to the
instance from the Control Center or the Configuration Assistant.
If DAS is installed on each physical computer, then each computer can act as a
coordinator partition. Each physical computer appears as a separate DB2SYSTEM
in the Control Center or Configuration Assistant. If different clients use different
systems to connect to a partitioned database server, then this will distribute the
coordinator partition functionality and help to balance incoming connections.
Related concepts:
v “DB2 administration server (DAS) configuration on Enterprise Server Edition
(ESE) systems” on page 106
v “DB2 Administration Server” on page 91
For example, db2inst consists of three nodes distributed across two physical
computers or hosts. The minimum requirement can be fulfilled by running db2as
on hostA and hostB.
Notes:
1. The number of database partitions present on hostA does not have any bearing
on the number of DASes that can be run on that host. You can run only one
copy of the DAS on hostA regardless of the multiple logical nodes (MLN)
configuration for that host.
2. One DAS is required on each computer, or physical node, and each must be
created individually using the dascrt command. The DAS on each computer or
physical node must be running so that the Task Center and the Control Center
can work correctly. The ID db2as must exist on hostA and hostB. The home
directory of the db2as ID must not be cross-mounted between the two systems.
Alternatively, different user IDs can be used to create the DAS on hostA and
hostB.
On DB2 Enterprise Server Edition for Windows, if you are using the Configuration
Assistant or the Control Center to automate connection configuration to a DB2
server, the database partition server that is on the same computer as the DAS will
be the coordinator node. This means that all physical connections from the client to
the database will be directed to the coordinator node before being routed to other
database partition servers.
When working on DB2 Enterprise Server Edition for Windows, the DB2 Remote
Command Service (db2rcmd.exe) automatically handles internode administrative
communications.
The Control Center communicates with the DAS using the TCP service port 523.
This port is reserved for exclusive use by the DB2 database manager. Therefore, it
is not necessary to insert new entries into the TCP services file.
Related tasks:
v “Creating a DB2 administration server (DAS)” on page 93
Related reference:
v “db2admin - DB2 administration server command” in Command Reference
The discovery service is integrated with the Configuration Assistant and the DB2
administration server. To configure a connection to a remote computer, the user
logs on to the client computer and runs the Configuration Assistant (CA). The
CA sends a broadcast signal to all the computers on the network. Any computer
that has a DAS installed and configured for discovery will respond to the
broadcast signal from the CA by sending back a package that contains all the
instance and database information on that computer. The CA then uses the
information in this package to configure the client connectivity. Using the
discovery method, catalog information for a remote server can be automatically
generated in the local database and node directory.
The discovery method requires that you log on to every client computer and run
the CA. If you have an environment with a large number of clients, this
can be very difficult and time-consuming. An alternative, in this case, is to use a
directory service like LDAP.
Known Discovery allows you to discover instances and databases on systems that
are known to your client, and add new systems so that their instances and
databases can be discovered. Search Discovery provides all of the facilities of
Known Discovery and adds the option to allow your local network to be searched
for other DB2 database servers.
To have a system support Known Discovery, set the discover parameter in the DAS
configuration file to KNOWN. To have the system support both Known and Search
Discovery, set the discover parameter in the DAS configuration file to SEARCH (this is
the default). To prevent discovery of a system, and all of its instances and
databases, set the discover parameter in the DAS configuration file to DISABLE.
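For example, to set the DAS discover parameter explicitly (SEARCH shown here as
one of the valid values):
db2 update admin cfg using discover SEARCH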
Note: The TCP/IP host name returned to a client by Search Discovery is the same
host name that is returned by your DB2 server system when you enter the
hostname command. On the client, the IP address that this host name maps
to is determined by either the TCP/IP domain name server (DNS)
configured on your client computer or, if no DNS is configured, a mapping
entry in the client’s hosts file. If you have multiple adapter cards configured
on your DB2 server system, you must ensure that TCP/IP is configured on
the server to return the correct hostname, and that the DNS or the local client's
hosts file maps the hostname to the desired IP address.
On the client, enabling Discovery is also done using the discover parameter;
however, in this case, the discover parameter is set in the client instance (or server
acting as a client) as follows:
v KNOWN
KNOWN discovery is used by the Configuration Assistant and Control Center to
retrieve instance and database information associated with systems that are
already known to your local system. New systems can be added using the Add
Systems functionality provided in the tools. When the discover parameter is set
to KNOWN, you will not be able to search the network.
v SEARCH
Enables all of the facilities of Known Discovery, and enables local network
searching. This means that any searching is limited to the local network.
The “Other Systems (Search the network)” icon only appears if this choice is
made. This is the default setting.
v DISABLE
Disables Discovery. In this case, the Search the network option is not available
in the “Add Database Wizard”.
Note: The discover parameter defaults to SEARCH on all client and server instances.
The discover parameter defaults to SEARCH on all DB2 administration servers
(DAS).
Related concepts:
v “Lightweight Directory Access Protocol (LDAP) directory service” on page 181
Related tasks:
v “Discovering and hiding server instances and databases” on page 108
v “Setting discovery parameters” on page 109
Procedure:
Note: If you want an instance to be discovered, discover must also be set to KNOWN
or SEARCH in the DAS configuration file. If you want a database to be
discovered, the discover_inst parameter must also be enabled in the server
instance.
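For example, the instance- and database-level discovery parameters can be
enabled from the CLP on the server (the database name sample is illustrative):
db2 update dbm cfg using discover_inst ENABLE
db2 update db cfg for sample using discover_db ENABLE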
Related reference:
v “discover_db - Discover database configuration parameter” in Performance Guide
v “discover_inst - Discover server instance configuration parameter” in Performance
Guide
Procedure:
Use the Control Center to set the discover_inst and discover_db parameters.
You can also use the Configuration Assistant to update configuration parameters.
Related reference:
v “UPDATE ADMIN CONFIGURATION command” in Command Reference
v “discover - DAS discovery mode configuration parameter” in Performance Guide
v “discover_db - Discover database configuration parameter” in Performance Guide
v “discover_inst - Discover server instance configuration parameter” in Performance
Guide
Restrictions:
A DAS must reside on each physical database partition. When a DAS is created on
the database partition, the DB2SYSTEM name is configured to the TCP/IP
hostname and the discover setting is defaulted to SEARCH.
Procedure:
DB2 Discovery is a feature that is used by the Configuration Assistant and Control
Center. Configuring for this feature might require that you update the DB2
administration server (DAS) configuration and an instance’s database manager
configuration to ensure that DB2 Discovery retrieves the correct information.
Related reference:
v “discover - DAS discovery mode configuration parameter” in Performance Guide
v “db2ilist - List instances command” in Command Reference
v “db2ncrt - Add database partition server to an instance command” in Command
Reference
Procedure:
The system names that are retrieved by Discovery are the systems on which a DB2
administration server (DAS) resides. Discovery uses these systems as coordinator
partitions when connections are established.
When there is more than one DAS present in a partitioned database environment,
the same instance might appear in more than one system on the Configuration
Assistant or Control Center’s interface; however, each system will have a different
communications access path to instances. Users can select different DB2 database
systems as coordinator partitions for communications and thereby redistribute the
workload.
Related reference:
v “Miscellaneous variables” in Performance Guide
v On Windows systems, when the DB2INSTPROF environment variable is not set:
db2path\DB2DAS00\dump
where db2path is the path referenced in the DB2PATH environment variable, and
DB2DAS00 is the name of the DAS service. The DAS name can be obtained by
typing the db2admin command without any arguments.
If the DB2INSTPROF environment variable is set:
x:\db2instprof\DB2DAS00\dump
where x: is the drive referenced in the DB2PATH environment variable,
db2instprof is the instance profile directory, and DB2DAS00 is the name of the
DAS service.
v On Linux and UNIX systems:
$DASHOME/das/dump
where $DASHOME is the home directory of the DAS user.
Note: You should clean out the dump directory periodically to keep it from
becoming too large.
The format of the DB2 administration server log file (db2dasdiag.log) is similar to
the format of the DB2 FFDC log file db2diag.log. Refer to the section on
interpreting the administration logs in the troubleshooting topics for information
about how to interpret the db2dasdiag.log file.
Related concepts:
v “DB2 Administration Server” on page 91
The previous chapter focused on the information you need to know before creating
a database. That chapter also covered several topics and tasks you must perform
before creating a database.
The second-to-last chapter in this part presents what you must consider before
altering a database. In addition, the chapter explains how to alter or drop database
objects.
Creating a database
You can create a database using the CREATE DATABASE command.
When you create a database, each of the following tasks is done for you:
v Setting up all of the system catalog tables that are needed by the database
v Allocating the database recovery log
v Creating the database configuration file and setting the default values
v Binding the database utilities to the database
The Configuration Advisor helps you to tune performance and to balance memory
requirements for a single database per instance by suggesting which configuration
parameters to modify and providing suggested values for them. In Version 9.1, the
Configuration Advisor is automatically invoked when you create a database. To
disable this feature, or to explicitly enable it, use the db2set command before
creating the database. Examples:
db2set DB2_ENABLE_AUTOCONFIG_DEFAULT=NO
db2set DB2_ENABLE_AUTOCONFIG_DEFAULT=YES
See Automatic features enabled by default for other DB2 features that are enabled
by default.
Prerequisites:
You should have spent sufficient time designing the contents, layout, potential
growth, and use of your database before you create it.
Procedure:
1. Expand the object tree until you find the Databases folder.
2. Right-click the Databases folder, and select Create —> Standard or Create —> With
Automatic Maintenance from the pop-up menu.
3. Follow the steps to complete this task.
At the same time a database is created, a detailed deadlocks event monitor is also
created. As with any monitor, there is some overhead associated with this event
monitor. If you do not want the detailed deadlocks event monitor, then the event
monitor can be dropped using the command:
DROP EVENT MONITOR db2detaildeadlock
To limit the amount of disk space that this event monitor consumes, the event
monitor deactivates once it reaches its maximum number of output files, and a
message is written to the administration notification log. Removing output files
that are no longer needed allows the event monitor to activate again on the next
database activation.
You can create a database in a different, possibly remote, database manager
instance. In this type of environment, you can perform instance-level
administration against an instance other than your default instance, including
remote instances.
By default, databases are created in the code page of the application creating them.
Therefore, if you create your database from a Unicode (UTF-8) client, your
database will be created as a Unicode database. Similarly, if you create your
database from an en_US (code page 819) client, your database will be created as a
single byte US English database.
To override the default code page for the database, it is necessary to specify the
desired code set and territory when creating the database. See the CREATE
DATABASE CLP command or the sqlecrea API for information on setting the code
set and territory.
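For example, a Unicode database could be created explicitly as follows (the
database name and territory shown are illustrative):
db2 create database mydb using codeset UTF-8 territory US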
In a future release of the DB2 database manager, the default code set will be
changed to UTF-8 when creating a database, regardless of the application code
page. If a particular code set and territory is needed for a database, then the code
set and territory should be specified when the database is created.
Related concepts:
v “Additional database design considerations” in Administration Guide: Planning
v “Applications connected to Unicode databases” in Developing SQL and External
Routines
v “Automatic features enabled by default” in Administration Guide: Planning
Related tasks:
v “Converting non-Unicode databases to Unicode” in Administration Guide:
Planning
v “Creating a Unicode database” in Administration Guide: Planning
v “Changing node and database configuration files” on page 279
v “Generating recommendations for database configuration” on page 84
Related reference:
v “sqlecrea API - Create database” in Administrative API Reference
v “CREATE DATABASE command” in Command Reference
Related concepts:
v “Database partition groups” in Administration Guide: Planning
Related reference:
v “ADD DBPARTITIONNUM command” in Command Reference
v “DROP DBPARTITIONNUM VERIFY command” in Command Reference
Prerequisites:
Procedure:
1. Expand the object tree until you see the Database partition groups folder.
2. Right-click the Database partition groups folder, and select Create from the pop-up
menu.
3. On the Create Database partition groups window, complete the information, use the
arrows to move nodes from the Available nodes box to the Selected database
partitions box, and click OK.
For example, assume that you want to load some tables on a subset of the database
partitions in your database. You would use the following command to create a
database partition group of two database partitions (1 and 2) in a database
consisting of at least three (0 to 2) database partitions:
CREATE DATABASE PARTITION GROUP mixng12 ON PARTITIONS (1,2)
The CREATE DATABASE command or sqlecrea() API also creates the default
system database partition groups: IBMDEFAULTGROUP, IBMCATGROUP, and
IBMTEMPGROUP.
Related concepts:
v “Database partition groups” in Administration Guide: Planning
v “Distribution maps” in Administration Guide: Planning
Related reference:
v “CREATE DATABASE command” in Command Reference
v “CREATE DATABASE PARTITION GROUP statement” in SQL Reference, Volume
2
v “sqlecrea API - Create database” in Administrative API Reference
You can use the Partitions view in the Control Center to perform the following
tasks:
v Start partitions
v Stop partitions
v Run the trace utility on a partition
If requested to do so from IBM Support, run the trace utility using the options that
they recommend. The trace utility records information about DB2 operations and
formats this information into readable form. For more information, see the db2trc -
Trace: DB2 command topic.
Attention: Only use the trace facility when directed by DB2 Customer Service or
by a technical support representative to do so.
Use the Diagnostic Log window to view text information logged by the DB2 trace
utility.
To work with database partitions, you will need authority to attach to an instance.
Anyone with SYSADM or DBADM authority can grant you the authority to
access a specific instance.
To view the DB2 logs, you will need authority to attach to an instance. Anyone
with SYSADM or DBADM authority can grant you the authority to access a
specific instance.
Procedure:
v Open the Partitions view: From the Control Center, expand the object tree until
you find the instance for which you want to view the partitions. Right-click on
the instance you want and select Open–>Partitions from the pop-up menu. The
Partitions view opens.
v To start partitions: Highlight one or more partitions and select Partitions–>Start.
The selected partitions are started.
v To stop partitions: Highlight one or more partitions and select Partitions–>Stop.
The selected partitions are stopped.
v To run the trace utility on a partition:
1. Open the DB2 Trace window: Highlight a partition, and select
Partitions–>Service–>Trace. The DB2 Trace window opens.
2. Specify the trace options.
3. Click Start to start recording information and Stop to stop recording
information.
4. Optional: View the trace output and the DB2 logs.
5. Send the trace output to IBM Support, if requested to do so.
Related concepts:
v “Adding database partitions in a partitioned database environment” on page 123
v “Attributes of detached data partitions” on page 354
v “Partitioned database environments” in Administration Guide: Planning
v “Partitioned database authentication considerations” on page 496
v “Partitioned databases” in Administration Guide: Planning
Related tasks:
v “Adding a database partition to a running database system” on page 119
v “Adding a database partition to a stopped database system on Windows” on
page 122
v “Adding a database partition to a stopped database system on UNIX” on page
120
v “Adding database partitions to an instance using the Add Partitions wizard” on
page 124
v “Adding data partitions to partitioned tables” on page 356
v “Adding database partitions using the Add Partitions launchpad” on page 125
v “Attaching a data partition” on page 346
v “Changing the database configuration across multiple database partitions” on
page 281
v “Creating a table in a partitioned database environment” on page 191
Procedure:
Related concepts:
v “Adding database partitions in a partitioned database environment” on page 123
Related tasks:
v “Adding a database partition to a stopped database system on UNIX” on page
120
v “Adding a database partition to a stopped database system on Windows” on
page 122
Prerequisites:
You must install the new server if it does not exist, including the following tasks:
v Making executables accessible (using shared file-system mounts or local copies)
v Synchronizing operating system files with those on existing processors
v Ensuring that the sqllib directory is accessible as a shared file system
v Ensuring that the relevant operating system parameters (such as the maximum
number of processes) are set to the appropriate values
You must also register the host name with the name server or in the hosts file in
the etc directory on all database partitions.
Procedure:
Note: You might have to issue the DB2START command twice for all database
partition servers to access the new db2nodes.cfg file.
6. Back up all databases on the new database partition. (Optional)
7. Redistribute data to the new database partition. (Optional)
Related concepts:
v “Error recovery when adding database partitions” on page 128
Related tasks:
v “Adding a database partition to a running database system” on page 119
v “Adding a database partition to a stopped database system on Windows” on
page 122
v “Dropping a database partition” on page 125
Prerequisites:
You must install the new server before you can create a database partition on it.
Procedure:
Note: You might have to issue the DB2START command twice for all database
partition servers to access the new db2nodes.cfg file.
6. Back up all databases on the new database partition. (Optional)
7. Redistribute data to the new database partition. (Optional)
Related concepts:
v “Error recovery when adding database partitions” on page 128
v “Adding database partitions in a partitioned database environment” on page 123
Related tasks:
v “Adding a database partition to a running database system” on page 119
v “Adding a database partition to a stopped database system on UNIX” on page
120
If your system is stopped, you use db2start. If it is running, you can use any of
the other choices.
You cannot use a database on the new database partition to contain data until one
or more database partition groups are altered to include the new database
partition.
Note: If no databases are defined in the system and you are running Enterprise
Server Edition on a UNIX operating system, edit the db2nodes.cfg file to
add a new database partition definition; do not use any of the procedures
described, as they apply only when a database exists.
Related tasks:
v “Adding a database partition to a running database system” on page 119
v “Adding a database partition to a stopped database system on Windows” on
page 122
v “Dropping a database partition” on page 125
Prerequisites:
To work with database partition groups, you must have SYSADM or DBADM
authority.
Related concepts:
v “Partitioned databases” in Administration Guide: Planning
It is recommended that you back up all databases in the instance before and after
redistributing data in database partition groups. If you do not back up your
databases, you might corrupt them and be unable to recover them.
Procedure:
To add partitions:
1. Optional: Back up the database.
2. Open the Add Partitions launchpad: From the Control Center, expand the
object tree until you find the instance object that you want to work with.
Right-click the object, and click Add Partitions in the pop-up menu. The Add
Partitions launchpad opens.
3. Add partitions.
4. Redistribute data.
5. Optional: Back up the database.
Related tasks:
v “Backing up data using the Backup wizard” on page 387
v “Redistributing data in a database partition group” on page 128
v “Adding database partitions to an instance using the Add Partitions wizard” on
page 124
Prerequisites:
Also ensure that all transactions for which this database partition was the
coordinator have committed or rolled back successfully. This might require
performing crash recovery on other servers. For example, if you drop the coordinator
partition, and another database partition participating in a transaction crashed
before the coordinator partition was dropped, the crashed database partition will
not be able to query the coordinator partition for the outcome of any in-doubt
transactions.
Procedure:
Related concepts:
v “Management of database server capacity” on page 29
v “Adding database partitions in a partitioned database environment” on page 123
Related reference:
v “DROP DBPARTITIONNUM VERIFY command” in Command Reference
v “sqledrpn API - Check whether a database partition server can be dropped” in
Administrative API Reference
Note: When you drop database partitions from database partition groups the
database partitions are not immediately dropped. Instead, the database
partitions that you want to drop are flagged so that data can be moved off
them when you redistribute the data in the database partition groups.
It is recommended that you back up all databases in the instance before and after
redistributing data in database partition groups. If you do not back up your
databases, you might corrupt them and be unable to recover them.
Procedure:
Note:
v You must drop the database partitions from database partition
groups before you drop partitions from the instance.
v This operation does not drop the database partitions immediately.
Instead, it flags the database partitions that you want to drop so
that data can be moved off them when you redistribute the data in
the database partition group.
4. Redistribute data.
5. Drop partitions from the instance:
a. Open the Drop Partitions from Instance Confirmation window:
v Open the Partitions window as described above.
v Select the partitions you want to drop.
v Right-click the selected partitions and click Drop in the pop-up menu.
The Drop Partitions launchpad opens.
v Click the Drop Partitions from Instance push button. The Drop Partitions
from Instance Confirmation window opens.
b. In the Drop column, verify that you want to drop the partitions for the
selected instance.
Related concepts:
v “Partitioned databases” in Administration Guide: Planning
Related tasks:
v “Backing up data using the Backup wizard” on page 387
v “Redistributing data in a database partition group” on page 128
Prerequisites:
To work with database partition groups, you must have SYSADM or DBADM
authority.
Procedure:
Related concepts:
v “Logs” in Administration Guide: Planning
v “Partitioned databases” in Administration Guide: Planning
Related tasks:
v “Adding database partitions using the Add Partitions launchpad” on page 125
v “Dropping database partitions from the instance using the Drop Partitions
launchpad” on page 127
System buffer pools are used in database partition addition scenarios in the
following circumstances:
v You add database partitions to a partitioned database environment that has one
or more system temporary table spaces with a page size that is different from
the default of 4 KB. When a database partition is created, only the
IBMDEFAULTDP buffer pool exists, and this buffer pool has a page size of 4 KB.
Consider the following examples:
1. You use the db2start command to add a database partition to the current
multi-partition database:
DB2START DBPARTITIONNUM 2 ADD DBPARTITIONNUM HOSTNAME newhost PORT 2
2. You use the ADD DBPARTITIONNUM command after you manually update
the db2nodes.cfg file with the new database partition description.
One way to prevent these problems is to specify the WITHOUT TABLESPACES
clause on the ADD NODE or the db2start command. After doing this, you need
to use the CREATE BUFFERPOOL statement to create the buffer pools with the
required page sizes, and associate the system temporary table spaces with the
buffer pools using the ALTER TABLESPACE statement (see the sketch following
this list).
v You add database partitions to an existing database partition group that has one
or more table spaces with a page size that is different from the default page size,
which is 4 KB. This occurs because the non-default page-size buffer pools
created on the new database partition have not been activated for the table
spaces.
Note: If the database partition group has table spaces with the default page size,
message SQL1759W is returned.
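As a sketch of the statements mentioned in the first item above (the buffer pool
and table space names are hypothetical), a missing 8 KB buffer pool could be
created and associated with a system temporary table space as follows:
CREATE BUFFERPOOL bp8k SIZE 1000 PAGESIZE 8K
ALTER TABLESPACE tmpsys8k BUFFERPOOL bp8k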
Related tasks:
v “Adding a database partition to a running database system” on page 119
v “Adding a database partition to a stopped database system on Windows” on
page 122
Related concepts:
v “rah and db2_all commands overview” on page 130
v “Specifying the rah and db2_all commands” on page 132
Related reference:
v “rah and db2_all command descriptions” on page 131
The command can be almost anything which you could type at an interactive
prompt, including, for example, multiple commands to be run in sequence. On
Linux and UNIX platforms, you separate multiple commands using a semicolon (;).
On Windows, you separate multiple commands using an ampersand (&). Do not
use the separator character following the last command.
The following example shows how to use the db2_all command to change the
database configuration on all database partitions that are specified in the node
configuration file. Because the ; character is placed inside double quotation marks,
the request will run concurrently:
db2_all ";DB2 UPDATE DB CFG FOR sample USING LOGFILSIZ 100"
Related concepts:
v “Issuing commands in a partitioned database environment” on page 130
v “Specifying the rah and db2_all commands” on page 132
Related reference:
v “rah and db2_all command descriptions” on page 131
On Linux and UNIX platforms, these commands execute rah with certain implicit
settings such as:
v Run in parallel at all computers
v Buffer command output in /tmp/$USER/db2_kill, /tmp/$USER/db2_call_stack
respectively.
If you specify the command as the parameter on the command line, you must
enclose it in double quotation marks if it contains any of the special characters just
listed.
Note: On Linux and UNIX platforms, the command will be added to your
command history just as if you typed it at the prompt.
On Linux and UNIX platforms, if you are using a Korn shell, all special
characters in the command can be entered normally (without being enclosed in
quotation marks, except for \). If you need to include a \ in your command, you
must type two backslashes (\\).
Note: On Linux and UNIX platforms, if you are not using a Korn shell, all special
characters in the command can be entered normally (without being enclosed
in quotation marks, except for ", \, unsubstituted $, and the single quotation
mark (')). If you need to include one of these characters in your command,
you must precede them by three backslashes (\\\). For example, if you
need to include a \ in your command, you must type four backslashes
(\\\\).
If you need to include a double quotation mark (") in your command, you must
precede it by three backslashes, for example, \\\".
Notes:
1. On Linux and UNIX platforms, you cannot include a single quotation mark (')
in your command unless your command shell provides some way of entering a
single quotation mark inside a singly quoted string.
2. On Windows, you cannot include a single quotation mark (') in your command
unless your command window provides some way of entering a single
quotation mark inside a singly quoted string.
When you run any Korn shell script that contains logic to read from stdin
in the background, you should explicitly redirect stdin to a source from which the
process can read without getting stopped on the terminal (by a SIGTTIN signal). To
redirect stdin, you can run a script with the following form:
shell_script </dev/null &
In a similar way, you should always specify </dev/null when running db2_all in
the background. For example:
db2_all ";mydaemon" </dev/null &
By doing this, you can redirect stdin and avoid getting stopped on the terminal.
An alternative to this method, when you are not concerned about output from the
remote command, is to use the “daemonize” option in the db2_all prefix:
db2_all ";daemonize_this_command" &
Related concepts:
v “Additional rah information (Solaris and AIX only)” on page 135
v “Running commands in parallel on Linux and UNIX platforms” on page 133
Related tasks:
v “Setting the default environment profile for rah on Windows” on page 141
Related reference:
v “Controlling the rah command” on page 139
v “rah and db2_all command descriptions” on page 131
v “rah command prefix sequences” on page 135
By default, the command is run sequentially at each computer, but you can specify
that the commands run in parallel using background rshells by prefixing the
command with certain prefix sequences. If the rshell is run in the background, then
each command puts the output in a buffer file at its remote computer. This process
retrieves the output in two pieces:
1. After the remote command completes.
2. After the rshell terminates, which might be later if some processes are still
running.
The name of the buffer file is /tmp/$USER/rahout by default, but it can be changed
by setting the environment variables $RAHBUFDIR (the directory) and
$RAHBUFNAME (the file name).
When you specify that you want the commands to be run concurrently, by default,
this script prefixes an additional command to the command sent to all hosts to
check that $RAHBUFDIR and $RAHBUFNAME are usable for the buffer file. It
creates $RAHBUFDIR if it does not already exist. To suppress this check, export the
environment variable RAHCHECKBUF=no. You can do this to save time if you know
the directory exists and is usable.
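For example, in a Korn shell session where you know the buffer directory already
exists and is writable:
export RAHCHECKBUF=no
db2_all ";DB2 UPDATE DB CFG FOR sample USING LOGFILSIZ 100"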
Related concepts:
v “Additional rah information (Solaris and AIX only)” on page 135
Related tasks:
v “Monitoring rah processes on Linux and UNIX platforms” on page 134
Related reference:
v “Determining problems with rah on Linux and UNIX platforms” on page 141
v “rah command prefix sequences” on page 135
Note: The information in this section applies to Linux and UNIX platforms only.
While any remote commands are still running or buffered output is still being
accumulated, processes started by rah monitor activity to:
v Write messages to the terminal indicating which commands have not been run
v Retrieve buffered output.
The primary monitoring process is a command whose command name (as shown
by the ps command) is rahwaitfor. The first informative message tells you the pid
(process id) of this process. All other monitoring processes will appear as ksh
commands running the rah script (or the name of the symbolic link). If you want,
you can stop all monitoring processes by the command:
kill <pid>
where <pid> is the process ID of the primary monitoring process. Do not specify a
signal number. Leave the default of 15. This will not affect the remote commands
at all, but will prevent the automatic display of buffered output. Note that there
might be two or more different sets of monitoring processes executing at different
times during the life of a single execution of rah. However, if at any time you stop
the current set, then no more will be started.
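For example, if you missed the informative message, a sketch of locating and
stopping the primary monitoring process (the process ID shown is hypothetical):
ps -ef | grep rahwaitfor
kill 28644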
If your regular login shell is not a Korn shell (for example /bin/ksh), you can use
rah, but there are some slightly different rules on how to enter commands
containing the following special characters:
" unsubstituted $ '
For more information, type rah "?". Also, in a Linux and UNIX environment, if
the login shell at the ID which executes the remote commands is not a Korn shell,
then the login shell at the ID which executes rah must also not be a Korn shell.
(rah makes the decision as to whether the remote ID’s shell is a Korn shell based
on the local ID.)
Related concepts:
v “Additional rah information (Solaris and AIX only)” on page 135
v “Running commands in parallel on Linux and UNIX platforms” on page 133
Related concepts:
v “Running commands in parallel on Linux and UNIX platforms” on page 133
Related tasks:
v “Monitoring rah processes on Linux and UNIX platforms” on page 134
or
rah ";\ mydaemon"
When using the <<-nnn< and <<+nnn< prefix sequences, nnn is any 1-, 2-, or 3-digit
database partition number which must match the nodenum value in the
db2nodes.cfg file.
Note: Prefix sequences are considered to be part of the command. If you specify a
prefix sequence as part of a command, you must enclose the entire
command, including the prefix sequences, in double quotation marks.
Related concepts:
v “Running commands in parallel on Linux and UNIX platforms” on page 133
v “Specifying the rah and db2_all commands” on page 132
Related reference:
v “rah and db2_all command descriptions” on page 131
Related tasks:
v “Eliminating duplicate entries from a list of computers in a partitioned database
environment” on page 138
If you are running DB2 Enterprise Server Edition with multiple logical database
partition servers on one computer, your db2nodes.cfg file will contain multiple
entries for that computer. In this situation, the rah command needs to know
whether you want the command to be executed once only on each computer or
once for each logical database partition listed in the db2nodes.cfg file. Use the rah
command to specify computers. Use the db2_all command to specify logical
database partitions.
Note: On Linux and UNIX platforms, if you specify computers, rah will normally
eliminate duplicates from the computer list, with the following exception: if
you specify logical database partitions, db2_all prepends the following
assignment to your command:
export DB2NODE=nnn (for Korn shell syntax)
where nnn is the database partition number taken from the corresponding
line in the db2nodes.cfg file, so that the command will be routed to the
desired database partition server.
When specifying logical database partitions, you can restrict the list to include all
logical database partitions except one, or only specify one database partition server
using the <<-nnn< and <<+nnn< prefix sequences. You might want to do this if you
want to run a command to catalog the database partition first, and when that has
completed, run the same command at all other database partition servers, possibly
in parallel. This is usually required when running the db2 restart database
command. You will need to know the database partition number of the catalog
partition to do this.
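For example, assuming that the catalog partition is database partition 0, a sketch
of restarting the database at the catalog partition first and then at all other
database partition servers:
db2_all "<<+0< db2 RESTART DATABASE sample"
db2_all "<<-0< db2 RESTART DATABASE sample"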
If you execute db2 restart database using the rah command, duplicate entries are
eliminated from the list of computers. However, if you specify the " prefix, then
duplicates are not eliminated, because it is assumed that use of the " prefix implies
sending to each database partition server rather than to each computer.
Related tasks:
v “Specifying the list of computers in a partitioned database environment” on
page 137
Related reference:
v “RESTART DATABASE command” in Command Reference
v “rah command prefix sequences” on page 135
Note: On Linux and UNIX platforms, the value of $RAHENV where rah is run is
used, not the value (if any) set by the remote shell.
Related reference:
v “Using $RAHDOTFILES on Linux and UNIX platforms” on page 140
Following are the .files that are run if no prefix sequence is specified:
P .profile
E File named in $RAHENV (probably .kshrc)
K Same as E
PE .profile followed by file named in $RAHENV (probably .kshrc)
B Same as PE
N None (or Neither)
Note: If your login shell is not a Korn shell, any dot files you specify to be
executed will be executed in a Korn shell process, and so must conform to
Korn shell syntax. So, for example, if your login shell is a C shell, to have your
usual environment available to rah you must provide an equivalent dot file
written in Korn shell syntax and name it in $RAHENV.
Also, it is essential that your .cshrc does not write to stdout if there is no
tty (as when invoked by rsh). You can ensure this by enclosing any lines
which write to stdout by, for example,
if { tty -s } then echo "executed .cshrc";
endif
Related reference:
v “Controlling the rah command” on page 139
You can specify all the environment variables that you need to initialize the
environment for rah.
Related concepts:
v “Specifying the rah and db2_all commands” on page 132
Note: You might have a need to have greater security regarding the
transmission of passwords in clear text between database partitions. This
will depend on the remote shell program you are using. rah uses the
remote shell program specified by the DB2RSHCMD registry variable.
You can select between the two remote shell programs: ssh (for
additional security), or rsh (or remsh for HP-UX). If this registry variable
is not set, rsh (or remsh for HP-UX) is used.
3. When running commands in parallel using background remote shells, although
the commands run and complete within the expected elapsed time at the
computers, rah takes a long time to detect this and put up the shell prompt.
This can happen if the ID running rah does not have one of the computers
correctly defined in its .rhosts file, or, if the DB2RSHCMD registry variable has
been configured to use ssh, the ssh clients and servers on each computer might
not be configured correctly.
4. Although rah runs fine when run from the shell command line, if you run rah
remotely using rsh, for example,
rsh somewher -l $USER db2_kill
or use a method that makes the shell choose a unique name automatically such
as:
RAHBUFNAME=rahout.$$ db2_all "....."
Whatever method you use, you must ensure you clean up the buffer files at
some point if disk space is limited. rah does not erase a buffer file at the end of
execution, although it will erase and then re-use an existing file the next time
you specify the same buffer file.
6. You entered
rah ’"print from ()’
Related reference:
v “Controlling the rah command” on page 139
When using this command as shown, the default instance is the current instance
(set by the DB2INSTANCE environment variable). To specify a particular instance,
you can specify the instance using:
db2nlist /i:instName
You can also optionally request the status of each database partition server by
using:
db2nlist /s
The status of each database partition server might be one of: starting, running,
stopping, or stopped.
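For example, assuming an instance named TESTMPP (the name is illustrative), you
can combine the two options:
db2nlist /i:TESTMPP /s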
Related tasks:
v “Adding a database partition server to an instance (Windows)” on page 144
Note: Do not use the db2ncrt command if the instance already contains databases.
Instead, use the db2start addnode command. This ensures that the database
is correctly added to the new database partition server. DO NOT EDIT the
db2nodes.cfg file, since changing the file might cause inconsistencies in the
partitioned database environment.
The logical port parameter is only optional when you create the first database
partition on a computer. If you create a logical database partition, you must specify
this parameter and select a logical port number that is not in use. There are several
restrictions:
v On every computer there must be a database partition server with a logical port
0.
v The port number cannot exceed the port range reserved for FCM
communications in the services file in the %SystemRoot%\system32\drivers\etc
directory. For example, if you reserve a range of four ports for the current
instance, then the maximum port number would be 3 (ports 1, 2, and 3; port 0 is
for the default logical database partition). The port range is defined when
db2icrt is used with the /r:base_port, end_port parameter.
For example, if you want to add a new database partition server to the instance
TESTMPP (so that you are running multiple logical database partitions) on the
instance-owning computer MYMACHIN, and you want this new database
partition to be known as database partition 2 using logical port 1, enter:
db2ncrt /n:2 /p:1 /u:my_id,my_pword /i:TESTMPP
/M:TEST /o:MYMACHIN
Related reference:
v “db2icrt - Create instance command” in Command Reference
v “db2ncrt - Add database partition server to an instance command” in Command
Reference
v “db2start - Start DB2 command” in Command Reference
The parameter /n: is the number of the database partition server’s configuration
you want to change. This parameter is required.
For example, to change the logical port assigned to database partition 2, which
participates in the instance TESTMPP, to use the logical port 3, enter the following
command:
db2nchg /n:2 /i:TESTMPP /p:3
The DB2 database manager provides the capability of accessing DB2 database
system registry variables at the instance level on a remote computer. Currently,
DB2 database system registry variables are stored in three different levels:
computer or global level, instance level, and database partition level. The registry
variables stored at the instance level (including the database partition level) can be
redirected to another computer by using DB2REMOTEPREG. When
DB2REMOTEPREG is set, the DB2 database manager will access the DB2 database
system registry variables from the computer pointed to by DB2REMOTEPREG. The
db2set command would appear as:
db2set DB2REMOTEPREG=<remote workstation>
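For example, using a hypothetical computer name rmtsrv01:
db2set DB2REMOTEPREG=rmtsrv01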
Note:
v Care should be taken in setting this option since all DB2 database instance
profiles and instance listings will be located on the specified remote
computer name.
v If your environment includes users from domains, ensure that the logon
account associated with the DB2 instance service is a domain account.
This ensures that the DB2 instance has the appropriate privileges to
enumerate groups at the domain level.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related reference:
v “db2nchg - Change database partition server configuration command” in
Command Reference
Exercise caution when you drop database partition servers from an instance. If you
drop the instance-owning database partition server zero (0) from the instance, the
instance will become unusable. If you want to drop the instance, use the db2idrop
command.
Note: Do not use the db2ndrop command if the instance contains databases.
Instead, use the db2stop drop nodenum command. This ensures that the
database is correctly removed from the database partition. DO NOT EDIT
the db2nodes.cfg file, since changing the file might cause inconsistencies in
the partitioned database environment.
If you want to drop a database partition that is assigned the logical port 0 from a
computer that is running multiple logical database partitions, you must drop all
the other database partitions assigned to the other logical ports before you can
drop the database partition assigned to logical port 0. Each database partition
server must have a database partition assigned to logical port 0.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related reference:
v “db2idrop - Remove instance command” in Command Reference
v “db2ndrop - Drop database partition server from an instance command” in
Command Reference
v “db2stop - Stop DB2 command” in Command Reference
A table space lets you assign the location of data to particular logical devices or
portions thereof. For example, when creating a table you can specify that its
indexes or its columns with long field or large object (LOB) data be kept
separate from the rest of the table data.
A table space can be spread over one or more physical storage devices (containers)
for increased performance. However, it is recommended that all the devices or
containers within a table space have similar performance characteristics.
Related concepts:
v “Container” on page 455
Note: When you first create a database, no user temporary table space is created.
If you do not specify any table space parameters with the CREATE DATABASE
command, the database manager creates these table spaces using system managed
storage (SMS) directory containers. These directory containers are created in the
subdirectory created for the database. The extent size for these table spaces is set to
the default.
When you use the CREATE DATABASE command, you can specify the page size
for the default buffer pool and the initial table spaces. This default also represents
the default page size for all future CREATE BUFFERPOOL and CREATE
TABLESPACE statements. If you do not specify the page size when creating the
database, the default page size is 4 KB.
Prerequisites:
The database must be created and you must have the authority to create table
spaces.
Procedure:
1. Expand the object tree until you see the Databases folder.
2. Right-click the Databases folder, and select Create —> Standard or Create —> With
Automatic Maintenance from the pop-up menu.
3. Follow the steps to complete this task.
If you do not want to use the default definition for these table spaces, you can
specify their characteristics on the CREATE DATABASE command. For example,
the following command could be used to create your database on Windows:
CREATE DATABASE PERSONL
CATALOG TABLESPACE
MANAGED BY SYSTEM USING (’d:\pcatalog’,’e:\pcatalog’)
EXTENTSIZE 16 PREFETCHSIZE 32
USER TABLESPACE
MANAGED BY DATABASE USING (FILE’d:\db2data\personl’ 5000,
FILE’d:\db2data\personl’ 5000)
EXTENTSIZE 32 PREFETCHSIZE 64
TEMPORARY TABLESPACE
MANAGED BY SYSTEM USING (’f:\db2temp\personl’)
WITH "Personnel DB for BSchiefer Co"
In this example, the definition for each of the initial table spaces is explicitly
provided. You only need to specify the table space definitions for those table
spaces for which you do not want to use the default definition.
Related concepts:
v “System catalog tables” on page 175
v “Table space design” in Administration Guide: Planning
Related tasks:
v “Creating a table space” on page 149
Related reference:
v “CREATE DATABASE command” in Command Reference
When you create a database, three initial table spaces are created. The page size for
the three initial table spaces is based on the default that is established or accepted
when you use the CREATE DATABASE command. This default also represents the
default page size for all future CREATE BUFFERPOOL and CREATE TABLESPACE
statements. If you do not specify the page size when creating the database, the
default page size is 4 KB. If you do not specify the page size when creating a table
space, the default page size is the one set when you created the database.
Prerequisites:
You must know the device or file names of the containers that you will reference
when creating your table spaces. In addition, you must know the space associated
with each device or file name that you will allocate to the table space.
Procedure:
1. Expand the object tree until you see the Table spaces folder.
2. Right-click the Table spaces folder, and select Create —> Table Space Using Wizard
from the pop-up menu.
3. Follow the steps in the wizard to complete your task.
The following SQL statement creates an SMS table space on Windows using three
directories on three separate drives:
CREATE TABLESPACE RESOURCE
MANAGED BY SYSTEM
USING (’d:\acc_tbsp’, ’e:\acc_tbsp’, ’f:\acc_tbsp’)
The following SQL statement creates a DMS table space using two file containers,
each with 5,000 pages:
CREATE TABLESPACE RESOURCE
MANAGED BY DATABASE
USING (FILE’d:\db2data\acc_tbsp’ 5000,
FILE’e:\db2data\acc_tbsp’ 5000)
In the previous two examples, explicit names are provided for the containers.
However, if you specify relative container names, the container is created in the
subdirectory created for the database.
When creating table space containers, the database manager creates any directory
levels that do not exist. For example, if a container is specified as
/project/user_data/container1 and the /project directory does not exist, the
database manager creates the /project and /project/user_data directories.
Starting with DB2 Universal Database Version 8.2, FixPak 4, any directories created
by the database manager are created with PERMISSION 700. This means that only
the owner has read, write, and execute access.
The assumption in the previous examples is that the table spaces are not associated
with a specific database partition group. The default database partition group
IBMDEFAULTGROUP is used when the following parameter is not specified in the
statement:
IN database_partition_group_name
The following SQL statement creates a DMS table space on a Linux and UNIX
system using three logical volumes of 10 000 pages each, and specifies their I/O
characteristics:
CREATE TABLESPACE RESOURCE
MANAGED BY DATABASE
USING (DEVICE ’/dev/rdblv6’ 10000,
DEVICE ’/dev/rdblv7’ 10000,
DEVICE ’/dev/rdblv8’ 10000)
OVERHEAD 7.5
TRANSFERRATE 0.06
The UNIX devices mentioned in this SQL statement must already exist, and the
instance owner and the SYSADM group must be able to write to them.
UNIX devices are classified into two categories: character serial devices and
block-structured devices. For all file-system devices, it is normal to have a
corresponding character serial device (or raw device) for each block device (or
cooked device). The block-structured devices are typically designated by names
similar to “hd0” or “fd0”. The character serial devices are typically designated by
names similar to “rhd0”, “rfd0”, or “rmt0”. These character serial devices have
faster access than block devices. Character serial device names, not block device
names, should be used in the CREATE TABLESPACE statement.
The overhead and transfer rate help to determine the best access path to use when
the SQL statement is compiled. The current defaults for new table spaces in
databases created in DB2 Version 9.1 or later are:
v OVERHEAD 7.5 ms
v TRANSFERRATE 0.06 ms
New table spaces in databases created in earlier versions of DB2 use the following
defaults:
v OVERHEAD 12.67 ms
v TRANSFERRATE 0.18 ms
DB2 can greatly improve the performance of sequential I/O using the sequential
prefetch facility, which uses parallel I/O.
You can also create a table space that uses a page size larger than the default 4 KB
size. The following SQL statement creates an SMS table space on a Linux and
UNIX system with an 8 KB page size.
CREATE TABLESPACE SMS8K
PAGESIZE 8192
MANAGED BY SYSTEM
USING (’FSMS_8K_1’)
BUFFERPOOL BUFFPOOL8K
Notice that the associated buffer pool must also have the same 8 KB page size.
The created table space cannot be used until the buffer pool it references is
activated.
You can use the ALTER TABLESPACE SQL statement to add, drop, or resize
containers in a DMS table space and to modify the PREFETCHSIZE, OVERHEAD,
and TRANSFERRATE settings for a table space. You should commit the transaction
that issues the ALTER TABLESPACE statement as soon as possible to prevent
system catalog contention. You should also consider letting the DB2 database
system automatically determine the prefetch size.
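For example, reusing the RESOURCE table space from the earlier examples, the
following sketch lets the database manager choose the prefetch size:
ALTER TABLESPACE RESOURCE PREFETCHSIZE AUTOMATIC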
Concurrent I/O (CIO) includes the advantages of DIO and also relieves the
serialization of write accesses.
DIO and CIO are supported on AIX; DIO is supported on HP-UX, Solaris
Operating Environment, Linux, and Windows operating systems.
The keywords NO FILE SYSTEM CACHING and FILE SYSTEM CACHING are
part of the CREATE and ALTER TABLESPACE SQL statements to allow you to
specify whether DIO or CIO is to be used with each table space. When NO FILE
SYSTEM CACHING is in effect, the database manager attempts to use Concurrent
I/O (CIO) wherever possible. In cases where CIO is not supported (for example, if
JFS is used), DIO is used instead.
When you issue the CREATE TABLESPACE statement, the dropped table recovery
feature is turned on by default. This feature lets you recover dropped table data
using table space-level restore and rollforward operations. This is useful because it
is faster than database-level recovery, and your database can remain available to
users.
However, the dropped table recovery feature can have some performance impact
on forward recovery when there are many drop table operations to recover or
when the history file is very large. You might want to disable this feature if you
plan to run numerous drop table operations, and you either use circular logging
or you do not think you will want to recover any of the dropped tables. To disable
this feature, you can explicitly set the DROPPED TABLE RECOVERY option to
OFF when you issue the CREATE TABLESPACE statement. Alternatively, you can
turn off the dropped table recovery feature for an existing table space using the
ALTER TABLESPACE statement.
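For example, either of the following statements disables the feature (the table
space name and container path are illustrative):
CREATE TABLESPACE TS1
   MANAGED BY SYSTEM USING ('d:\ts1')
   DROPPED TABLE RECOVERY OFF
ALTER TABLESPACE TS1 DROPPED TABLE RECOVERY OFF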
Related concepts:
v “Table space design” in Administration Guide: Planning
v “Table spaces in database partition groups” on page 163
v “Database managed space” in Administration Guide: Planning
v “System managed space” in Administration Guide: Planning
v “Sequential prefetching” in Performance Guide
Related tasks:
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
v “CREATE TABLESPACE statement” in SQL Reference, Volume 2
The containers associated with SMS table spaces are file system directories and the
files within these directories grow as the objects in the table space grow. The files
grow until a file system limit has been reached on one of the containers or until
the database’s table space size limit is reached (see SQL and XQuery limits).
DMS table spaces are made up of file containers or raw device containers, and
their sizes are set when the containers are assigned to the table space. The table
space is considered full when all of the space within the containers has been used.
However, unlike SMS, you can add or extend containers using the ALTER
TABLESPACE statement, allowing more storage space to be given to the table
space. DMS table spaces also have a feature called “auto-resize”. As space is
consumed in a DMS table space that can be automatically resized, the DB2
database system might extend one or more file containers. SMS table spaces have
similar capabilities for growing automatically but the term “auto-resize” is used
exclusively for DMS.
By default, the auto-resize feature is not enabled for a DMS table space. The
following statement creates a DMS table space that does not have auto-resize
enabled:
CREATE TABLESPACE DMS1 MANAGED BY DATABASE
USING (FILE ’/db2files/DMS1’ 10 M)
To enable the auto-resize feature, specify the AUTORESIZE YES clause as part of
the CREATE TABLESPACE statement:
CREATE TABLESPACE DMS1 MANAGED BY DATABASE
USING (FILE ’/db2files/DMS1’ 10 M) AUTORESIZE YES
You can also enable or disable the auto-resize feature after a DMS table space has
been created by using the AUTORESIZE clause on the ALTER TABLESPACE
statement:
ALTER TABLESPACE DMS1 AUTORESIZE YES
ALTER TABLESPACE DMS1 AUTORESIZE NO
The MAXSIZE NONE clause specifies that there is no maximum limit for the table
space. The table space can grow until a file system limit or until the DB2 table
space limit has been reached (see the SQL Limits section in the SQL Reference). No
maximum limit is the default if the MAXSIZE clause is not specified when the
auto-resize feature is enabled.
The ALTER TABLESPACE statement changes the value of MAXSIZE for a table
space that has auto-resize already enabled. For example:
ALTER TABLESPACE DMS1 MAXSIZE 1 G
ALTER TABLESPACE DMS1 MAXSIZE NONE
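You can also set the maximum size when the table space is created; a sketch
reusing the earlier definition:
CREATE TABLESPACE DMS1 MANAGED BY DATABASE
   USING (FILE '/db2files/DMS1' 10 M) AUTORESIZE YES MAXSIZE 100 M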
If a maximum size is specified, the actual value that DB2 enforces might be slightly
smaller than the value provided because DB2 attempts to keep container growth
consistent. It might not be possible to extend the containers by equal amounts and
reach the maximum size exactly.
A percentage value means that the increase size is calculated every time that the
table space needs to grow, and growth is based on a percentage of the table space
size at that time. For example, if the table space is 20 megabytes in size and the
increase size is 50 percent, the table space grows by 10 megabytes the first time (to
a size of 30 megabytes) and by 15 megabytes the next time.
If a size increase is specified, the actual value used by DB2 might be slightly
different than the value provided. This adjustment in the value used is done to
keep growth consistent across the containers in the table space.
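For example, to have the table space from the earlier examples grow by 50 percent
of its current size each time more space is needed:
ALTER TABLESPACE DMS1 INCREASESIZE 50 PERCENT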
For table spaces that can be automatically resized, DB2 attempts to increase the
size of the table space when all of the existing space has been used and a request
for more space is made.
Keeping in mind that DB2 uses a small portion (one extent) of each container for
meta-data, here is the table space map that is created for the table space based on
the CREATE TABLESPACE statement. (The table space map is part of the output
from a table space snapshot).
Table space map:
The table space map shows that the containers with an identifier of 2 and 3
(E:\TS1CONT and F:\TS1CONT) are the only containers in the last range of the map.
Therefore, when DB2 automatically extends the containers in this table space, it
will extend only those two containers.
Note: If a table space is created with all of the containers having the same size,
there is only one range in the map. In such a case, DB2 extends each of the
containers. To prevent restricting extensions to only a subset of the
containers, create a table space with containers of equal size.
As discussed in the MAXSIZE section, a limit on the maximum size of the table
space can be specified, or a value of NONE can be provided, which allows for no
limit on the growth. (When NONE or no limit is used, the upper limit is actually
defined by the file system limit or by the DB2 table space limit.) DB2 does not
attempt to increase the table space past the upper limit. However, before that limit
is reached, an attempt to increase a container might fail due to a full file system. In
this case, DB2 does not increase the table space any further and an “out of space”
condition will be returned to the application.
For example, a table space has three containers that are the same size and each
resides on its own file system. As work is done against the table space, DB2
automatically extends these three containers. Eventually, one of the file systems
becomes full, and the corresponding container can no longer grow. If more free
space cannot be made available on the file system you must perform container
operations against the table space such that the container in question is no longer
in the last range of the table space map. In this case, you could add a new stripe
set specifying two containers (one on each of the file systems that still has space),
or you could specify more or fewer containers (again, making sure that each
container being added is the same size and that there is sufficient room for growth
on each of the file systems being used). When DB2 attempts to increase the size of
the table space, it will now attempt to extend the containers in this new stripe set
instead of the older containers.
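For example, a sketch of adding a new stripe set with one container on each of
the two file systems that still have space (container paths and sizes are
assumptions):
ALTER TABLESPACE TS1
   BEGIN NEW STRIPE SET
   (FILE '/fs2/cont4' 10 M, FILE '/fs3/cont5' 10 M)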
The situation described above only applies to automatic storage table spaces that
are not enabled for automatic resizing. If an automatic storage table space is
enabled for automatic resizing, DB2 handles the full file system condition
automatically by adding a new stripe set of containers.
Monitoring:
Automatic resizing for DMS table spaces is displayed as part of the table space
monitor snapshot output. The increase size and maximum size values are also
displayed.
Related concepts:
v “How containers are added and extended in DMS table spaces” in Administration
Guide: Planning
v “Table space maps” in Administration Guide: Planning
v “Automatic storage databases” on page 54
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
v “CREATE TABLESPACE statement” in SQL Reference, Volume 2
A system temporary table space is used to store system temporary tables. When a
database is created, one of the three default table spaces defined is a system
temporary table space called “TEMPSPACE1”.
Prerequisites:
The containers to be associated with the system temporary table space must exist.
A database must always have at least one system temporary table space since
system temporary tables can only be stored in such a table space.
Procedure:
To create another system temporary table space, use the CREATE TABLESPACE
statement. For example,
CREATE SYSTEM TEMPORARY TABLESPACE tmp_tbsp
MANAGED BY SYSTEM
USING (’d:\tmp_tbsp’,’e:\tmp_tbsp’)
The only database partition group that can be specified when creating a system
temporary table space is IBMTEMPGROUP.
Related tasks:
v “Creating a user temporary table space” on page 159
Related reference:
v “CREATE TABLESPACE statement” in SQL Reference, Volume 2
Like regular table spaces, user temporary table spaces can be created in any
database partition group other than IBMTEMPGROUP. IBMDEFAULTGROUP is
the default database partition group that is used when creating a user temporary
table space.
Procedure:
To create a user temporary table space, use the CREATE TABLESPACE statement:
CREATE USER TEMPORARY TABLESPACE usr_tbsp
MANAGED BY DATABASE
USING (FILE ’d:\db2data\user_tbsp’ 5000,
FILE ’e:\db2data\user_tbsp’ 5000)
Related tasks:
v “Creating a user-defined temporary table” on page 212
Related reference:
v “CREATE TABLESPACE statement” in SQL Reference, Volume 2
v “DECLARE GLOBAL TEMPORARY TABLE statement” in SQL Reference, Volume
2
To avoid this double caching, most file systems have a feature that disables caching
at the file system level. This is generically referred to as non-buffered I/O. On UNIX,
this feature is commonly known as Direct I/O (or DIO). On Windows, this is
equivalent to opening the file with the FILE_FLAG_NO_BUFFERING flag. In
addition, some file systems such as IBM JFS2 or VERITAS VxFS also support
enhanced Direct I/O, that is, the higher-performing Concurrent I/O (CIO) feature.
The DB2 database manager automatically takes advantage of CIO on file systems
where this feature exists.
As stated above, the DB2 database manager automatically enables file system
caching when performing I/O. To disable it, you can use the CREATE
TABLESPACE or ALTER TABLESPACE statements. Use the NO FILE SYSTEM
CACHING clause to enable non-buffered I/O, thus disabling file caching for a
particular table space. Once non-buffered I/O is enabled, the DB2 database manager
automatically determines whether DIO or CIO is to be used on each platform. Given
the performance improvement of CIO, the DB2 database manager uses it whenever
it is supported; there is no user interface to specify which one is to be used.
Prerequisites:
Table 16 shows the supported configurations for using table spaces without file
system caching. It also indicates whether DIO or enhanced DIO (CIO) will be used
in each case.
Table 16. Supported configuration for table spaces without file system caching.
Platform    File system type and minimum level required     DIO or CIO requests submitted
                                                            by the DB2 database manager
AIX 5.2+    Journal File System (JFS)                       DIO
AIX 5.2+    Concurrent Journal File System (JFS2)           CIO
AIX 5.2+    VERITAS Storage Foundation for DB2 4.0 (VxFS)   CIO
Note: The VERITAS Storage Foundation for DB2 might have different operating
system prerequisites. The platforms listed above are the prerequisites for the
current DB2 release. Consult the VERITAS Storage Foundation for DB2
support information for prerequisite details.
Procedure:
The recommended method of enabling non-buffered I/O is at the table space level,
using the DB2 implementation method. This method allows you to apply
non-buffered I/O on specific table spaces while avoiding any dependency on the
physical layout of the database. It also allows the DB2 database manager to
determine which I/O is best used for each file, buffered or non-buffered.
The clauses NO FILE SYSTEM CACHING and FILE SYSTEM CACHING can be
specified in the CREATE and ALTER TABLESPACE statements to disable or
enable file system caching, respectively. The default is FILE SYSTEM CACHING. In
the case of ALTER TABLESPACE, existing connections to the database must be
terminated before the new caching policy takes effect.
Example 1: CREATE TABLESPACE <table space name> ...
By default, this new table space is created using buffered I/O; the FILE
SYSTEM CACHING clause is implied.
Example 2: CREATE TABLESPACE <table space name> ... NO FILE SYSTEM CACHING
The new NO FILE SYSTEM CACHING clause indicates that file system level
caching will be OFF for this particular table space.
Example 3: ALTER TABLESPACE <table space name> ... NO FILE SYSTEM CACHING
This statement disables file system level caching for an existing table space.
Example 4: ALTER TABLESPACE <table space name> ... FILE SYSTEM CACHING
This statement enables file system level caching for an existing table space.
This method of disabling file system caching provides control of the I/O mode,
buffered or non-buffered, at the table space level. Note that I/O access to Long
Field (LF) and Large Objects (LOBs) will be buffered for both SMS and DMS
containers.
Some UNIX platforms support the disabling of file system caching at a file system
level by using the MOUNT option. Consult your operating system documentation
for more information. However, it is important to understand the difference
between disabling file system caching at the table space level and at the file system
level. At the table space level, the DB2 database manager controls which files are to
be opened with and without file system caching. At the file system level, every file
residing on that particular file system will be opened without file system caching.
Some platforms, such as AIX, have certain requirements before you can use this
feature, such as serialization of read and write access. The DB2 database manager
adheres to these requirements; however, if the target file system also contains
non-DB2 files, consult your operating system documentation for any additional
requirements before enabling this feature.
In DB2 Version 8.1 FixPak 4, the registry variable DB2_DIRECT_IO disables file
system caching for all SMS containers except for Long Field (LF), Large Objects
(LOBs), and temporary table spaces on AIX JFS2. With the ability to enable this
feature at the table space level, starting in DB2 Version 8.2, this registry variable
has been deprecated. Setting this registry variable in DB2 Version 9.1 is equivalent
to altering all table spaces, SMS and DMS, with the NO FILE SYSTEM CACHING
clause.
Related concepts:
v “Buffer pool management” in Performance Guide
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
v “CREATE TABLESPACE statement” in SQL Reference, Volume 2
Related reference:
v “CREATE TABLESPACE statement” in SQL Reference, Volume 2
Prerequisites:
You must know the device or file names of the containers you are going to
reference when creating your table spaces. You must know the amount of space
associated with each device or file name that is to be allocated to the table space.
You will need the correct permissions to read and write to the container.
Procedure:
The physical and logical methods for identifying direct disk access differ based on
the operating system:
v On Windows operating systems:
To specify a physical hard drive, use the following syntax:
\\.\PhysicalDriveN
where N represents one of the physical drives in the system. In this case, N
could be replaced by 0, 1, 2, or any other positive integer:
\\.\PhysicalDrive5
To specify a logical drive, that is, an unformatted disk partition, use the
following syntax:
\\.\N:
where N: represents a logical drive letter in the system. For example, N: could
be replaced by E: or any other drive letter. To overcome the limitation imposed
by using a letter to identify the drive, you can use a globally unique identifier
(GUID) with the logical drive.
For Windows, there is a newer method for specifying DMS raw table space
containers: volumes (that is, basic disk partitions or dynamic volumes) are
assigned a globally unique identifier (GUID) when they are created, and this
GUID can be used to identify the container.
Related tasks:
v “Setting up a direct disk access device on Linux” on page 164
Prerequisites:
Before setting up raw I/O on Linux, one or more free IDE or SCSI disk partitions
are required.
Restrictions:
Procedure:
In this example, the raw disk partition to be used is /dev/sda5. It should not
contain any valuable data.
1. Calculate the number of 4 096-byte pages in this partition, rounding
down if necessary. For example:
# fdisk /dev/sda
Command (m for help): p
DB2 first queries the partition to see whether there is a file system on it; if yes,
the partition is not treated as a RAW device, and DB2 performs normal file
system I/O operations on the partition.
Table spaces on raw devices are also supported for all other page sizes supported
by the DB2 database manager.
Prior to Version 9, direct disk access on Linux was achieved using a raw controller
utility. This method has been deprecated by the operating system, and its use is
discouraged.
The prior method required you to "bind" a disk partition to a raw controller, and
then specify that raw controller to DB2 in the CREATE TABLESPACE statement:
CREATE TABLESPACE dms1
MANAGED BY DATABASE
USING (DEVICE ’/dev/raw/raw1’ 1170736)
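The bind step itself was performed with the Linux raw utility, for example (the
device names are illustrative):
raw /dev/raw/raw1 /dev/sda5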
Related tasks:
v “Attaching a direct disk access device” on page 163
However, you might need a buffer pool that has different characteristics than the
default buffer pool. You can create new buffer pools for the database manager to
use. Buffer pools improve database system performance immediately.
The page sizes that you specify for your table spaces should determine the page
sizes that you choose for your buffer pools. The choice of page size used for a
buffer pool is important because you cannot alter the page size after you create a
buffer pool.
Prerequisites:
Before you create a new buffer pool, resolve the following questions:
v What buffer pool name do you want to use?
v Will the buffer pool be created immediately, or the next time that the database
is deactivated and reactivated?
v Do you want to associate the buffer pool with a subset of all database partitions
that make up the database?
v What page size do you want to specify for the buffer pool?
v Will you specify a fixed size for the buffer pool, or will you allow DB2 to adjust
the size of the buffer pool in response to the requirements of your workload? It
is recommended that you allow DB2 to tune your buffer pool automatically by
leaving the size parameter unspecified during buffer pool creation.
Procedure:
1. Open the Create Buffer Pool window: From the Control Center, expand the object tree
until you find the Buffer Pools folder. Right-click the Buffer Pools folder and select
Create from the pop-up menu. The Create Buffer Pool window opens.
2. Type a new name for the buffer pool.
3. Specify the size of the pages to be used for the buffer pool. The valid values are 4 KB, 8
KB, 16 KB, and 32 KB.
4. Type the size of the buffer pool in pages.
5. Specify whether to use the default buffer pool size.
6. Specify whether to create the buffer pool immediately (this is the default setting), or
whether to create it the next time that the database is restarted.
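Alternatively, from the command line, a sketch that creates the 8 KB buffer pool
referenced in the earlier table space example and leaves its sizing to the
self-tuning memory manager:
CREATE BUFFERPOOL BUFFPOOL8K IMMEDIATE SIZE AUTOMATIC PAGESIZE 8 K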
Related concepts:
v “Self tuning memory” in Performance Guide
Related tasks:
v “Altering a buffer pool” on page 283
Related reference:
v “CREATE BUFFERPOOL statement” in SQL Reference, Volume 2
Prerequisites:
To create or alter a buffer pool, you must have either SYSADM or SYSCTRL
authority.
Procedure:
1. Open the Create Buffer Pool window: From the Control Center, expand the
object tree until you find the Buffer Pools folder. Right-click the Buffer Pools
folder and select Create from the pop-up menu. The Create Buffer Pool
window opens.
2. Type a new name for the buffer pool.
Related tasks:
v “Altering a buffer pool” on page 283
v “Creating a buffer pool” on page 166
Creating schemas
Schemas are used to organize object ownership within the database.
Creating a schema
While organizing your data into tables, it might also be beneficial to group tables
and other related objects together. This is done by defining a schema through the
use of the CREATE SCHEMA statement. Information about the schema is kept in
the system catalog tables of the database to which you are connected. As other
objects are created, they can be placed within this schema.
Unqualified access to objects within a schema is not allowed since the schema is
used to enforce uniqueness in the database. This becomes clear when considering
the possibility that two users could create two tables (or other objects) with the
same name. Without a schema to enforce uniqueness, ambiguity would exist if a
third user attempted to query the table. It is not possible to determine which table
to use without some further qualification.
The definer of any objects created as part of the CREATE SCHEMA statement is
the schema owner. This owner can GRANT and REVOKE schema privileges to
other users.
To allow another user to access a table without entering a schema name as part of
the table name qualification, you must establish a view for that user. The view
definition would specify the fully qualified table name, including the schema; the
user would then simply query using the view name. The view itself would be
qualified by the user’s schema as part of the view definition.
Prerequisites:
The database tables and other related objects that are to be grouped together must
exist.
To issue the CREATE SCHEMA statement, you must have DBADM authority.
To create a schema with any valid name, you need SYSADM or DBADM authority.
The new schema name cannot already exist in the system catalogs and it cannot
begin with "SYS".
Procedure:
If a user has SYSADM or DBADM authority, then the user can create a schema
with any valid name. When a database is created, IMPLICIT_SCHEMA authority is
granted to PUBLIC (that is, to all users).
1. Expand the object tree until you see the Schema folder within a database.
2. Right-click the Schema folder, and click Create.
3. Complete the information for the new schema, and click OK.
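From the command line, a minimal sketch (the schema name and authorization ID
are illustrative):
CREATE SCHEMA PAYROLL AUTHORIZATION JSMITH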
Related concepts:
v “Implicit schema authority (IMPLICIT_SCHEMA) considerations” on page 513
v “Grouping objects by schema” on page 6
v “Schema privileges” on page 514
Related tasks:
v “Setting a schema” on page 169
Related reference:
v “CREATE SCHEMA statement” in SQL Reference, Volume 2
Setting a schema
Once you have several schemas in existence, you might want to designate one as
the default schema for use by unqualified object references in dynamic SQL and
XQuery statements issued from within a specific DB2 connection.
Procedure:
To establish a default schema: Set the special register CURRENT SCHEMA to the
schema you want to use as the default. For example:
SET CURRENT SCHEMA = ’SCHEMA01’
The initial value of the CURRENT SCHEMA special register is equal to the
authorization ID of the current session user.
Related concepts:
v “Schemas” in SQL Reference, Volume 1
Related reference:
v “CURRENT SCHEMA special register” in SQL Reference, Volume 1
v “Reserved schema names and reserved words” in SQL Reference, Volume 1
v “SET SCHEMA statement” in SQL Reference, Volume 2
Copying a schema
Use the ADMIN_COPY_SCHEMA procedure to copy a single schema within the
same database or use the db2move utility with the -co COPY action to copy a
single schema or multiple schemas from a source database to a target database.
Most database objects from the source schema are copied to the target database
under the new schema. The db2move utility and the ADMIN_COPY_SCHEMA
procedure allow you to quickly make copies of a database schema. Once a model
schema is established, you can use it as a template for creating new versions.
Restrictions:
v The db2move utility attempts to copy all allowable schema objects, with the
exception of the following types:
– table hierarchy
– staging tables (not supported by the load utility in multiple partition database
environments)
– jars (Java routine archives)
– nicknames
– packages
– view hierarchies
– object privileges (All new objects are created with default authorizations)
– statistics (New objects do not contain statistics information)
– index extensions (user-defined structured type related)
– user-defined structured types and their transform functions
v If an object of one of the unsupported types is detected in the source schema, an
entry is logged to an error file, indicating that an unsupported object type was
detected. The COPY operation still succeeds; the logged entry is meant to
inform users of objects not copied by this operation.
v Objects that are not associated with a schema, such as table spaces and event
monitors, are not operated on during a copy schema operation.
v When copying a replicated table, the new copy of the table is not enabled for
replication. The table is re-created as a regular table.
v The source database must be cataloged if it does not reside in the same instance
as the target database.
DIAGTEXT
--------------------------------------------------------------------------------
[IBM][CLI Driver][DB2/LINUXX8664] SQL0290N Table space access is not allowed.
STATEMENT
--------------------------------------------------------------------------------
set integrity for "SALES "."ADVISE_INDEX" , "SALES"."ADVISE_MQT" , "SALES"."
1 record(s) selected.
Finally, issue the SET INTEGRITY statement for each of the tables listed to take
each table out of the Set Integrity Pending state.
Procedure:
This utility must be invoked on the target system if source and target schemas
reside on different systems. For copying schemas from one database to another,
this action requires a list of schema names to be copied from a source database,
separated by commas, and a target database name.
To copy a schema using the command line processor (CLP), use the db2move
command with the COPY action, as shown in the following examples:
Example 2:
The following is an example of a db2move -co COPY operation that copies schema
BAR into FOO from the sample database to the target database:
db2move sample COPY -sn BAR -co target_db target schema_map "((BAR,FOO))"
-u userid -p password
The new (target) schema objects are created using the same object names as the
objects in the source schema, but with the target schema qualifier.
Example 3:
The following example shows how to specify specific table space name mappings
to be used instead of the table spaces from the source system during a COPY
operation. You can specify the SYS_ANY keyword to indicate that the target table
space should be chosen using the default table space selection algorithm. In this
case, the db2move tool chooses any available table space to be used as the target.
For example:
db2move sample COPY -sn BAR -co target_db target schema_map "((BAR,FOO))"
tablespace_map "(SYS_ANY)" -u userid -p password
The SYS_ANY keyword can be used for all table spaces, or you can specify specific
mappings for some table spaces, and the default table space selection algorithm for
the remaining. For example:
db2move sample COPY -sn BAR -co target_db target schema_map "((BAR,FOO))"
tablespace_map "((TS1, TS2),(TS3, TS4), SYS_ANY)" -u userid -p password
This indicates that table space TS1 is mapped to TS2, TS3 is mapped to TS4, but
the remaining table spaces use a default table space selection algorithm.
Example 4:
You can also change the owner of each new object created in the target schema
after a successful COPY. The default owner of the target objects is the connect user;
if this option is specified, ownership is transferred to a new owner as
demonstrated in the following example (the new owner name is illustrative):
db2move sample COPY -sn BAR -co target_db target schema_map "((BAR,FOO))"
owner newowner -u userid -p password
Related concepts:
v “Schemas” in SQL Reference, Volume 1
Related tasks:
v “Dropping a schema” on page 294
v “Restarting a failed copy schema operation” on page 173
v “Setting a schema” on page 169
Related reference:
v “ADMIN_COPY_SCHEMA procedure – Copy a specific schema and its objects”
in Administrative SQL Routines and Views
v “ADMIN_DROP_SCHEMA procedure – Drop a specific schema and its objects”
in Administrative SQL Routines and Views
v “db2move - Database movement tool command” in Command Reference
The db2move utility reports errors and messages to the user using message and
error files. Copy schema operations use the COPYSCHEMA_<timestamp>.MSG
message file, and the COPYSCHEMA_<timestamp>.err error file. These files are
created in the current working directory. The current time is appended to the
filename to ensure uniqueness of the files. It is up to the user to delete these
message and error files when they are no longer required.
Object types:
The type of object being copied can be categorized as one of two types: physical
objects and business objects.
Failures that occur during the recreation of physical objects on the target database
are logged in the error file COPYSCHEMA_<timestamp>.err. For each failing
object, the error file contains information such as object name, object type, DDL
text, time stamp, and a string formatted sqlca (sqlca field names, followed by their
data values).
Example 1:
2.schema: FOO.T3
Type: TABLE
Error Msg: SQL0204N FOO.V1 is an undefined name.
Timestamp: 2005-05-18-14.08.35.68
DDL: create table FOO.T3
If any errors occur while creating physical objects, then at the end of the recreate
phase, before the load phase is attempted, the db2move utility fails and an error is
returned. All object creation on the target database is rolled back, and all internally
created tables are cleaned up on the source database. The rollback occurs at the
end of the recreate phase, after an attempt has been made to recreate each object,
rather than after the first failure, in order to gather all possible errors into the
error file. This allows you to address all of the failures before reissuing the operation.
Failures that occur during the recreation of business objects on the target database
do not cause the db2move utility to fail. Instead, these failures are logged in the
COPYSCHEMA_<timestamp>.err error file. Upon completion of the db2move
utility, you can examine the failures, address any issues, and manually recreate
each failed object (the DDL is provided in the error file for convenience).
If an error occurs while db2move is attempting to repopulate table data using the
load utility, the db2move utility will not fail. Rather, generic failure information is
logged to the COPYSCHEMA_<timestamp>.err file (object name, object type, DDL
text, time stamp, sqlca, and so on), and the fully qualified name of the table is
logged to another file, "LOADTABLE_<timestamp>.err". Each table is listed on a
separate line to satisfy the db2move -tf option format, similar to the following:
"FOO"."TABLE1"
"FOO 1"."TAB 444"
After addressing the issues that caused the load operations to fail (described in the
error file), you can reissue the db2move COPY command using the -tf option
(passing in the LOADTABLE.err file name) as shown in the following syntax:
Example 2:
db2move sourcedb COPY -tf LOADTABLE.err
-co TARGETDB mytargetdb -mode load_only
You can also input the table names manually using the -tn option, as shown in the
following syntax:
Example 3:
db2move sourcedb COPY -tn "FOO"."TABLE1","FOO 1"."TAB 444"
-co TARGETDB mytargetdb -mode load_only
Internal operation failures, such as memory errors or file system errors, can cause
the db2move utility to fail.
Should the internal operation failure occur during the DDL recreate phase, all
successfully created objects are rolled back from the target schema, and all
internally created tables such as the DMT table and the db2look table, are cleaned
up on the source database.
Should the internal operation failure occur during the load phase, all successfully
created objects remain on the target schema. All tables that experience a failure
during a load operation, and all tables which have not yet been loaded are logged
in the LOADTABLE.err error file. You can then issue the db2move COPY
command using the LOADTABLE.err as discussed in Example 2. If the db2move
utility abends (for example a system crash, the utility traps, the utility is killed,
and so on), then the information regarding which tables still need to be loaded is
Regardless of what error you might encounter during an attempted copy schema
operation, you always have the option of dropping the target schema using the
ADMIN_DROP_SCHEMA procedure and reissuing the db2move COPY command.
Related reference:
v “ADMIN_COPY_SCHEMA procedure – Copy a specific schema and its objects”
in Administrative SQL Routines and Views
v “ADMIN_DROP_SCHEMA procedure – Drop a specific schema and its objects”
in Administrative SQL Routines and Views
These tables are updated during the operation of a database; for example, when a
table is created. You cannot explicitly create or drop these tables, but you can
query and view their content. When the database is created, in addition to the
system catalog table objects, the following database objects are defined in the
system catalog:
v A set of routines (functions and procedures) in the schemas SYSIBM, SYSFUN,
and SYSPROC.
v A set of read-only views for the system catalog tables is created in the SYSCAT
schema.
v A set of updatable catalog views is created in the SYSSTAT schema. These
updatable views allow you to update certain statistical information to investigate
the performance of a hypothetical database, or to update statistics without using
the RUNSTATS utility.
After your database has been created, you might want to limit the access to the
system catalog views.
Related concepts:
v “Catalog views” in SQL Reference, Volume 1
v “Functions overview” in SQL Reference, Volume 1
v “User-defined functions” in SQL Reference, Volume 1
Related tasks:
v “Securing the system catalog view” on page 613
Related reference:
v “Functions” in SQL Reference, Volume 1
Note: By default directory files, including the database directory, are cached in
memory using the “Directory Cache Support (dir_cache)” configuration
parameter. When directory caching is enabled, a change made to a directory
(for example, using a CATALOG DATABASE or UNCATALOG
DATABASE command) by another application might not become effective
until your application is restarted. To refresh the directory cache used by a
command line processor session, issue a db2 terminate command.
In addition to the application-level cache, a database manager-level cache is also
used for internal database manager look-ups. To refresh this “shared” cache, issue
the db2stop and db2start commands.
Prerequisites:
Procedure:
To catalog a database with a different alias name using the command line
processor, use the CATALOG DATABASE command. For example, the following
command line processor command catalogs the personl database as humanres:
CATALOG DATABASE personl AS humanres
WITH "Human Resources Database"
Here, the system database directory entry will have humanres as the database alias,
which is different from the database name (personl).
To catalog a database on an instance other than the default using the command
line processor, use the CATALOG DATABASE command. In the following
example, connections to database B are to INSTNC_C. The instance instnc_c must
already be cataloged as a local node before attempting this command.
CATALOG DATABASE b as b_on_ic AT NODE instnc_c
Related reference:
v “CATALOG DATABASE command” in Command Reference
v “TERMINATE command” in Command Reference
v “dir_cache - Directory cache support configuration parameter” in Performance
Guide
v “UNCATALOG DATABASE command” in Command Reference
Prerequisites:
Procedure:
1. Open the Add System window: From the Control Center, right-click the All
Systems folder and select Add from the pop-up menu. The Add System
window opens.
2. To add a DB2 system, click the DB2 radio button. Then do the following:
a. Specify the physical machine, server system, or workstation where the
target database is located. The system name on the server system is defined
by the DB2SYSTEM DAS configuration parameter. This is the value that you
should use. If your network supports TCP/IP, then you can use discovery
to help complete the remaining fields on this window.
b. Type the host name or IP (Internet Protocol) address where the target
database resides. Issuing the TCP/IP hostname command on the server
system retrieves the server’s host name. Issuing a ping hostname command
will return the IP address of the host.
c. Specify a local nickname for the remote node where the database is located.
The node name you choose must not already exist in the node directory or
the admin node directory.
d. Specify the operating system where the target database is located.
e. Optional: Select LDAP, if the Lightweight Directory Access Protocol (LDAP)
is enabled and you want to catalog the system in the LDAP directory.
3. To add an IMSplex, click the IMS radio button. Then do the following:
a. Specify the IMSplex name that you want to add. The name must match the
identifier you have specified in the CSLSIxxx, CSLRIxxx, DFSCGxxx, and
CSLOIxxx proclib members for the IMSPLEX= parameter.
b. Type the TCP/IP address of the IMSplex host and the port number that
binds to the socket that IMS Connect manages. Valid port numbers are
defined in the PORTID parameter of the HWSCFGxxx proclib member.
c. Select OS/390 or z/OS as the operating system. This is the only option if
you are working with an IMSplex.
Related reference:
v “CATALOG DATABASE command” in Command Reference
Related reference:
v “CREATE DATABASE command” in Command Reference
For each database created, an entry is added to the directory containing the
following information:
v The database name provided with the CREATE DATABASE command
v The database alias name (which is the same as the database name, if an alias
name is not specified)
v The database comment provided with the CREATE DATABASE command
v The location of the local database directory
v An indicator that the database is indirect, which means that it resides on the
current database manager instance
v Other system information.
Related tasks:
v “Cataloging a database” on page 176
v “Enabling database partitioning in a database” on page 9
Related reference:
v “CREATE DATABASE command” in Command Reference
Prerequisites:
Before viewing either the local or system database directory files, you must have
previously created an instance and a database.
Procedure:
To see the contents of the local database directory file, issue the following
command, where <location> specifies the location of the database:
LIST DATABASE DIRECTORY ON <location>
To see the contents of the system database directory file, issue the LIST
DATABASE DIRECTORY command without specifying the location of the
database directory file.
Related reference:
v “LIST DATABASE DIRECTORY command” in Command Reference
Node directory
The database manager creates the node directory when the first database partition is
cataloged. To catalog a database partition, use the CATALOG NODE command. To
list the contents of the local node directory, use the LIST NODE DIRECTORY
command. The node directory is created and maintained on each database client.
The directory contains an entry for each remote workstation having one or more
databases that the client can access. The DB2 client uses the communication end
point information in the node directory whenever a database connection or
instance attachment is requested.
The entries in the directory also contain information on the type of communication
protocol to be used to communicate from the client to the remote database
partition. Cataloging a local database partition creates an alias for an instance that
resides on the same computer.
Related reference:
v “CATALOG LOCAL NODE command” in Command Reference
v “CATALOG NAMED PIPE NODE command” in Command Reference
Procedure:
Related concepts:
v “Local database directory” on page 178
v “System database directory” on page 178
Related tasks:
v “Searching the LDAP servers” on page 585
v “Viewing the local or system database directory files” on page 179
Related reference:
v “LIST DATABASE DIRECTORY command” in Command Reference
Prerequisites:
To catalog a database, you must have SYSADM or SYSCTRL authority; or, you
must have the catalog_noauth configuration parameter set to YES.
Procedure:
To update the directories using the command line processor, do the following:
1. Use one of the following commands to update the node directory:
v For a node having an APPC connection:
db2 CATALOG APPC NODE <nodename>
REMOTE <symbolic_destination_name> SECURITY <security_type>
v For a node having a TCP/IP connection:
db2 CATALOG TCPIP NODE <nodename>
REMOTE <hostname_or_address> SERVER <service_name_or_port_number>
For example:
db2 CATALOG TCPIP NODE MVSIPNOD REMOTE MVSHOST SERVER DB2INSTC
The default port used for TCP/IP connections on DB2 for OS/390 and z/OS
is 446.
2. If you work with DB2 Connect, you might also need to update the DCS
directory using the CATALOG DCS DATABASE command.
If you have remote clients, you must also update directories on each remote client.
Related concepts:
v “DCS directory values” in DB2 Connect User’s Guide
v “System database directory values” in DB2 Connect User’s Guide
Related reference:
v “CATALOG DATABASE command” in Command Reference
v “CATALOG DCS DATABASE command” in Command Reference
v “CATALOG TCPIP/TCPIP4/TCPIP6 NODE command” in Command Reference
Related concepts:
v “Discovery of administration servers, instances, and databases” on page 107
v “Lightweight Directory Access Protocol (LDAP) overview” on page 573
The database recovery log can be used to ensure that a failure (for example, a
system power outage or application error) does not leave the database in an
inconsistent state. In case of a failure, the changes already made but not committed
are rolled back, and all committed transactions, which might not have been
physically written to disk, are redone. These actions ensure the integrity of the
database.
Related concepts:
v “Understanding recovery logs” in Data Recovery and High Availability Guide and
Reference
Related concepts:
v “Interpreting administration notification log file entries” in Troubleshooting Guide
v “Interpreting diagnostic log file entries” in Troubleshooting Guide
v “Interpreting the db2diag.log file informational record” in Troubleshooting Guide
Related reference:
v “notifylevel - Notify level configuration parameter” in Performance Guide
v “diaglevel - Diagnostic error capture level configuration parameter” in
Performance Guide
Binding a utility creates a package, which is an object that includes all the
information needed to process specific SQL and XQuery statements from a single
source file.
Note: If you want to use these utilities from a client, you must bind them
explicitly.
Procedure:
To bind or rebind the utilities to a database, issue the following commands using
the command line processor:
connect to sample
bind @db2ubind.lst
Note: You must be in the directory where these files reside to create the packages
in the sample database. The bind files are found in the bnd subdirectory of
the sqllib directory. In this example, sample is the name of the database.
Related tasks:
v “Creating a database” on page 113
Related reference:
v “BIND command” in Command Reference
For information on generating DDL statements in DB2 for OS/390, see the DB2 for
z/OS and OS/390 help.
To generate DDL for database objects, you need SELECT privilege on the system
catalogs.
From the Objects page, specify which statements you want to generate. On the
Statement page, select the appropriate check boxes, as follows:
v Database objects: Generates DDL statements for the database objects, such as
tables, indexes, views, triggers, aliases, UDFs, data types, and sequences,
excluding any table spaces, database partition groups, and buffer pools that are
user-defined.
v Table spaces, database partition groups, and buffer pools: Generates the DDL
statements for these objects, excluding any of these objects that are user-defined.
v Authorization statements: Generates SQL authorization (GRANT) statements for
the database objects.
v Database statistics: Generates SQL update statements for updating the statistics
tables in the database.
v Update statistics: Only available if you have selected Database statistics.
Generates the RUNSTATS command, which updates the statistics on the
generated database.
Note: Choosing not to update the statistics allows you to create an empty
database that the optimizer will treat as containing data.
v Include COMMIT statements after every table: Generates a COMMIT statement
after the update statements for each table. The COMMIT statements are
generated only when you select Database statistics.
v Gather configuration parameters: Gathers any configuration parameters and
registry variables that are used by the SQL optimizer.
v XML Schema Repository (XSR) objects: XML schemas, DTDs, external entities:
Generates statements to re-create XSR objects. If you select this check box, you
must also specify the directory into which the generated XSR objects will be
recreated.
Select the objects on which you want to base the generated DDL. To limit the
scope of the generation, use the following options on the Objects page:
v To limit DDL generation to objects created by a particular user, specify that
user’s ID in the User field.
v To limit the DDL generation to objects in a particular schema, specify that
schema in the Schema field.
v To limit the DDL generation to objects related to specific tables, select the
Generate DDL for selected tables only check box. Then, select the tables you
want in the Available tables list box and move them to the Selected tables list
box.
You can use the Scheduler Settings page of the Tools Settings notebook to set the
default scheduling scheme. Note that if you set a default scheduling scheme, you
can still override it at the task level.
Optional: If you want to view the db2look command that is used to generate the
DDL script, click Show Command.
Click Generate to generate the DDL script. From the window that opens, you can
do the following:
v Copy the script to the Command Editor
v Save the script to the file system
v Run the script, and optionally save it to the Task Center.
Related concepts:
v “Data definition language (DDL)” in Developing SQL and External Routines
v “Savepoints and Data Definition Language (DDL)” in Developing SQL and
External Routines
Prerequisites:
Procedure:
1. Expand the object tree until you find the database that you want to quiesce or
unquiesce.
2. Right-click on the desired database and select Quiesce or Unquiesce from the pop-up
menu. The database will be quiesced or unquiesced immediately.
Related reference:
v “QUIESCE command” in Command Reference
v “UNQUIESCE command” in Command Reference
To implement data compression in a database system, there are two methods you
can employ:
Value compression
This method optimizes space usage for the representation of data, and the
storage structures used internally by the database management system
(DBMS) to store data. Value compression involves removing duplicate
entries for a value, and only storing one copy. The stored copy keeps track
of the location of any references to the stored value.
Row compression
This method compresses data rows by replacing repeating patterns that
span multiple column values within a row with shorter symbol strings.
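For example, both methods might be requested when a table is created, as in the
following sketch (the table definition is illustrative; COMPRESS YES requests row
compression and VALUE COMPRESSION requests value compression):
CREATE TABLE SALES
(SALES_DATE DATE NOT NULL,
REGION CHAR(10),
AMOUNT DECIMAL(9,2))
VALUE COMPRESSION
COMPRESS YES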
Related concepts:
v “Space requirements for database objects” in Administration Guide: Planning
v “Data row compression” on page 188
v “Space value compression for existing tables” on page 295
v “Space value compression for new tables” on page 187
When VALUE COMPRESSION is used, NULLs and zero-length data that has been
assigned to defined variable-length data types (VARCHAR, VARGRAPHIC,
LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB, and DBCLOB) will not be
stored on disk. Only overhead values associated with these data types will take up
disk space.
Related concepts:
v “Data row compression” on page 188
v “Space compression for tables” on page 187
v “Space value compression for existing tables” on page 295
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
Data row compression is not applicable to index, long, LOB, and XML data
objects. Row compression and table data replication support are not compatible.
Row compression statistics can be generated using the RUNSTATS command and
are stored in the system catalog table SYSCAT.TABLES. A compression estimation
option is available with the INSPECT utility.
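For example, an estimate of the compression savings for a single table might be
obtained with a command similar to the following (the table, schema, and result
file names are illustrative):
INSPECT ROWCOMPESTIMATE TABLE NAME EMPLOYEE SCHEMA FOO
RESULTS KEEP emprce.out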
Related concepts:
v “Space compression for tables” on page 187
v “Space value compression for existing tables” on page 295
v “Space value compression for new tables” on page 187
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “INSPECT command” in Command Reference
Table creation
The CREATE TABLE statement gives the table a name, which is a qualified or
unqualified identifier, and a definition for each of its columns. You can store each
table in a separate table space, so that a table space contains only one table. If a
table will be dropped and created often, it is more efficient to store it in a separate
table space and then drop the table space instead of the table. You can also store
many tables within a single table space. In a partitioned database environment, the
table space chosen also defines the database partition group and the database
partitions on which table data is stored.
The table does not contain any data at first. To add rows of data to it, use one of
the following:
v The INSERT statement
v The LOAD or IMPORT commands
Adding data to a table can be done without logging the change. The NOT
LOGGED INITIALLY clause on the CREATE TABLE statement prevents logging
the change to the table. Any changes made to the table by an INSERT, DELETE,
UPDATE, CREATE INDEX, DROP INDEX, or ALTER TABLE operation in the same
unit of work in which the table is created are not logged. Logging begins in
subsequent units of work.
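For example, the following sketch creates a table with logging initially turned off;
the initial population would be done in the same unit of work (the table definition
is illustrative):
CREATE TABLE WORK_STAGE
(ID INTEGER NOT NULL,
PAYLOAD VARCHAR(100))
NOT LOGGED INITIALLY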
Note: The maximum of 500 columns applies when using a 4 KB page size. The
maximum is 1012 columns when using an 8 KB, 16 KB, or 32 KB page size.
A column definition includes a column name, data type, and any necessary null
attribute, or default value (optionally chosen by the user).
The column name describes the information contained in the column and should
be something that will be easily recognizable. It must be unique within the table;
however, the same name can be used in other tables.
The data type of a column indicates the length of the values in it and the kind of
data that is valid for it. The database manager uses character string, numeric, date,
time and large object data types. Graphic string data types are only available for
database environments using multi-byte character sets. In addition, columns can be
defined with user-defined distinct types.
The null attribute specification indicates whether or not a column can contain null
values.
Related concepts:
v “Import Overview” in Data Movement Utilities Guide and Reference
Related tasks:
v “Creating and populating a table” on page 217
v “Loading data” in Data Movement Utilities Guide and Reference
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
Prerequisites:
To create a table, you must have at least one of the following privileges:
v CREATETAB privilege on the database and USE privilege on the table space,
and either:
– IMPLICIT_SCHEMA authority on the database if the implicit or explicit
schema name of the table does not exist
– CREATEIN privilege on the schema if the schema name of the table exists
v SYSADM or DBADM authority
Procedure:
To create a table:
1. Open the Create Table wizard: From the Control Center, expand the object tree
until you find the Tables folder. Right-click the Tables folder and select Create
from the pop-up menu. The Create Table wizard opens.
2. Complete each of the applicable wizard pages. Click the wizard overview link
on the first page for more information. The Finish push button is enabled
when you specify enough information for the wizard to create a table.
Related tasks:
v “Creating a table in a partitioned database environment” on page 191
v “Creating a table in multiple table spaces” on page 190
Prerequisites:
All table spaces must exist before the CREATE TABLE statement is run.
Restrictions:
The separation of the parts of the table can only be done using DMS table spaces.
1. Expand the object tree until you see the Tables folder.
2. Right-click the Tables folder, and select Create from the pop-up menu.
3. Type the table name and click Next.
4. Select columns for your table.
5. On the Table space page, click Use separate index space and Use separate long space,
specify the information, and click Finish.
To create a table in multiple table spaces using the command line, enter:
CREATE TABLE <name>
(<column_name> <data_type> <null_attribute>)
IN <table_space_name>
INDEX IN <index_space_name>
LONG IN <long_space_name>
The following example shows how the EMP_PHOTO table could be created to
store the different parts of the table in different table spaces:
CREATE TABLE EMP_PHOTO
(EMPNO CHAR(6) NOT NULL,
PHOTO_FORMAT VARCHAR(10) NOT NULL,
PICTURE BLOB(100K) )
IN RESOURCE
INDEX IN RESOURCE_INDEXES
LONG IN RESOURCE_PHOTO
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
You specify that a table is to span several database partitions when you create the
table. There is an additional option when creating a table in a partitioned database
environment: the distribution key. A distribution key is a key that is part of the
definition of a table. It determines the database partition on which each row of
data is stored.
If you do not specify the distribution key explicitly, the following defaults are
used. Ensure that the default distribution key is appropriate.
Prerequisites:
Before creating a table that will be physically divided or distributed, you need to
consider the following:
v Table spaces can span more than one database partition. The number of database
partitions they span depends on the number of database partitions in a database
partition group.
v Tables can be collocated by being placed in the same table space or by being
placed in another table space that, together with the first table space, is
associated with the same database partition group.
Restrictions:
The size limit for one database partition of a table is 64 GB or the available disk
space, whichever is smaller, assuming a 4 KB page size for the table space. The
size of the table as a whole can be as large as that limit times the number of
database partitions. With larger page sizes, the per-partition limit scales
accordingly: 128 GB for an 8 KB page size, 256 GB for a 16 KB page size, and
512 GB for a 32 KB page size (or the available disk space, in each case).
Procedure:
Following is an example:
CREATE TABLE MIXREC (MIX_CNTL INTEGER NOT NULL,
MIX_DESC CHAR(20) NOT NULL,
MIX_CHR CHAR(9) NOT NULL,
MIX_INT INTEGER NOT NULL,
MIX_INTS SMALLINT NOT NULL,
MIX_DEC DECIMAL NOT NULL,
MIX_FLT FLOAT NOT NULL,
MIX_DATE DATE NOT NULL)
IN MIXTS12
DISTRIBUTE BY HASH (MIX_INT)
In the preceding example, the table space is MIXTS12 and the distribution key is
MIX_INT. If the distribution key is not specified explicitly, it is MIX_CNTL. (If no
primary key is specified and no distribution key is defined, the distribution key is
the first non-long column in the list.)
A row of a table, and all information about that row, always resides on the same
database partition.
Related concepts:
v “Database partition group design” in Administration Guide: Planning
v “Database partition groups” in Administration Guide: Planning
v “Table collocation” in Administration Guide: Planning
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
You can create a partitioned table by using the Create Table wizard in the DB2
Control Center or by using the CREATE TABLE statement.
Prerequisites:
To create a table, the privileges held by the authorization ID of the statement must
include at least one of the following authorities or privileges:
v CREATETAB authority on the database and USE privilege on all the table spaces
used by the table, as well as one of:
– IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
– CREATEIN privilege on the schema, if the schema name of the table refers to
an existing schema
v SYSADM or DBADM authority
Procedure:
You can create a partitioned table from the DB2 Control Center or from the DB2
command line processor (CLP).
To use the Create Table wizard in the DB2 Control Center to create a partitioned
table:
1. Expand the object tree until you see the Tables folder.
2. Right-click the Tables folder, and select Create from the pop-up menu.
3. Follow the steps in the wizard to complete your task.
To use the CLP to create a partitioned table, issue the CREATE TABLE statement:
CREATE TABLE <NAME> (<column_name> <data_type> <null_attribute>) IN
<table space list> PARTITION BY RANGE (<column expression>)
STARTING FROM <constant> ENDING <constant> EVERY <constant>
For example, the following statement creates a table where rows with a ≥ 1 and a ≤
20 are in PART0 (the first data partition), rows with 21 ≤ a ≤ 40 are in PART1 (the
second data partition), up to 81 ≤ a ≤ 100 are in PART4 (the last data partition).
CREATE TABLE foo(a INT)
PARTITION BY RANGE (a) (STARTING FROM (1)
ENDING AT (100) EVERY (20))
Related concepts:
v “Large object behavior in partitioned tables” in SQL Reference, Volume 1
v “Table partitioning” in Administration Guide: Planning
v “Table partitioning keys” in Administration Guide: Planning
v “Understanding clustering index behavior on partitioned tables” in Performance
Guide
v “Data organization schemes in DB2 and Informix databases” in Administration
Guide: Planning
v “Understanding index behavior on partitioned tables” in Performance Guide
v “Optimization strategies for partitioned tables” in Performance Guide
v “Partitioned tables” in Administration Guide: Planning
v “Partitioned materialized query table behavior” on page 206
Related tasks:
v “Rotating data in a partitioned table” on page 339
v “Approaches to defining ranges on partitioned tables” on page 195
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Creating and populating a table” on page 217
v “Approaches to migrating existing tables and views to partitioned tables” on
page 198
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
To completely define the range for each data partition, you must specify sufficient
boundaries. The following is a list of guidelines to consider when defining ranges
on a partitioned table:
v The STARTING clause specifies a low boundary for the data partition range.
This clause is mandatory for the lowest data partition range (although you can
define the boundary as MINVALUE). The lowest data partition range is the data
partition with the lowest specified bound.
v The ENDING (or VALUES) clause specifies a high boundary for the data
partition range. This clause is mandatory for the highest data partition range
(although you can define the boundary as MAXVALUE). The highest data
partition range is the data partition with the highest specified bound.
v If you do not specify an ENDING clause for a data partition, then the next
greater data partition must specify a STARTING clause. Likewise, if you do not
specify a STARTING clause, then the previous data partition must specify an
ENDING clause.
v MINVALUE specifies a value that is smaller than any possible value for the
column type being used. MINVALUE and INCLUSIVE or EXCLUSIVE cannot be
specified together.
v MAXVALUE specifies a value that is larger than any possible value for the
column type being used. MAXVALUE and INCLUSIVE or EXCLUSIVE cannot
be specified together.
v INCLUSIVE indicates that all values equal to the specified value are to be
included in the data partition containing this boundary.
v EXCLUSIVE indicates that all values equal to the specified value are NOT to be
included in the data partition containing this boundary.
v The NULL clause specifies whether null values are to be sorted high or low
when considering data partition placement. By default, null values are sorted
high. Null values in the table partitioning key columns are treated as positive
infinity, and are placed in a range ending at MAXVALUE. If no such data
partition is defined, null values are considered to be out-of-range values. Use the
NOT NULL constraint if you want to exclude null values from table partitioning
key columns. LAST specifies that null values are to appear last in a sorted list of
values. FIRST specifies that null values are to appear first in a sorted list of
values.
v When using the long form of the syntax, each data partition must have at least
one bound specified.
The ranges specified for each data partition can be generated automatically or
manually.
Examples 1 and 2 demonstrate how to use the CREATE TABLE statement to define
and generate automatically the ranges specified for each data partition.
Example 1: Issue a create table statement with the following ranges defined:
CREATE TABLE lineitem (
l_orderkey DECIMAL(10,0) NOT NULL,
l_quantity DECIMAL(12,2),
l_shipdate DATE,
l_year_month INT GENERATED ALWAYS AS (YEAR(l_shipdate)*100 + MONTH(l_shipdate)))
PARTITION BY RANGE(l_shipdate)
(STARTING ('1/1/1992') ENDING ('12/31/1992') EVERY 1 MONTH);
This statement results in 12 data partitions, each holding one month of key
values: the first holds (l_shipdate) >= ('1/1/1992') and (l_shipdate) < ('2/1/1992'),
the next (l_shipdate) < ('3/1/1992'), and so on, up to the last, which holds
(l_shipdate) <= ('12/31/1992').
The starting value of the first data partition is inclusive because the overall starting
bound (’1/1/1992’) is inclusive (default). Similarly, the ending bound of the last
data partition is inclusive because the overall ending bound (’12/31/1992’) is
inclusive (default). The remaining STARTING values are inclusive and the
remaining ENDING values are all exclusive. Each data partition holds n key values
where n is given by the EVERY clause. Use the formula (start + every) to find
the end of the range for each data partition. The last data partition might have
fewer key values if the EVERY value does not divide evenly into the START and
END range.
Example 2: Issue a create table statement with the following ranges defined:
CREATE TABLE t(a INT, b INT)
PARTITION BY RANGE(b)
(STARTING FROM (1) EXCLUSIVE ENDING AT (1000) EVERY (100))
This statement results in 10 data partitions each with 100 key values (1 < b <= 101,
101 < b <= 201, ..., 901 < b <= 1000).
The starting value of the first data partition (b > 1 and b <= 101) is exclusive
because the overall starting bound (1) is exclusive. Similarly, the ending bound of
the last data partition (b > 901 and b <= 1000) is inclusive because the overall
ending bound (1000) is inclusive. The remaining STARTING values are all exclusive and
the remaining ENDING values are all inclusive. Each data partition holds n key
values where n is given by the EVERY clause. Finally, if both the starting and
ending bound of the overall clause are exclusive, the starting value of the first data
partition is exclusive because the overall starting bound (1) is exclusive. Similarly
the ending bound of the last data partition is exclusive because the overall ending
bound (1000) is exclusive. The remaining STARTING values are all exclusive and
the ENDING values are all inclusive. Each data partition (except the last) holds n
key values where n is given by the EVERY clause.
Manually generated:
Example 3:
This statement partitions on two date columns, both of which are generated. Notice
the use of the manually generated (long) form of the CREATE TABLE syntax and that
only one end of each range is specified. The other end is implied from the adjacent
data partition and the use of the INCLUSIVE option:
CREATE TABLE sales(invoice_date DATE,
inv_month INT NOT NULL GENERATED ALWAYS AS (MONTH(invoice_date)),
inv_year INT NOT NULL GENERATED ALWAYS AS (YEAR(invoice_date)),
item_id INT NOT NULL,
cust_id INT NOT NULL)
PARTITION BY RANGE (inv_year, inv_month)
(PART Q1_02 STARTING (2002,1) ENDING (2002,3) INCLUSIVE,
PART Q2_02 ENDING (2002,6) INCLUSIVE,
PART Q3_02 ENDING (2002,9) INCLUSIVE,
PART Q4_02 ENDING (2002,12) INCLUSIVE,
PART CURRENT ENDING (MAXVALUE, MAXVALUE));
Gaps in the ranges are permitted. The CREATE TABLE syntax supports gaps by
allowing you to specify a STARTING value for a range that does not line up
against the ENDING value of the previous data partition.
Example 4:
Use of the ALTER TABLE statement, which allows data partitions to be added or
removed, can also cause gaps in the ranges.
When you insert a row into a partitioned table, it is automatically placed into the
proper data partition based on its key value and the range it falls within. If it falls
outside of any ranges defined for the table, the insert fails and the following error
is returned to the application:
SQL0327N The row cannot be inserted into table <tablename>
because it is outside the bounds of the defined data partition ranges.
SQLSTATE=22525
Restrictions:
v Table level restrictions:
– Tables created using the automatically generated form of the syntax
(containing the EVERY clause) are constrained to use a numeric or datetime
type in the table partitioning key.
v Statement level restrictions:
– MINVALUE and MAXVALUE are not supported in the automatically
generated form of the syntax.
– Ranges are ascending.
– Only one column can be specified in the automatically generated form of the
syntax.
Related concepts:
v “Attributes of detached data partitions” on page 354
v “Data partitions” in Administration Guide: Planning
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Creating partitioned tables” on page 193
v “Dropping a data partition” on page 358
v “Approaches to migrating existing tables and views to partitioned tables” on
page 198
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
v “Rotating data in a partitioned table” on page 339
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “CREATE TABLE statement” in SQL Reference, Volume 2
To migrate data from a DB2 9.1 table into a partitioned table, use the LOAD
command to populate an empty partitioned table.
Example 1:
To avoid creating a third copy of the data in a flat file, issue the LOAD command
to pull the data from an SQL query directly into the new partitioned table.
SELECT * FROM t1;
DECLARE c1 CURSOR FOR SELECT * FROM t1;
LOAD FROM c1 OF CURSOR INSERT INTO sales_dp;
SELECT * FROM sales_dp;
You can convert DB2 9.1 data in a UNION ALL view into a partitioned table.
UNION ALL views are used to manage large tables, and achieve easy roll-in and
roll-out of table data while providing the performance advantages of branch
elimination. Table partitioning accomplishes all of these and is easier to administer.
Using the ALTER TABLE ... ATTACH operation, you can achieve conversion with
no movement of data in the base table. Indexes and dependent views or
materialized query tables (MQTs) must be re-created after the conversion.
Example 2:
Create a partitioned table with a single dummy partition. The range should be
chosen so that it does not overlap with the first data partition to be attached.
Issue the SET INTEGRITY statement to bring the attached data partitions online.
SET INTEGRITY FOR sales_dp IMMEDIATE CHECKED
FOR EXCEPTION IN sales_dp USE sales_ex;
Conversion considerations:
Related concepts:
v “Resolving a mismatch when trying to attach a data partition to a partitioned
table” on page 348
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Altering a table” on page 297
v “Altering or dropping a view” on page 330
Related reference:
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “LOAD command” in Command Reference
v “SYSCAT.COLUMNS catalog view” in SQL Reference, Volume 1
Restrictions:
Materialized query tables defined with REFRESH DEFERRED are not used to
optimize static queries.
Setting the CURRENT REFRESH AGE special register to a value other than zero
should be done with caution. By allowing a materialized query table that might
not represent the values of the underlying base table to be used to optimize the
processing of the query, the result of the query might not accurately represent the
data in the underlying table. This might be reasonable when you know the
underlying data has not changed, or you are willing to accept the degree of error
in the results based on your knowledge of the data.
If you want to create a new base table that is based on any valid fullselect, specify
the DEFINITION ONLY keyword when you create the table. When the create table
operation is complete, the table is treated as a regular base table rather than as a
materialized query table.
Here are some of the key restrictions regarding materialized query tables:
1. You cannot alter a materialized query table.
2. You cannot alter the length of a column for a base table if that table has a
materialized query table.
3. You cannot import data into a materialized query table.
4. You cannot create a unique index on a materialized query table.
5. You cannot create a materialized query table based on the result of a query that
references one or more nicknames.
Procedure:
The creation of a materialized query table with the replication option can be used
to replicate tables across all nodes in a partitioned database environment. These are
known as “replicated materialized query tables”.
To create a materialized query table, you use the CREATE TABLE statement with
the AS fullselect clause and the REFRESH IMMEDIATE or REFRESH DEFERRED
options.
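For example, a system-maintained materialized query table with deferred refresh
might be defined as follows (the table, columns, and query are illustrative):
CREATE TABLE sales_summary AS
(SELECT region, SUM(amount) AS total_amount
FROM sales
GROUP BY region)
DATA INITIALLY DEFERRED REFRESH DEFERRED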
You have the option of uniquely identifying the names of the columns of the
materialized query table. The list of column names must contain as many names as
there are columns in the result table of the full select. A list of column names must
be given if the result table of the full select has duplicate column names or has an
unnamed column. An unnamed column is derived from a constant, function,
expression, or set operation that is not named using the AS clause of the select list.
If a list of column names is not specified, the columns of the table inherit the
names of the columns of the result set of the full select.
When creating a materialized query table, you have the option of specifying
whether the system will maintain the materialized query table or the user will
maintain the materialized query table. The default is system-maintained, which can
be explicitly specified using the MAINTAINED BY SYSTEM clause.
User-maintained materialized query tables are specified using the MAINTAINED
BY USER clause.
A materialized query table can provide pre-computed results. If you want the
refresh of the materialized query table to be deferred, specify the REFRESH
DEFERRED keyword. Materialized query tables specified with REFRESH
DEFERRED will not reflect changes to the underlying base tables, so use them only
where this is not a requirement. For example, if you run DSS queries, you could
use the materialized query table to contain existing data.
You use the CURRENT REFRESH AGE special register to specify the amount of
time that the materialized query table defined with REFRESH DEFERRED can be
used for a dynamic query before it must be refreshed. To set the value of the
CURRENT REFRESH AGE special register, you can use the SET CURRENT
REFRESH AGE statement.
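For example, to allow deferred materialized query tables to be considered for
dynamic queries in the current session:
SET CURRENT REFRESH AGE ANY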
The CURRENT REFRESH AGE special register can be set to ANY, or to a value of
99999999999999, to allow deferred materialized query tables to be used in a dynamic
query. This collection of nines is the maximum value allowed in this special register,
which is a timestamp duration value with a data type of DECIMAL(20,6). A value
of zero (0) indicates that only materialized query tables defined with REFRESH
IMMEDIATE might be used to optimize the processing of a query. In such a case,
materialized query tables defined with REFRESH DEFERRED are not used for
optimization.
Materialized query tables have queries routed to them when the table has been
defined using the ENABLE QUERY OPTIMIZATION clause, and, if a deferred
materialized query table, the CURRENT REFRESH AGE special register has been
set to ANY. However, with user-maintained materialized query tables, the use of
the CURRENT REFRESH AGE special register is not the best method to control the
rerouting of queries. The CURRENT MAINTAINED TABLE TYPES FOR
OPTIMIZATION special register will indicate which kind of cached data will be
available for routing.
As activity changes the source data, a materialized query table will, over time, no
longer contain accurate data; to bring it up to date, use the REFRESH TABLE
statement.
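For example, using the illustrative table from the earlier sketch:
REFRESH TABLE sales_summary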
Related concepts:
v “Isolation levels” in SQL Reference, Volume 1
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION special
register” in SQL Reference, Volume 1
v “CURRENT REFRESH AGE special register” in SQL Reference, Volume 1
v “REFRESH TABLE statement” in SQL Reference, Volume 2
v “SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION
statement” in SQL Reference, Volume 2
v “SET CURRENT REFRESH AGE statement” in SQL Reference, Volume 2
v “Restrictions on native XML data store” in XML Guide
Note: The query optimizer does not use user-maintained MQTs when selecting an
access plan for static queries.
Restrictions:
See the “Creating a materialized query table” topic for additional restrictions.
Procedure:
To create a materialized query table, you use the CREATE TABLE statement with
the AS fullselect clause and the REFRESH IMMEDIATE or REFRESH DEFERRED
options.
Related tasks:
v “Populating a user-maintained materialized query table” on page 205
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
Prerequisites:
Procedure:
You can populate user-maintained MQTs using triggers, insert operations, or the
LOAD, IMPORT, and DB2 DataPropagator utilities. When performing the initial
population of a user-maintained MQT, you can avoid logging overhead by using
the LOAD or IMPORT utilities.
Note: If you want to populate the MQT with SQL insert operations, you need to
bring the MQT out of the Set Integrity Pending state. However, the
optimizer must first be disabled by using the DISABLE QUERY
OPTIMIZATION option in the SET MATERIALIZED QUERY clause of the
ALTER TABLE statement to ensure that a dynamic SQL query does not
accidentally optimize to this MQT while the data in it is still being
established. Once the MQT has been populated, optimization needs to be
enabled again.
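For example, a user-maintained MQT might be taken out of the Set Integrity
Pending state without integrity checking by a statement similar to the following
(the table name is illustrative):
SET INTEGRITY FOR sales_summary MATERIALIZED QUERY IMMEDIATE UNCHECKED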
Related reference:
v “IMPORT Command” in Command Reference
v “LOAD command” in Command Reference
The following guidelines and restrictions apply when working with partitioned
MQTs or partitioned tables with detached dependents:
v If you issue a DETACH PARTITION operation and there are any dependent
tables that need to be incrementally maintained with respect to the detached
data partition (these dependent tables are referred to as detached dependent
tables), then the newly detached table is initially inaccessible. The table will be
marked L in the TYPE column of the SYSCAT.TABLES catalog view. This is
referred to as a detached table. This prevents the table from being read,
modified or dropped until the SET INTEGRITY statement is run to incrementally
maintain the detached dependent tables. After the SET INTEGRITY statement is
run on all detached dependent tables, the detached table is transitioned to a
regular table where it becomes fully accessible.
v To detect that a detached table is not yet accessible, query the
SYSCAT.TABDETACHEDDEP catalog view. If any inaccessible detached tables
are detected, run the SET INTEGRITY statement with the IMMEDIATE
CHECKED option on all the detached dependents to transition the detached
table to a regular accessible table. If you try to access a detached table before all
its detached dependents are maintained, error code SQL20285N is returned.
v The DATAPARTITIONNUM function cannot be used in a materialized query
table (MQT) definition. Attempting to create an MQT using this function returns
an error (SQLCODE SQL20058N, SQLSTATE 428EC).
v When creating an index on a table with detached data partitions, the index does
not include the data in the detached data partitions unless the detached data
partition has a dependent materialized query table (MQT) that needs to be
incrementally refreshed with respect to it. In this case, the index includes the
data for this detached data partition.
v Altering a table with attached data partitions to an MQT is not allowed.
v Partitioned staging tables are not supported.
v Attaching to an MQT is not directly supported. See Example 1 for details.
Example 1:
Use the SET INTEGRITY statement with the IMMEDIATE CHECKED option to
check the attached data partition for integrity violations. This step is required
before changing the table back to an MQT. The SET INTEGRITY statement with
the IMMEDIATE UNCHECKED option is used to bypass the required full refresh
of the MQT. The index on the MQT is necessary to achieve optimal performance.
The use of exception tables with the SET INTEGRITY statement is recommended,
where appropriate.
Typically, you create a partitioned MQT on a large fact table that is also
partitioned. If you do roll out or roll in table data on the large fact table, you must
adjust the partitioned MQT manually, as demonstrated in Example 2.
Example 2:
Detach the data to be rolled out from the fact table (lineitem) and the MQT and
re-load the staging table li_reuse with the new data to be rolled in:
ALTER TABLE lineitem DETACH PARTITION part0 INTO li_reuse;
LOAD FROM part_mqt_rotate.del OF DEL MESSAGES load.msg REPLACE INTO li_reuse;
ALTER TABLE quan_by_month DETACH PARTITION part0 INTO qm_reuse;
Prune qm_reuse before doing the insert. This deletes the detached data before
inserting the subselect data. This is accomplished with a load replace into the MQT
where the data file of the load is the content of the subselect.
db2 load from datafile.del of del replace into qm_reuse
You can refresh the table manually using INSERT INTO ... (SELECT ...). This is only
necessary on the new data, so the statement should be issued before attaching:
INSERT INTO qm_reuse
(SELECT COUNT(*) AS q_count, l_year_month AS q_year_month
FROM li_reuse
GROUP BY l_year_month);
Now you can roll in the new data for the fact table:
ALTER TABLE lineitem ATTACH PARTITION STARTING '1/1/1994'
ENDING '1/31/1994' FROM TABLE li_reuse;
SET INTEGRITY FOR lineitem ALLOW WRITE ACCESS IMMEDIATE CHECKED FOR
EXCEPTION IN li_reuse USE li_reuse_ex;
After attaching the data partition, the new data must be verified to ensure that it is
in range.
ALTER TABLE quan_by_month ADD MATERIALIZED QUERY
(SELECT COUNT(*) AS q_count, l_year_month AS q_year_month
FROM lineitem
GROUP BY l_year_month)
DATA INITIALLY DEFERRED REFRESH IMMEDIATE;
SET INTEGRITY FOR QUAN_BY_MONTH ALL IMMEDIATE UNCHECKED;
The data is not accessible until it has been validated by the SET INTEGRITY
statement. Although the REFRESH TABLE operation is supported, this scenario
demonstrates the manual maintenance of a partitioned MQT through the ATTACH
PARTITION and DETACH PARTITION operations. The data is marked as
validated by the user through the IMMEDIATE UNCHECKED clause of the SET
INTEGRITY statement.
Related concepts:
v “Partitioned tables” in Administration Guide: Planning
v “Asynchronous index cleanup” in Performance Guide
v “Understanding clustering index behavior on partitioned tables” in Performance
Guide
v “Understanding index behavior on partitioned tables” in Performance Guide
v “Optimization strategies for partitioned tables” in Performance Guide
Related tasks:
v “Creating a materialized query table” on page 201
v “Creating partitioned tables” on page 193
v “Dropping a materialized query or staging table” on page 365
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
v “Rotating data in a partitioned table” on page 339
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Altering a table” on page 297
v “Altering materialized query table properties” on page 335
Related reference:
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
v “SET INTEGRITY statement” in SQL Reference, Volume 2
v “ALTER TABLE statement” in SQL Reference, Volume 2
Prerequisites:
To create a table, the privileges held by the authorization ID of the statement must
include at least one of the following authorities and privileges:
v CREATETAB authority on the database and USE privilege on the table space, as
well as one of:
– IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
– CREATEIN privilege on the schema, if the schema name of the table refers to
an existing schema
v SYSADM or DBADM authority
Procedure:
If this command fails because the original data is incompatible with the
definition of table sourceC, you must transform the data in the original table as
it is being transferred to sourceC.
4. After the data has been successfully copied to sourceC, submit the ALTER
TABLE target ... ATTACH sourceC statement.
Related concepts:
v “Resolving a mismatch when trying to attach a data partition to a partitioned
table” on page 348
v “Partitioned tables” in Administration Guide: Planning
v “Tables” in SQL Reference, Volume 1
v “Mimicking databases using db2look” in Troubleshooting Guide
Related tasks:
v “Altering a table” on page 297
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “db2look - DB2 statistics and DDL extraction tool command” in Command
Reference
Materialized query tables are a powerful way to improve response time for
complex queries, especially queries that might require some of the following
operations:
v Aggregated data over one or more dimensions
v Joined and aggregated data over a group of tables
v Data from a commonly accessed subset of data
v Repartitioned data from a table, or part of a table, in a partitioned database
environment
Restrictions:
Procedure:
When a staging table is created, it is put in a pending state and has an indicator
that shows that the table is inconsistent or incomplete with regard to the content of
underlying tables and the associated materialized query table. The staging table
needs to be brought out of the pending and inconsistent state in order to start
collecting the changes from its underlying tables. While in a pending state, any
attempts to make modifications to any of the staging table’s underlying tables will
fail, as will any attempts to refresh the associated materialized query table.
There are several ways a staging table might be brought out of a pending state; for
example:
v SET INTEGRITY FOR <staging table name> STAGING IMMEDIATE
UNCHECKED
v SET INTEGRITY FOR <staging table name> IMMEDIATE CHECKED
Related tasks:
v “Altering materialized query table properties” on page 335
v “Creating a materialized query table” on page 201
v “Dropping a materialized query or staging table” on page 365
v “Refreshing the data in a materialized query table” on page 336
Related reference:
v “SET INTEGRITY statement” in SQL Reference, Volume 2
The description of this table does not appear in the system catalog, so the table is
not persistent and cannot be shared with other applications.
When the application using this table terminates or disconnects from the database,
any data in the table is deleted and the table is implicitly dropped.
Prerequisites:
A user temporary table space must exist before creating a user-defined temporary
table.
Restrictions:
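A statement matching the description that follows might look like this sketch (the
table space name usr_tbsp is illustrative; empltabl is the existing table referred to
below):
DECLARE GLOBAL TEMPORARY TABLE gbl_temp
LIKE empltabl
ON COMMIT DELETE ROWS
NOT LOGGED
IN usr_tbsp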
This statement creates a user temporary table called gbl_temp. The user temporary
table is defined with columns that have exactly the same name and description as
the columns of the empltabl table. The implicit definition only includes the column
name, data type, nullability characteristic, and column default value attributes. All
other column attributes including unique constraints, foreign key constraints,
triggers, and indexes are not defined. When a COMMIT operation is performed, all
data in the table is deleted if no WITH HOLD cursor is open on the table. Changes
made to the user temporary table are not logged. The user temporary table is
placed in the specified user temporary table space. This table space must exist or
the declaration of this table will fail.
Related tasks:
v “Creating a user temporary table space” on page 159
Related reference:
v “DECLARE GLOBAL TEMPORARY TABLE statement” in SQL Reference, Volume
2
v “ROLLBACK statement” in SQL Reference, Volume 2
v “SAVEPOINT statement” in SQL Reference, Volume 2
The first example shows a range-clustered table that is used to locate a student
using a STUDENT_ID. For each student record, the following information is
included:
v School ID
v Program ID
v Student number
v Student ID
v Student first name
v Student last name
v Student grade point average (GPA)
In this case, the student records are based solely on the STUDENT_ID. The
STUDENT_ID will be used to add, update, and delete student records.
Note: Other indexes can be added separately at another time. However, for the
purpose of this example, the organization of the table and how to access the
table’s data are defined when the table is created.
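A CREATE TABLE statement matching this description might look like the
following sketch (names and the key range are illustrative; the column sizes match
the record-size calculation below):
CREATE TABLE students
(school_id INT NOT NULL,
program_id INT NOT NULL,
student_num INT NOT NULL,
student_id INT NOT NULL,
first_name CHAR(30),
last_name CHAR(30),
gpa FLOAT)
ORGANIZE BY KEY SEQUENCE
(student_id STARTING FROM 1 ENDING AT 1000000)
ALLOW OVERFLOW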
The size of each record is the sum of the columns. In this case, there is a 10 byte
header + 4 + 4 + 4 + 4 + 30 + 30 + 8 + 3 (for nullable columns) equaling 97 bytes.
With a 4 KB page size (or 4096 bytes), after accounting for the overhead there are
4038 bytes, or enough room for 42 records per page. If 1 million student records
are allowed, there will be a need for 1 million divided by 42 records per page, or
23809.5 pages. This rounds up to 23810 pages that are needed. Four pages are
added for table overhead and three pages for extent mapping. The result is a
required preallocation of 23817 pages of 4 KB size. (The extent mapping assumes a
single container to hold this table. There should be three pages for each container.)
In the second example, which is a variation on the first, consider the idea of a
school board. In the school board there are 200 schools, each having 20 classrooms
with a capacity of 35 students. This school board can accommodate a maximum of
140,000 students.
In this case, the student records are based on three factors: the SCHOOL_ID, the
CLASS_ID, and the STUDENT_NUM values. Each of these three columns will
have unique values and will be used together to add, update, and delete student
records.
Note: As with the previous example, other indexes might be added separately and
at some other time.
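A corresponding CREATE TABLE statement might look like the following sketch
(names are illustrative; DISALLOW OVERFLOW reflects the discussion that
follows):
CREATE TABLE students
(school_id INT NOT NULL,
class_id INT NOT NULL,
student_num INT NOT NULL,
student_id INT NOT NULL,
first_name CHAR(30),
last_name CHAR(30),
gpa FLOAT)
ORGANIZE BY KEY SEQUENCE
(school_id STARTING FROM 1 ENDING AT 200,
class_id STARTING FROM 1 ENDING AT 20,
student_num STARTING FROM 1 ENDING AT 35)
DISALLOW OVERFLOW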
In this case, an overflow is not allowed. This makes sense because there is likely a
school board policy that restricts the number of students allowed in each class. In
this example, the largest possible class size is 35. When you couple this factor with
the physical limitations imposed by the number of classrooms and schools, it is
clear that there is no reason to allow an overflow in the number of students in the
school board.
It is possible that schools have varying numbers of classrooms. If this is the case,
when defining the range for the number of classrooms (using CLASS_ID), the
upper boundary should be the largest number of classrooms when considering all
of the schools. This might mean that some smaller schools (schools with fewer
classrooms than the largest school) will have space for student records that might
never be used (unless, for example, portable classrooms are added to the school).
By using the same 4 KB page size and the same student record size as in the
previous example, there can be 42 records per page. With 140,000 student records,
there will be a need for 3333.3 pages, or 3334 pages once rounding up is done.
There are two pages for table information, and three pages for extent mapping.
The result is a required preallocation of 3339 pages of 4 KB size.
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
When working to determine the best access path to required data, the SQL
compiler uses statistical information kept about the tables. Index statistics are
collected during a table scan when a RUNSTATS command is issued. For a
range-clustered table (RCT), the table is modeled as a regular table, and the index
is modeled as a function-based index.
Related concepts:
v “Guidelines for using range-clustered tables” on page 216
v “Range-clustered tables” in Administration Guide: Planning
Related concepts:
v “Range-clustered tables” in Administration Guide: Planning
v “Examples of range-clustered tables” on page 213
As part of creating a structured type hierarchy, you will create typed tables. You
can use typed tables to store instances of objects whose characteristics are defined
with the CREATE TYPE statement.
Prerequisites:
The type on which the hierarchy table or typed table will be created must exist.
Restrictions:
Partitioned hierarchy tables and partitioned typed tables are not supported.
(Partitioned tables are tables where data is partitioned into multiple storage objects
based on the specifications provided in the PARTITION BY clause of the CREATE
TABLE statement.)
Procedure:
You can create a hierarchy table or typed table using a variant of the CREATE
TABLE statement.
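For example, assuming a structured type Person_t has already been created with
the CREATE TYPE statement, a typed table might be defined as follows (the
names are illustrative):
CREATE TABLE Person OF Person_t
(REF IS oid USER GENERATED)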
Related concepts:
v “Typed tables” in Developing SQL and External Routines
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “CREATE TYPE (Structured) statement” in SQL Reference, Volume 2
Prerequisites:
Procedure:
You can populate a typed table after creating the structured types and then
creating the corresponding tables and subtables.
Related concepts:
v “Substitutability in typed tables” in Developing SQL and External Routines
v “Typed tables” in Developing SQL and External Routines
Related tasks:
v “Creating a hierarchy table or a typed table” on page 216
v “Creating typed tables” in Developing SQL and External Routines
v “Dropping typed tables” in Developing SQL and External Routines
v “Storing objects in typed table rows” in Developing SQL and External Routines
Related reference:
v “CREATE TYPE (Structured) statement” in SQL Reference, Volume 2
Prerequisites:
You must take the time to design and organize the tables that will hold your data.
Procedure:
1. Expand the object tree until you see the Tables folder.
2. Right-click the Tables folder, and click Create.
3. Follow the steps in the wizard to complete your tasks.
When creating a table, you can choose to have the columns of the table based on
the attributes of a structured type. Such a table is called a “typed table”.
A typed table can be defined to inherit some of its columns from another typed
table. Such a table is called a “subtable”, and the table from which it inherits is
called its “supertable”. The combination of a typed table and all its subtables is
called a “table hierarchy”. The topmost table in the table hierarchy (the one with
no supertable) is called the “root table” of the hierarchy.
You can also create a table that is defined based on the result of a query. This type
of table is called a materialized query table.
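For example, a minimal sketch of a materialized query table (the table and column names are illustrative):
CREATE TABLE DEPT_COUNT AS
   (SELECT WORKDEPT, COUNT(*) AS EMP_COUNT
      FROM EMPLOYEE
     GROUP BY WORKDEPT)
   DATA INITIALLY DEFERRED REFRESH DEFERRED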
Refer to the topics in the related information sections for other options that you
should consider when creating and populating a table.
Related concepts:
v “Import Overview” in Data Movement Utilities Guide and Reference
v “Load overview” in Data Movement Utilities Guide and Reference
v “Moving data across platforms - file format considerations” in Data Movement
Utilities Guide and Reference
v “Comparing IDENTITY columns and sequences” on page 235
v “Large object (LOB) column considerations” on page 221
v “Table creation” on page 189
Related tasks:
v “Creating a hierarchy table or a typed table” on page 216
v “Creating a materialized query table” on page 201
v “Creating a sequence” on page 234
v “Creating a table in a partitioned database environment” on page 191
v “Creating a table in multiple table spaces” on page 190
v “Creating a user-defined temporary table” on page 212
v “Defining a generated column on a new table” on page 219
v “Defining an identity column on a new table” on page 220
v “Defining dimensions on a table” on page 235
v “Defining a unique constraint on a table” on page 223
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “DECLARE GLOBAL TEMPORARY TABLE statement” in SQL Reference, Volume
2
v “INSERT statement” in SQL Reference, Volume 2
v “IMPORT Command” in Command Reference
v “LOAD command” in Command Reference
Defining columns
This section discusses how to define generated and identity columns on a new
table.
Procedure:
When creating a table where it is known that certain expressions or predicates will
be used all the time, you can add one or more generated columns to that table. By
using a generated column there is opportunity for performance improvements
when querying the table data.
For example, there are two ways in which the evaluation of expressions can be
costly when performance is important:
1. The evaluation of the expression must be done many times during a query.
2. The computation is complex.
To improve the performance of the query, you can define an additional column
that contains the results of the expression. Then, when issuing a query that
includes the same expression, the generated column can be used directly, or the
query rewrite component of the optimizer can replace the expression with the
generated column.
Where queries involve the joining of data from two or more tables, the addition of
a generated column can allow the optimizer a choice of possibly better join
strategies.
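As a sketch, such a table might be defined as follows; the column names and
generated expressions are assumptions chosen to match the examples that follow:
CREATE TABLE t1
   (c1 INT,
    c2 DOUBLE,
    c3 DOUBLE GENERATED ALWAYS AS (c1 + c2),
    c4 GENERATED ALWAYS AS
       (CASE WHEN c1 > c2 THEN 1 ELSE NULL END))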
After creating this table, indexes can be created using the generated columns. For
example:
CREATE INDEX i1 ON t1(c4)
Queries can then take advantage of the generated columns. For example:
SELECT COUNT(*) FROM t1 WHERE c1 > c2
can be written as:
SELECT COUNT(*) FROM t1 WHERE c4 IS NOT NULL
Another example:
SELECT c1 + c2 FROM t1 WHERE (c1 + c2) * c1 > 100
can be written as:
SELECT c3 FROM t1 WHERE c3 * c1 > 100
Related tasks:
v “Defining a generated column on an existing table” on page 321
Related reference:
v “CREATE INDEX statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “SELECT statement” in SQL Reference, Volume 2
v “Restrictions on native XML data store” in XML Guide
Restrictions:
Once created, you cannot alter the table description to include an identity column.
If rows are inserted into a table with explicit identity column values specified, the
next internally generated value is not updated, and might conflict with existing
values in the table. Duplicate values generate an error message if the uniqueness
of the values in the identity column is enforced by a primary key or a unique
index that has been defined on the identity column.
Procedure:
To define an identity column on a new table, use the AS IDENTITY clause on the
CREATE TABLE statement.
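For example (a sketch; the table and column names are illustrative):
CREATE TABLE T1
   (COL1 INT,
    COL2 DOUBLE,
    COL3 INT NOT NULL
         GENERATED ALWAYS AS IDENTITY
         (START WITH 100, INCREMENT BY 5))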
In this example, the third column is the identity column. You can also specify the
value used in the column to uniquely identify each row when it is added. Here
the first row entered has the value 100 placed in the column; every subsequent
row added to the table has the associated value increased by five.
Related concepts:
v “Comparing IDENTITY columns and sequences” on page 235
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
Note: When moving LOBs, small LOBs are stored in the application heap and
large LOBs are stored in temporary tables within a 4 KB page size
temporary table space.
On platforms where sparse file allocation is not supported and where LOBs are
placed in SMS table spaces, consider using the COMPACT clause. Sparse file
allocation has to do with how physical disk space is used by an operating
system. An operating system that supports sparse file allocation does not use as
much physical disk space to store LOBs as compared to an operating system
not supporting sparse file allocation. The COMPACT option allows for even
greater physical disk space “savings” regardless of the support of sparse file
allocation. Because you can get some physical disk space savings when using
COMPACT, you should consider using COMPACT if your operating system
does not support sparse file allocation.
Note: In DB2 Version 8 and later, system catalogs use LOB columns and
might take up more space than in previous versions.
3. Do you want better performance for LOB columns, including those LOB
columns in the system catalogs?
Related concepts:
v “Space requirements for large object data” in Administration Guide: Planning
Related reference:
v “Large objects (LOBs)” in SQL Reference, Volume 1
v “CREATE TABLE statement” in SQL Reference, Volume 2
Restrictions:
Procedure:
You define a unique constraint with the UNIQUE clause in the CREATE TABLE or
ALTER TABLE statements. The unique key can consist of more than one column.
More than one unique constraint is allowed on a table.
You can take any one unique constraint and use it as the primary key. The primary
key can be used as the parent key in a referential constraint (along with other
unique constraints). You define a primary key with the PRIMARY KEY clause in
the CREATE TABLE or ALTER TABLE statement. The primary key can consist of
more than one column.
A primary index forces the value of the primary key to be unique. When a table is
created with a primary key, the database manager creates a primary index on that
key.
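For example, the following sketch (the table, column, and constraint names are
illustrative) defines both a primary key and an additional unique constraint:
CREATE TABLE PROJECT
   (PROJNO   CHAR(6)     NOT NULL,
    PROJNAME VARCHAR(24) NOT NULL,
    DEPTNO   CHAR(3)     NOT NULL,
    PRIMARY KEY (PROJNO),
    CONSTRAINT PROJNAME_UQ UNIQUE (PROJNAME))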
Related concepts:
v “Constraints” in SQL Reference, Volume 1
v “Keys” in SQL Reference, Volume 1
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
Procedure:
Referential constraints are established with the FOREIGN KEY clause and the
REFERENCES clause in the CREATE TABLE or ALTER TABLE statements. A
referential constraint that is defined on a typed table, or whose parent table is a
typed table, has effects that you should consider before creating the constraint.
The identification of foreign keys enforces constraints on the values within the
rows of a table or between the rows of two tables. The database manager checks
the constraints specified in a table definition and maintains the relationships
accordingly. The goal is to maintain integrity whenever one database object
references another.
For example, primary and foreign keys each have a department number column.
For the EMPLOYEE table, the column name is WORKDEPT, and for the
DEPARTMENT table, the name is DEPTNO. The relationship between these two
tables is defined by the following constraints:
v There is only one department number for each employee in the EMPLOYEE
table, and that number exists in the DEPARTMENT table.
v Each row in the EMPLOYEE table is related to no more than one row in the
DEPARTMENT table. There is a unique relationship between the tables.
By specifying the DEPTNO column as the primary key of the DEPARTMENT table
and WORKDEPT as the foreign key of the EMPLOYEE table, you are defining a
referential constraint on the WORKDEPT values. This constraint enforces
referential integrity between the values of the two tables. In this case, any
employees that are added to the EMPLOYEE table must have a department
number that can be found in the DEPARTMENT table.
The delete rule for the referential constraint in the employee table is NO ACTION,
which means that a department cannot be deleted from the DEPARTMENT table if
there are any employees in that department.
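As a sketch, table definitions consistent with this example might look as follows;
the column lengths and the additional columns are assumptions:
CREATE TABLE DEPARTMENT
   (DEPTNO   CHAR(3)     NOT NULL,
    DEPTNAME VARCHAR(29) NOT NULL,
    MGRNO    CHAR(6),
    PRIMARY KEY (DEPTNO))

CREATE TABLE EMPLOYEE
   (EMPNO    CHAR(6)     NOT NULL PRIMARY KEY,
    LASTNAME VARCHAR(15) NOT NULL,
    WORKDEPT CHAR(3),
    FOREIGN KEY (WORKDEPT)
      REFERENCES DEPARTMENT (DEPTNO)
      ON DELETE NO ACTION)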
Although the previous examples use the CREATE TABLE statement to add a
referential constraint, the ALTER TABLE statement can also be used.
Another example: The same table definitions are used as those in the previous
example. Also, the DEPARTMENT table is created before the EMPLOYEE table.
Each department has a manager, and that manager is listed in the EMPLOYEE
table. MGRNO of the DEPARTMENT table is actually a foreign key of the
EMPLOYEE table. Because of this referential cycle, this constraint poses a slight
problem: one solution is to create the DEPARTMENT table without the foreign
key and add it later with the ALTER TABLE statement, after the EMPLOYEE
table exists. You could also use the CREATE SCHEMA statement to create both
the EMPLOYEE and DEPARTMENT tables at the same time.
Related concepts:
v “Foreign keys in a referential constraint” on page 226
v “REFERENCES clause in a referential constraint” on page 227
Related tasks:
v “Adding foreign keys” on page 310
The number of columns in the foreign key must be equal to the number of
columns in the corresponding primary or unique constraint (called a parent key) of
the parent table. In addition, corresponding parts of the key column definitions
must have the same data types and lengths. The foreign key can be assigned a
constraint name. If you do not assign a name, one is automatically assigned. For
ease of use, it is recommended that you assign a constraint name and do not use the
system-generated name.
The value of a composite foreign key matches the value of a parent key if the
value of each column of the foreign key is equal to the value of the corresponding
column of the parent key. A foreign key containing null values cannot match the
values of a parent key, since a parent key by definition can have no null values.
However, a null foreign key value is always valid, regardless of the value of any of
its non-null parts.
Related tasks:
v “Defining a unique constraint on a table” on page 223
v “Defining referential constraints on tables” on page 224
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
Included in the REFERENCES clause is the delete rule. In this example, the ON
DELETE NO ACTION rule is used, which states that no department can be deleted
if there are employees assigned to it. Other delete rules include ON DELETE
CASCADE, ON DELETE SET NULL, and ON DELETE RESTRICT.
Related concepts:
v “Foreign keys in a referential constraint” on page 226
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
Related concepts:
v “Checking for integrity violations following a load operation” in Data Movement
Utilities Guide and Reference
v “Import Overview” in Data Movement Utilities Guide and Reference
v “Load overview” in Data Movement Utilities Guide and Reference
Procedure:
A constraint name cannot be the same as any other constraint specified within the
same CREATE TABLE statement. If you do not specify a constraint name, the
system generates an 18-character unique identifier for the constraint.
A table check constraint is used to enforce data integrity rules not covered by key
uniqueness or a referential integrity constraint. In some cases, a table check
constraint can be used to implement domain checking. The following constraint
issued on the CREATE TABLE statement ensures that the start date for every
activity is not after the end date for the same activity:
CREATE TABLE EMP_ACT
(EMPNO CHAR(6) NOT NULL,
PROJNO CHAR(6) NOT NULL,
ACTNO SMALLINT NOT NULL,
EMPTIME DECIMAL(5,2),
EMSTDATE DATE,
EMENDATE DATE,
CONSTRAINT ACTDATES CHECK(EMSTDATE <= EMENDATE) )
IN RESOURCE
Although the previous example uses the CREATE TABLE statement to add a table
check constraint, the ALTER TABLE statement can also be used.
Related concepts:
v “Constraints” in SQL Reference, Volume 1
Related tasks:
v “Adding a table check constraint” on page 314
v “Checking for constraint violations using SET INTEGRITY” on page 230
v “Making a table in no data movement mode fully accessible” on page 238
Related reference:
v “ALTER SERVER statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
During table alteration, distribution keys can only be defined if the table
resides in a single-partition database partition group.
Procedure:
1. Open the Alter Table notebook: From the Control Center, expand the object tree until
you find the Tables folder. Click the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table you want and select
Alter from the pop-up menu. The Alter Table notebook opens.
2. On the Keys page, click Add Partitioning. The Define Partitioning Key window opens.
3. Select the columns that you want to add as distribution key columns and move them to
the Selected columns box.
Related concepts:
v “Table partitioning keys” in Administration Guide: Planning
Prerequisites:
To add check constraints, you must have at least one of the following privileges on
the table to be altered:
v ALTER privilege
v CONTROL privilege
v SYSADM or DBADM authority
v ALTERIN privilege on the schema of the table
Procedure:
1. Open the Alter Table notebook if you are adding a unique key to a table: From the
Control Center, expand the object tree until you find the Tables folder. Click the Tables
folder. Any existing tables are displayed in the pane on the right side of the window.
Right-click the table you want in the contents pane and select Alter from the pop-up
menu. The Alter Table notebook opens.
If you are adding check constraints on a nickname, open the Alter Nickname notebook.
2. On the Check Constraints page, click Add. The Add Check Constraint window opens.
3. For as many check constraints as you are adding: Specify the check condition for the
constraint that you are defining, type a name for the check constraint, and optionally
type a comment to document the new check constraint.
To add a check constraint using the command line, use the ALTER TABLE
statement.
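For example, a statement along these lines (the constraint name and condition are
illustrative) adds a check constraint to the EMP_ACT table shown earlier:
ALTER TABLE EMP_ACT
   ADD CONSTRAINT EMPTIME_CK CHECK (EMPTIME <= 1.00)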
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
The load operation causes a table to be put into Set Integrity Pending state
automatically if the table has constraints defined on it or if it has dependent
foreign key tables, dependent materialized query tables, or dependent staging
tables. When the load operation is completed, you can verify the integrity of the
loaded data and you can turn on constraint checking for the table. If the table has
dependent foreign key tables, dependent materialized query tables, or dependent
staging tables, they will be automatically put into Set Integrity Pending state. You
will need to use the Set Integrity window to perform separate integrity processing
on each of these tables.
Prerequisites:
v To turn on constraint checking for a table and perform integrity processing
on the table, you need one of the following:
– SYSADM or DBADM authority
– CONTROL privileges on the tables being checked, and if exceptions are being
posted to one or more tables, INSERT privilege on the exception tables
Procedure:
1. Open the Set Integrity window: From the Control Center, expand the object tree until
you find the Tables folder. Click on the Tables folder. Any existing tables are displayed
in the pane on the right side of the window. Right-click the table you want and select
Set Integrity from the pop-up menu. The Set Integrity window opens.
2. Review the Current Integrity Status of the table you are working with.
3. To turn on constraint checking for a table and not check the table data:
a. Select the Immediate and unchecked radio button.
b. Specify the type of integrity processing that you are turning on.
c. Select the Full Access radio button to immediately perform data movement
operations against the table (such as reorganize or redistribute). However, note that
subsequent refreshes of dependent materialized query tables will take longer. If the
table has an associated materialized query table, it is recommended that you do not
select this radio button in order to reduce the time needed to refresh the
materialized query table.
4. To turn on constraint checking for a table and check the existing table data:
a. Select the Immediate and checked radio button.
b. Select which type of integrity processing that you want to perform. If the Current
integrity status shows that the constraints checked value for the materialized query
table is incomplete, you cannot incrementally refresh the materialized query table.
c. Optional: If you want identity or generated columns to be populated during
integrity processing, select the Force generated check box.
d. If the table is not a staging table, make sure that the Prune check box is unchecked.
e. Select the Full Access radio button to immediately perform data movement
operations against the table.
f. Optional: Specify an exception table. Any row that is in violation of a referential or
check constraint will be deleted from your table and copied to the exception table. If
you do not specify an exception table, when a constraint is violated, only the first
violation detected is returned to you and the table is left in the Set Integrity Pending
state.
5. To turn off constraint checking, immediate refreshing, or immediate propagation for a
table:
a. Select the Off radio button. The table will be put in Set Integrity Pending state.
b. Use the Cascade option to specify whether you want to cascade immediately or
defer cascading. If you are cascading immediately, use the Materialized Query
Tables, Foreign Key Tables, and Staging Tables check boxes to indicate the tables
to which you want to cascade.
Note: If you turn off constraint checking for a parent table and specify that you
want to cascade the changes to foreign key tables, the foreign key constraints of all
of its descendent foreign key tables are also turned off. If you turn off constraint
checking for an underlying table and specify that you want to cascade the check
pending state to materialized query tables, the refresh immediate properties of all its
dependent materialized query tables are also turned off. If you turn off constraint
checking for an underlying table and specify that you want to cascade the Set
Integrity Pending state to staging tables, the propagate immediate properties of all
its dependent staging tables are also turned off.
To check for constraint violations using the command line, use the SET
INTEGRITY statement.
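For example (the table names are illustrative), the following statement checks the
data in EMP_ACT and routes any violating rows to an exception table:
SET INTEGRITY FOR EMP_ACT IMMEDIATE CHECKED
   FOR EXCEPTION IN EMP_ACT USE EMP_ACT_EX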
Related tasks:
v “Adding check constraints” on page 229
v “Changing check constraints” on page 314
Related reference:
v “C samples” in Samples Topics
v “Command Line Processor (CLP) samples” in Samples Topics
v “JDBC samples” in Samples Topics
v “SET INTEGRITY statement” in SQL Reference, Volume 2
v “SQLJ samples” in Samples Topics
Related concepts:
v “Constraints” in SQL Reference, Volume 1
v “Query rewriting methods and examples” in Performance Guide
v “The SQL and XQuery compiler process” in Performance Guide
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
Creating a sequence
A sequence is a database object that allows the automatic generation of values.
Sequences are ideally suited to the task of generating unique key values.
Applications can use sequences to avoid possible concurrency and performance
problems resulting from the generation of a unique counter outside the database.
Restrictions:
Procedure:
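For example, a sequence definition matching the description below (reconstructed
as a sketch from that description):
CREATE SEQUENCE order_seq
   START WITH 1
   INCREMENT BY 1
   NOMAXVALUE
   NOCYCLE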
In this example, the sequence is called order_seq. It starts at 1 and increases by 1
with no upper limit. Because no maximum value is assigned, there is no reason to
cycle back to the beginning and restart.
Related concepts:
v “Comparing IDENTITY columns and sequences” on page 235
v “Sequences” on page 461
Related reference:
v “CREATE SEQUENCE statement” in SQL Reference, Volume 2
Although these are not all of the characteristics of identity columns and sequences,
these characteristics will assist you in determining which to use, depending on
your database design and the applications that use the database.
Related tasks:
v “Creating a sequence” on page 234
v “Defining a generated column on a new table” on page 219
v “Defining a generated column on an existing table” on page 321
Restrictions:
The set of columns used in the ORGANIZE BY [DIMENSIONS] clause must follow
the rules for the CREATE INDEX statement. The columns are treated as keys used
to maintain the physical order of data in storage.
Procedure:
1. Open the Create Table wizard: From the Control Center, expand the object tree until
you see the Tables folder. Right-click the Tables folder and select Create from the
pop-up menu. The Create Table wizard opens.
2. On the Dimensions page, click Add. The Dimension window opens.
3. In the Available columns box, select the columns that you want in the column group
of the dimension and click the > push button to move the column or columns to the
Selected columns box.
4. Click Apply to add a dimension to the Dimensions list on the Dimension page.
To define dimensions using the command line, specify each dimension in the
CREATE TABLE statement using the ORGANIZE BY [DIMENSIONS] clause and
one or more columns. Parentheses are used within the dimension list to group
columns to be associated with a single dimension.
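For example, the following sketch (table and column names are illustrative)
creates a table clustered along two dimensions:
CREATE TABLE SALES
   (CUSTOMER VARCHAR(80),
    REGION   CHAR(5),
    YEAR     INT)
ORGANIZE BY DIMENSIONS (REGION, YEAR)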
Although a table with a single clustering index can become unclustered over time
as space in the table is filled in, a table with multiple dimensions is able to
maintain its clustering over all dimensions automatically and continuously. As a
result, there is no need to reorganize the table in order to restore sequential order
to the data.
A dimension block index is automatically created for each dimension specified. The
dimension block index is used to access data along a dimension. The dimension
block index points to extents instead of individual rows, and so is much smaller
than a regular index. These dimension block indexes can be used to very quickly
access only those extents of the table that contain particular dimension values.
Note: The order of key parts in the composite block index might affect its use or
applicability for query processing. The order of its key parts is determined
by the order of the columns in the ORGANIZE BY [DIMENSIONS] clause
specified when creating the table.
A composite block index is not created in the case where a specified dimension
already contains all the columns that the composite block index would have. For
example, a composite block index would not be created for the following table:
CREATE TABLE t1 (c1 int, c2 int)
ORGANIZE BY DIMENSIONS (c1,(c2,c1))
Related concepts:
v “Multidimensional clustering tables” in Administration Guide: Planning
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
If you want to use exception tables with the load, you must create the exception
tables before running the load task.
Note: If the table you are working with is a replication source, the changes made
will not be captured in replication.
Prerequisites:
To load data into a table, you must have one of the following authorities:
v SYSADM authority
v DBADM authority
v LOAD authority on the database and:
– INSERT privilege on the table if you load data in INSERT mode,
TERMINATE mode (to terminate a previous load operation), or RESTART
mode (to restart a previous load insert operation)
– INSERT and DELETE privilege on the table if you load data in REPLACE
mode, TERMINATE mode (to terminate a previous load replace operation), or
RESTART mode (to restart a previous load replace operation)
Note: Because all load processes (and all DB2 server processes in general) are
owned by the instance owner, and all these processes use the
identification of the instance owner to access the required files, the
instance owner must have read access to the input data files,
regardless of who performs the load operation.
Procedure:
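On the command line, a load operation might look like the following sketch (the
file, table, and exception table names are assumptions):
LOAD FROM staff.del OF DEL
   INSERT INTO STAFF
   FOR EXCEPTION STAFFEXC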
Related concepts:
v “LOAD authority” on page 511
v “Load considerations for MDC tables” in Administration Guide: Planning
Related tasks:
v “Enabling parallelism for loading data” on page 10
Related reference:
v “LOAD command” in Command Reference
Note: Use these steps only if the table you are working with is in no data
movement mode. If the table you are working with is in Set Integrity
Pending state, you must turn on constraint checking. See Checking for
constraint violations using SET INTEGRITY.
Prerequisites:
To bring a table from no data movement mode to full access mode, you need the
following authorities:
v SYSADM or DBADM authority
v CONTROL privilege on the tables that are moving from no data movement to
full access
Procedure:
To make a table in no data movement mode fully accessible using the Control
Center:
To make a table in no data movement mode fully accessible using the command
line, use the SET INTEGRITY statement.
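For example, a sketch assuming a table named EMPLOYEE in no data movement
mode:
SET INTEGRITY FOR EMPLOYEE FULL ACCESS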
Related concepts:
v “Constraints” in SQL Reference, Volume 1
Related tasks:
v “Adding a table check constraint” on page 314
v “Checking for constraint violations using SET INTEGRITY” on page 230
v “Defining a table check constraint” on page 228
Related reference:
v “SET INTEGRITY statement” in SQL Reference, Volume 2
Quiescing tables
You can change the quiesce mode of a table and its table spaces. When you quiesce
a table and its table spaces, locks are placed on the table and table spaces. The type
of lock depends on the quiesce mode.
Prerequisites:
To change the quiesce mode of a table, you must have one of the following
authorities: SYSADM, SYSCTRL, SYSMAINT, DBADM, or LOAD.
Procedure:
1. Open the Quiesce window: From the Control Center, expand the object tree until you
find the Tables folder. Click on the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table you want and select
Quiesce from the pop-up menu. The Quiesce window opens.
2. If you are turning on the quiesce mode or updating the quiesce mode to a higher
mode:
a. Make sure that the Quiesce radio button is selected.
b. Select one of the following three modes:
Shared Puts the table in shared mode. In this mode, all users (yourself included)
can read but not change the table data.
Intent to update
Puts the table in update mode. In this mode, only you can update the table
data. Other users can read, but not update the data.
Exclusive
Puts the table in exclusive mode. In this mode, only you can read or
update the table data.
If the table is already in one quiesce mode, you can change it to a higher
(more exclusive) mode. For example, if the table is already in shared mode,
you can change it to intent to update, or to exclusive mode.
However, you cannot change a higher mode to a lower mode. Exclusive is
higher than intent to update, which is higher than shared.
3. If you are resetting a table’s quiesce mode, select the Quiesce reset radio button.
To change the quiesce mode for a table using the command line, use the QUIESCE
command.
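For example, to put a table and its table spaces into shared mode from the
command line (the table name is illustrative):
QUIESCE TABLESPACES FOR TABLE EMPLOYEE SHARE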
Related reference:
v “QUIESCE command” in Command Reference
v “QUIESCE TABLESPACES FOR TABLE command” in Command Reference
Defining triggers
This section discusses trigger creation and dependencies.
Creating triggers
A trigger defines a set of actions that are executed in conjunction with, or triggered
by, an INSERT, UPDATE, or DELETE clause on a specified base table or a typed
table. Some uses of triggers are to:
v Validate input data
v Generate a value for a newly-inserted row
v Read from other tables for cross-referencing purposes
v Write to other tables for audit-trail purposes
You can use triggers to support general forms of integrity or business rules. For
example, a trigger can check a customer’s credit limit before an order is accepted
or update a summary data table.
Restrictions:
If the trigger is a BEFORE trigger, the column name specified by the triggered
action must not be a generated column other than an identity column. That is, the
generated identity value is visible to BEFORE triggers.
When creating an atomic trigger, care must be taken with the end-of-statement
character. By default, the database manager considers “;” to be the
end-of-statement marker. In the script that creates the atomic trigger, you should
manually change the end-of-statement character to a character other than “;”. For
example, the “;” could be replaced by another special character such as “#”.
Procedure:
1. Expand the object tree until you see the Triggers folder.
2. Right-click the Triggers folder, and select Create from the pop-up menu.
3. Specify information for the trigger.
4. Specify the action that you want the trigger to invoke, and click OK.
The following SQL statement creates a trigger that increases the number of
employees each time a new person is hired, by adding 1 to the number of
employees (NBEMP) column in the COMPANY_STATS table each time a row is
added to the EMPLOYEE table.
CREATE TRIGGER NEW_HIRED
AFTER INSERT ON EMPLOYEE
FOR EACH ROW
UPDATE COMPANY_STATS SET NBEMP = NBEMP+1;
Related concepts:
v “Trigger dependencies” on page 242
v “Updating view contents using triggers” on page 328
v “INSERT, UPDATE, and DELETE triggers” in Developing SQL and External
Routines
v “Trigger creation guidelines” in Developing SQL and External Routines
v “Triggers in application development” in Developing SQL and External Routines
Related tasks:
v “Dropping a trigger” on page 329
v “Defining actions using triggers” in Developing SQL and External Routines
v “Defining business rules using triggers” in Developing SQL and External Routines
Related reference:
v “CREATE TRIGGER statement” in SQL Reference, Volume 2
v “Restrictions on native XML data store” in XML Guide
Trigger dependencies
All dependencies of a trigger on some other object are recorded in the
SYSCAT.TRIGDEP catalog. A trigger can depend on many objects. These objects
and the dependent trigger are presented in detail in the DROP statement.
If one of these objects is dropped, the trigger becomes inoperative but its definition
is retained in the catalog. To revalidate this trigger, you must retrieve its definition
from the catalog and submit a new CREATE TRIGGER statement.
If the dependent object is a view and it is made inoperative, the trigger is also
marked inoperative. Any packages dependent on triggers that have been marked
inoperative are invalidated.
Related concepts:
v “Updating view contents using triggers” on page 328
Related tasks:
v “Creating triggers” on page 240
v “Dropping a trigger” on page 329
Related reference:
v “CREATE TRIGGER statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
Statistics about the performance of UDFs are important when compiling SQL
statements.
Related concepts:
v “General rules for updating catalog statistics manually” in Performance Guide
v “Statistics for user-defined functions” in Performance Guide
Related tasks:
v “Creating a function mapping in a federated database” on page 244
v “Creating a function template in a federated system” on page 245
Related reference:
v “Functions” in SQL Reference, Volume 1
v “CREATE FUNCTION statement” in SQL Reference, Volume 2
Functions (or function templates) must have the same number of input parameters
as the data source function. Additionally, the data types of the input parameters on
the federated side should be compatible with the data types of the input
parameters on the data source side. These requirements apply to returned values
as well.
Prerequisites:
You must hold one of the SYSADM or DBADM authorities at the federated
database to use this statement. Function mapping attributes are stored in
SYSCAT.FUNCMAPPINGS.
Restrictions:
The federated server will not bind input host variables or retrieve results of LOB,
LONG VARCHAR/VARGRAPHIC, DATALINK, distinct and structured types. No
function mapping can be created when an input parameter or the returned value
includes one of these types.
Procedure:
Related concepts:
v “Host language program mappings with transform functions” in Developing SQL
and External Routines
Related tasks:
v “Creating a function template in a federated system” on page 245
Related reference:
v “CREATE FUNCTION MAPPING statement” in SQL Reference, Volume 2
The template is just a function shell: name, input parameters, and the return value.
There is no local executable for the function.
Restrictions:
Because there is no local executable for the function, it is possible that a call to
the function template will fail even though the function is available at the data
source. For example, consider the query:
SELECT myfunc(C1)
FROM nick1
WHERE C2 < ’A’
If DB2 and the data source containing the object referenced by nick1 do not have
the same collating sequence, the query will fail because the comparison must be
done at DB2 while the function is at the data source. If the collating sequences
were the same, the comparison operation could be done at the data source that has
the underlying function referenced by myfunc.
Functions (or function templates) must have the same number of input parameters
as the data source function. The data types of the input parameters on the
federated side should be compatible with the data types of the input parameters
on the data source side. These requirements apply to returned values as well.
Procedure:
You create function templates using the CREATE FUNCTION statement with the
AS TEMPLATE keyword. After the template is created, you map the template to
the data source using the CREATE FUNCTION MAPPING statement.
For example, to create a function template and a function mapping for function
MYS1FUNC on server S1:
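A sketch of what those statements might look like (the parameter type, mapping
name, and option values are assumptions):
CREATE FUNCTION MYFUNC (INT)
   RETURNS INT
   AS TEMPLATE
   DETERMINISTIC
   NO EXTERNAL ACTION

CREATE FUNCTION MAPPING S1_MYFUNC
   FOR MYFUNC (INT)
   SERVER S1
   OPTIONS (REMOTE_NAME ’MYS1FUNC’)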
Related tasks:
v “Creating a function mapping in a federated database” on page 244
Related reference:
v “CREATE FUNCTION (Sourced or Template) statement” in SQL Reference,
Volume 2
UDTs support strong typing, which means that even though they share the same
representation as other types, values of a given UDT are considered to be
compatible only with values of the same UDT or UDTs in the same type hierarchy.
The SYSCAT.DATATYPES catalog view allows you to see the UDTs that have been
defined for your database. This catalog view also shows you the data types
defined by the database manager when the database was created.
When a UDT is dropped, any functions that are dependent on it are also dropped.
Related concepts:
v “User-defined structured types” on page 248
Related tasks:
v “Creating a user-defined distinct type” on page 247
Related reference:
v “Data types” in SQL Reference, Volume 1
v “User-defined types” in SQL Reference, Volume 1
Restrictions:
Instances of the same distinct type can be compared to each other, if the WITH
COMPARISONS clause is specified on the CREATE DISTINCT TYPE statement (as
in the example). The WITH COMPARISONS clause cannot be specified if the
source data type is a large object, a DATALINK, LONG VARCHAR, or LONG
VARGRAPHIC type.
Procedure:
For example, the following SQL statement creates the distinct type t_educ as a
smallint:
CREATE DISTINCT TYPE T_EDUC AS SMALLINT WITH COMPARISONS
After you have created a distinct type, you can use it to define columns in a
CREATE TABLE statement:
CREATE TABLE EMPLOYEE
(EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
PHOTO BLOB(10M) NOT NULL,
EDLEVEL T_EDUC)
IN RESOURCE
Creating the distinct type also generates support to cast between the distinct type
and the source type. Hence, a value of type T_EDUC can be cast to a SMALLINT
value and the SMALLINT value can be cast to a T_EDUC value.
Related concepts:
v “User-defined types (UDTs)” on page 246
Related reference:
v “CREATE DISTINCT TYPE statement” in SQL Reference, Volume 2
v “CREATE FUNCTION (Sourced or Template) statement” in SQL Reference,
Volume 2
v “Data types” in SQL Reference, Volume 1
Related concepts:
v “Structured type hierarchies” in Developing SQL and External Routines
v “User-defined structured types” in Developing SQL and External Routines
Related tasks:
v “Creating a structured type hierarchy” in Developing SQL and External Routines
v “Creating structured types” in Developing SQL and External Routines
Related reference:
v “User-defined types” in SQL Reference, Volume 1
v “CREATE TYPE (Structured) statement” in SQL Reference, Volume 2
Default data type mappings are provided for built-in data source types and built-in
DB2 types. New data type mappings (that you create) will be listed in the
SYSCAT.TYPEMAPPINGS view.
Restrictions:
Procedure:
You create type mappings with the CREATE TYPE MAPPING statement. You must
hold one of the SYSADM or DBADM authorities at the federated database to use
this statement.
Related reference:
v “CREATE TYPE MAPPING statement” in SQL Reference, Volume 2
v “Data type mappings between DB2 and OLE DB” in Developing ADO.NET and
OLE DB Applications
Related tasks:
v “Creating a user-defined distinct type” on page 247
Related reference:
v “Length limits for source data types” on page 250
Related tasks:
v “Creating a user-defined distinct type” on page 247
Related reference:
v “CREATE DISTINCT TYPE statement” in SQL Reference, Volume 2
v “Source data types” on page 249
Creating a view
Views are derived from one or more base tables, nicknames, or views, and can be
used interchangeably with base tables when retrieving data. When changes are
made to the data shown in a view, the data is changed in the table itself.
A view can be created to limit access to sensitive data, while allowing more
general access to other data.
When inserting into a view where the SELECT-list of the view definition directly
or indirectly includes the name of an identity column of a base table, the same
rules apply as if the INSERT statement directly referenced the identity column of
the base table.
In addition to using views as described above, a view can also be used to:
v Alter a table without affecting application programs. This can happen by
creating a view based on an underlying table. Applications that use the
underlying table are not affected by the creation of the new view. New
applications can use the created view for different purposes than those
applications that use the underlying table.
v Sum the values in a column, select the maximum values, or average the values.
v Provide access to information in one or more data sources. You can reference
nicknames within the CREATE VIEW statement and create multi-location/global
views (the view could join information in multiple data sources located on
different systems).
When you create a view that references nicknames using standard CREATE
VIEW syntax, you will see a warning alerting you to the fact that the
authentication ID of view users will be used to access the underlying object or
objects at data sources instead of the view creator authentication ID. Use the
FEDERATED keyword to suppress this warning.
A typed view is based on a predefined structured type. You can create a typed
view using the CREATE VIEW statement.
Prerequisites:
The base table, nickname, or view on which the view is to be based must already
exist before the view can be created.
Restrictions:
You can create a view that uses a UDF in its definition. However, to update this
view so that it contains the latest functions, you must drop it and then re-create it.
If a view is dependent on a UDF, that function cannot be dropped.
The following SQL statement creates a view with a function in its definition:
CREATE VIEW EMPLOYEE_PENSION (NAME, PENSION)
AS SELECT NAME, PENSION(HIREDATE,BIRTHDATE,SALARY,BONUS)
FROM EMPLOYEE
The UDF function PENSION calculates the current pension an employee is eligible
to receive, based on a formula involving their HIREDATE, BIRTHDATE, SALARY,
and BONUS.
Procedure:
1. Expand the object tree until you see the Views folder.
2. Right-click the Views folder, and select Create from the pop-up menu.
3. Complete the information, and click OK.
For example, the EMPLOYEE table might have salary information in it, which
should not be made available to everyone. The employee’s phone number,
however, should be generally accessible. In this case, a view could be created from
the LASTNAME and PHONENO columns only. Access to the view could be
granted to PUBLIC, while access to the entire EMPLOYEE table could be restricted
to those who have the authorization to see salary information.
With a view, you can make a subset of table data available to an application
program and validate data that is to be inserted or updated. A view can have
column names that are different from the names of corresponding columns in the
original tables.
The use of views provides flexibility in the way your programs and end-user
queries can look at the table data.
The following SQL statement creates a view on the EMPLOYEE table that lists all
employees in Department A00 with their employee and telephone numbers:
CREATE VIEW EMP_VIEW (DA00NAME, DA00NUM, PHONENO)
AS SELECT LASTNAME, EMPNO, PHONENO FROM EMPLOYEE
WHERE WORKDEPT = ’A00’
WITH CHECK OPTION
The first line of this statement names the view and defines its columns. The name
EMP_VIEW must be unique within its schema in SYSCAT.TABLES. The view name
appears as a table name although it contains no data. The view will have three
columns called DA00NAME, DA00NUM, and PHONENO, which correspond to
the columns LASTNAME, EMPNO, and PHONENO from the EMPLOYEE table.
The column names listed apply one-to-one to the select list of the SELECT
statement. If column names are not specified, the view uses the same names as the
columns of the result table of the SELECT statement.
The second line is a SELECT statement that describes which values are to be
selected from the database. It might include the clauses ALL, DISTINCT, FROM,
WHERE, GROUP BY, and HAVING. The name or names of the data objects from
which to select columns for the view must follow the FROM clause.
The WITH CHECK OPTION clause indicates that any updated or inserted row to
the view must be checked against the view definition, and rejected if it does not
conform. This enhances data integrity but requires additional processing. If this
clause is omitted, inserts and updates are not checked against the view definition.
The following SQL statement creates the same view on the EMPLOYEE table using
the SELECT AS clause:
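That statement might look like the following sketch, reconstructed to produce the
same view as the earlier definition:
CREATE VIEW EMP_VIEW
   AS SELECT LASTNAME AS DA00NAME,
             EMPNO AS DA00NUM,
             PHONENO
      FROM EMPLOYEE
      WHERE WORKDEPT = ’A00’
      WITH CHECK OPTION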
Related concepts:
v “Views” in SQL Reference, Volume 1
v “Controlling access to data with views” on page 525
v “Table and view privileges” on page 515
v “Updating view contents using triggers” on page 328
Related tasks:
v “Altering or dropping a view” on page 330
v “Recovering inoperative views” on page 331
v “Removing rows from a table or view” on page 307
v “Creating typed views” in Developing SQL and External Routines
Related reference:
v “CREATE VIEW statement” in SQL Reference, Volume 2
v “INSERT statement” in SQL Reference, Volume 2
Creating an alias
An alias is an indirect method of referencing a table, nickname, or view, so that an
SQL or XQuery statement can be independent of the qualified name of that table
or view. Only the alias definition must be changed if the table or view name
changes. An alias can be created on another alias. An alias can be used in a view
or trigger definition and in any SQL or XQuery statement, except for table
check-constraint definitions, in which an existing table or view name can be
referenced.
Prerequisites:
An alias can be defined for a table, view, or alias that does not exist at the time of
definition. However, it must exist when the SQL or XQuery statement containing
the alias is compiled.
Restrictions:
An alias name can be used wherever an existing table name can be used, and can
refer to another alias if no circular or repetitive references are made along the
chain of aliases.
The alias name cannot be the same as an existing table, view, or alias, and can only
refer to a table within the same database. The name of a table or view used in a
CREATE TABLE or CREATE VIEW statement cannot be the same as an alias name
in the same schema.
You do not require special authority to create an alias, unless the alias is in a
schema other than the one owned by your current authorization ID, in which case
DBADM authority is required.
Procedure:
1. Expand the object tree until you see the Aliases folder.
2. Right-click the Aliases folder, and select Create from the pop-up menu.
3. Complete the information, and click OK.
The alias is replaced at statement compilation time by the table or view name. If
the alias or alias chain cannot be resolved to a table or view name, an error results.
For example, if WORKERS is an alias for EMPLOYEE, then at compilation time:
SELECT * FROM WORKERS
becomes in effect
SELECT * FROM EMPLOYEE
The following SQL statement creates an alias WORKERS for the EMPLOYEE table:
CREATE ALIAS WORKERS FOR EMPLOYEE
Note: DB2 for OS/390 or z/Series employs two distinct concepts of aliases: ALIAS
and SYNONYM. These two concepts differ from DB2 database as follows:
v ALIASes in DB2 for OS/390 or z/Series:
– Require their creator to have special authority or privilege
– Cannot reference other aliases.
v SYNONYMs in DB2 for OS/390 or z/Series:
– Can only be used by their creator
– Are always unqualified
– Are dropped when a referenced table is dropped
– Do not share namespace with tables or views.
Related concepts:
v “Aliases” in SQL Reference, Volume 1
Related reference:
v “CREATE ALIAS statement” in SQL Reference, Volume 2
Creating indexes
You can work with the indexes maintained by the database manager, or you can
specify your own index.
Procedure:
Performance Tip: If you are going to carry out the following series of tasks:
1. Create Table
2. Load Table
3. Create Index (without the COLLECT STATISTICS option)
4. Perform RUNSTATS
Or, if you are going to carry out the following series of tasks:
1. Create Table
2. Load Table
3. Create Index (with the COLLECT STATISTICS option)
then you should consider ordering the execution of tasks in the following way:
1. Create the table
2. Create the index
3. Load the table with the statistics yes option requested.
Indexes are maintained after they are created. Subsequently, when application
programs use a key value to randomly access and process rows in a table, the
index based on that key value can be used to access rows directly. This is
important, because the physical storage of rows in a base table is not ordered.
When a row is inserted, unless there is a clustering index defined, the row is
placed in the most convenient storage location that can accommodate it. When
searching for rows of a table that meet a particular selection condition and the
table has no indexes, the entire table is scanned. An index optimizes data retrieval
without performing a lengthy sequential search.
The data for your indexes can be stored in the same table space as your table data,
or in a separate table space containing index data. The table space used to store the
index data is determined when the table is created, or for partitioned tables, the
index location can be overridden using the IN clause of the CREATE INDEX
statement. This allows different table spaces to be specified for different indexes, as
required.
1. Expand the object tree until you see the Indexes folder.
2. Right-click the Indexes folder, and select Create —> Index Using Wizard from the
pop-up menu.
3. Follow the steps in the wizard to complete your task.
Related concepts:
v “Optimizing load performance” in Data Movement Utilities Guide and Reference
v “Understanding index behavior on partitioned tables” in Performance Guide
v “Index cleanup and maintenance” in Performance Guide
v “Relational index performance tips” in Performance Guide
v “Relational index planning tips” in Performance Guide
v “Index privileges” on page 518
v “Options on the CREATE INDEX statement” on page 261
v “Using an index” on page 260
Related tasks:
v “Dropping an index, index extension, or an index specification” on page 327
v “Renaming an existing table or index” on page 326
Related reference:
v “CREATE INDEX statement” in SQL Reference, Volume 2
An index extension is an index object for use with indexes that have structured
type or distinct type columns.
The DB2 Index Advisor is a wizard that assists you in choosing an optimal set of
indexes. You can access this wizard through the Control Center. The comparable
utility is called db2advis.
An index is defined by columns in the base table. It can be defined by the creator
of a table, or by a user who knows that certain columns require direct access. A
primary key index is automatically created on the primary key, unless a
user-defined index already exists.
Any number of indexes can be defined on a particular base table, and they can
have a beneficial effect on the performance of queries. However, the more indexes
there are, the more work the database manager must do to keep them up to date
during update, delete, and insert operations. Creating a large number of indexes
for a table that receives many updates can slow down processing of requests.
Similarly, large index keys can also slow down processing of requests. Therefore,
use indexes only where a clear advantage for frequent access exists.
The maximum number of columns in an index is 64. If you are indexing a typed
table, the maximum number of columns is 63. The maximum length of an index
key depends on the index page size; see “SQL and XQuery limits” in SQL
Reference, Volume 1 for the exact limits.
If the table being indexed is empty, an index is still created, but no index entries
are made until the table is loaded or rows are inserted. If the table is not empty,
the database manager makes the index entries while processing the CREATE
INDEX statement.
For a clustering index, new rows are inserted physically close to existing rows with
similar key values. This yields a performance benefit during queries because it
results in a more linear access pattern to data pages and more effective
pre-fetching.
If you want a primary key index to be a clustering index, a primary key should
not be specified at CREATE TABLE. Once a primary key is created, the associated
index cannot be modified. Instead, perform a CREATE TABLE without a primary
key clause. Then issue a CREATE INDEX statement, specifying clustering
attributes. Finally, use the ALTER TABLE statement to add a primary key that
corresponds to the index just created. This index will be used as the primary key
index.
Column data which is not part of the unique index key but which is to be
stored/maintained in the index is called an include column. Include columns can be
specified for unique indexes only. When creating an index with include columns,
only the unique key columns are sorted and considered for uniqueness. Use of
include columns improves the performance of data retrieval when index access is
involved.
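For example, the following sketch (the index and column names are illustrative)
creates a unique index on EMPNO and stores LASTNAME in the index as an
include column:
CREATE UNIQUE INDEX EMP_UX ON EMPLOYEE (EMPNO) INCLUDE (LASTNAME)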
The database manager uses a B+ tree structure for storing indexes where the
bottom level consists of leaf nodes. The leaf nodes or pages are where the actual
index key values are stored. When creating an index, you can enable those index
leaf pages to be merged online. Online index defragmentation is used to prevent
the situation where, after much delete and update activity, many leaf pages of an
index have only a few index keys left on them. In such a situation, and without
online index defragmentation, space could only be reclaimed by a reorganization of
the data with or without including the index. When deciding whether to create an
index with the ability to defragment index pages online, you should consider this
question: Is the added performance cost of checking for space to merge each time a
key is physically removed from a leaf page, plus the actual cost of completing the
merge when there is enough space, greater than the benefits of better space
utilization for the index and a reduced need to perform a reorganization to reclaim
space?
Indexes for tables in a partitioned database environment are built using the same
CREATE INDEX statement. Data in the indexes is distributed based on the
distribution key of the table. When this is done, a B+ tree is created on each
database partition in the database partition group. Each B+ tree indexes the part of
the table belonging to that database partition. Columns in a unique index defined
on a multi-partition database must be a superset of the columns in the distribution
key.
Related concepts:
v “User-defined extended index types” on page 265
v “Index privileges” on page 518
v “Using an index” on page 260
v “Options on the CREATE INDEX statement” on page 261
v “Indexes” in SQL Reference, Volume 1
Related tasks:
v “Enabling parallelism when creating indexes” on page 10
v “Creating an index” on page 256
v “Dropping an index, index extension, or an index specification” on page 327
v “Renaming an existing table or index” on page 326
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “SQL and XQuery limits” in SQL Reference, Volume 1
v “CREATE INDEX EXTENSION statement” in SQL Reference, Volume 2
v “CREATE INDEX statement” in SQL Reference, Volume 2
Using an index
An index is never directly used by an application program. The decision on
whether to use an index and which of the potentially available indexes to use is
the responsibility of the optimizer.
Related concepts:
v “Data access through index scans” in Performance Guide
v “Relational index planning tips” in Performance Guide
v “Relational index performance tips” in Performance Guide
v “Table and index management for MDC tables” in Performance Guide
v “Table and index management for standard tables” in Performance Guide
The following SQL statement creates a non-unique index called LNAME from the
LASTNAME column on the EMPLOYEE table, sorted in ascending order:
CREATE INDEX LNAME ON EMPLOYEE (LASTNAME ASC)
The following SQL statement creates a unique index on the phone number column:
CREATE UNIQUE INDEX PH ON EMPLOYEE (PHONENO DESC)
A unique index ensures that no duplicate values exist in the indexed column or
columns. The constraint is enforced at the end of the SQL statement that updates
rows or inserts new rows. This type of index cannot be created if the set of one or
more columns already has duplicate values.
The keyword ASC puts the index entries in ascending order by column, while
DESC puts them in descending order by column. The default is ascending order.
You can create a unique index on two columns, one of which is an include column.
The primary key is defined on the column that is not the include column. Both of
them are shown in the catalog as primary keys on the same table. Normally there
is only one primary key per table.
The following SQL statement creates a clustering index called INDEX1 on the
LASTNAME column of the EMPLOYEE table:
CREATE INDEX INDEX1 ON EMPLOYEE (LASTNAME) CLUSTER
To use the internal storage of the database effectively, use clustering indexes with
the PCTFREE parameter associated with the ALTER TABLE statement so that new
data can be inserted on the correct pages. When data is inserted on the correct
pages, clustering order is maintained. Typically, the greater the INSERT activity on
the table, the larger the PCTFREE value (on the table) that will be needed in order
to maintain clustering. Since this index determines the order by which the data is
laid out on physical pages, only one clustering index can be defined for any
particular table.
If the index key values of these new rows are always new high key values, for
example, then the clustering attribute of the table will try to place them at the end
of the table. Having free space in other pages will do little to preserve clustering.
In this case, placing the table in append mode might be a better choice than a
clustering index and altering the table to have a large PCTFREE value. You can
place the table in append mode by issuing: ALTER TABLE table-name APPEND ON.
The above discussion also applies to new “overflow” rows that result from
UPDATEs that increase the size of a row.
A single index created using the ALLOW REVERSE SCANS parameter on the
CREATE INDEX statement can be scanned in a forward or a backward direction.
That is, such indexes support scans in the direction defined when the index was
created and scans in the opposite or reverse direction. The statement could look
something like:
CREATE INDEX iname ON tname (cname DESC) ALLOW REVERSE SCANS
In this case, the index (iname) is formed based on descending values (DESC) in the
given column (cname). By allowing reverse scans, although the index on the
column is defined for scans in descending order, a scan can be done in ascending
order (reverse order). The actual use of the index in both directions is not
controlled by you but by the optimizer when creating and considering access
plans.
The MINPCTUSED clause of the CREATE INDEX statement specifies the threshold
for the minimum amount of used space on an index leaf page. If this clause is
used, online index defragmentation is enabled for this index. Once enabled, the
following considerations are used to determine if an online index defragmentation
takes place: After a key is physically removed from a leaf page of this index and the percentage of used space on the page is less than the specified threshold value, the
neighboring index leaf pages are checked to determine if the keys on the two leaf
pages can be merged into a single index leaf page.
For example, the following SQL statement creates an index on the LASTNAME column with online index defragmentation enabled, using a 20 percent threshold consistent with the description that follows (the index name LASTN is illustrative):
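CREATE INDEX LASTN ON EMPLOYEE (LASTNAME) MINPCTUSED 20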
When a key is physically removed from an index page of this index, if the
remaining keys on the index page take up twenty percent or less space on the
index page, then an attempt is made to delete an index page by merging the keys
of this index page with those of a neighboring index page. If the combined keys
can all fit on a single page, this merge is performed and one of the index pages is
deleted.
The CREATE INDEX statement allows you to create the index while, at the same
time, allowing read and write access to the underlying table and any previously
existing indexes. To restrict access to the table while creating the index, use the
LOCK TABLE statement to lock the table before creating the index. The new index
is created by scanning the underlying table. Any changes made to the table while
the index is being created are logged. Once the new index is created, the changes
are applied to the index. To apply the logged changes more quickly during the
index creation, a separate copy of the changes is maintained in memory buffer
space, which is allocated on demand from the utility heap. This allows the index
creation to process the changes by directly reading from memory first, and reading
through the logs, if necessary, at a much later time. Once all the changes have been
applied to the index, the table is quiesced while the new index is made visible.
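For example, the following sketch builds an index while blocking concurrent changes to the table (SHARE mode still permits readers; the index name is illustrative):
LOCK TABLE EMPLOYEE IN SHARE MODE
CREATE INDEX JOB_IX ON EMPLOYEE (JOB)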
When creating a unique index, ensure that there are no duplicate keys in the table
and that the concurrent inserts during index creation are not going to introduce
duplicate keys. Index creation uses a deferred unique scheme to detect duplicate
keys, and therefore no duplicate keys will be detected until the very end of index
creation, at which point the index creation will fail because of the duplicate keys.
The PCTFREE clause of the CREATE INDEX statement specifies the percentage of
each index page to leave as free space when the index is built. Leaving more free
space on the index pages will result in fewer page splits. This reduces the need to reorganize the table in order to regain sequential index pages, which improves prefetching; and prefetching is an important contributor to performance. Conversely, if the keys being inserted are always new high key values (inserts occur only at the end of the index), consider lowering the value of the PCTFREE clause of the CREATE INDEX statement so that only limited space is reserved, and wasted, on each index page.
The LEVEL2 PCTFREE clause directs the system to preserve a specified percentage
of free space on each page in the second level of an index. You specify a
percentage of free space when the index is created to accommodate future
insertions and updates. The second level is the level immediately above the leaf
level. The default is to preserve the minimum of 10 percent and the PCTFREE value on all non-leaf pages. The LEVEL2 PCTFREE parameter allows this default to be overridden: if you specify the LEVEL2 PCTFREE integer option on the CREATE INDEX statement, integer percent of free space is left on level 2 intermediate pages, and the minimum of 10 percent and integer is left on level 3 and
higher intermediate pages. By leaving more free space on the second level, the
number of page splits that occur at the second level of the index is reduced.
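For example, the following statement (the index name, column, and values are illustrative) leaves 15 percent free space on leaf pages and 25 percent on level 2 pages:
CREATE INDEX DEPT_IX ON EMPLOYEE (WORKDEPT) PCTFREE 15 LEVEL2 PCTFREE 25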
The PAGE SPLIT SYMMETRIC, PAGE SPLIT HIGH, and PAGE SPLIT LOW
clauses allow a choice in the page split behavior when inserting into an index.
The PAGE SPLIT SYMMETRIC clause is the default page split behavior; it splits roughly in the middle of an index page. This default behavior is best when insertions do not consistently favor one end of the key range.
The PAGE SPLIT HIGH behavior is useful when there are ever increasing ranges
in the index. Increasing ranges in the index might occur when:
v There is an index with multiple key parts and there are many values (multiple
index pages worth) where all except the last key part have the same value
v All inserts into the table would consist of a new value which has the same value
as existing keys for all but the last key part
v The last key part of the inserted value is larger than that of the existing keys
For example, if the existing keys in the index range from (1,1) through (m,n), then the next key to be inserted would have the value (x,y) where 1 <= x <= m and y > n. If the insertions follow such a pattern, the PAGE SPLIT HIGH clause can be used so that page splits do not result in many pages that are fifty percent empty.
Similarly, PAGE SPLIT LOW can be used when there are ever-decreasing ranges in
the index, to avoid leaving pages 50 percent empty.
Note: If you want to add a primary or unique key, and you want the underlying
index to use SPLIT HIGH, SPLIT LOW, PCTFREE, LEVEL2 PCTFREE,
MINPCTUSED, CLUSTER, or ALLOW REVERSE SCANS you must first
create an index specifying the desired keys and parameters. Then use an
ALTER TABLE statement to add the primary or unique key. The ALTER
TABLE statement will pick up and reuse the index that you have already
created.
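For example, the following sketch (names are illustrative) creates a unique index with the desired parameters and then adds a primary key that reuses that index:
CREATE UNIQUE INDEX EMP_PK_IX ON EMPLOYEE (EMPNO) PCTFREE 25 ALLOW REVERSE SCANS
ALTER TABLE EMPLOYEE ADD PRIMARY KEY (EMPNO)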
You can collect index statistics as part of the creation of the index. At the time
when you use the CREATE INDEX statement, the key value statistics and the
physical statistics are available for use. By collecting the index statistics as part of
the CREATE INDEX statement, you will not need to run the RUNSTATS utility
immediately following the completion of the CREATE INDEX statement.
For example, the following SQL statement will collect basic index statistics as part
of the creation of an index:
CREATE INDEX IDX1 ON TABL1 (COL1) COLLECT STATISTICS
If you have a replicated summary table, its base table (or tables) must have a
unique index, and the index key columns must be used in the query that defines
the replicated summary table.
Related concepts:
v “Index reorganization” in Performance Guide
v “Online index defragmentation” in Performance Guide
v “Relational index performance tips” in Performance Guide
v “Table and index management for MDC tables” in Performance Guide
v “Table and index management for standard tables” in Performance Guide
Related tasks:
v “Changing table attributes” on page 298
Related reference:
v “CREATE INDEX statement” in SQL Reference, Volume 2
v “dft_degree - Default degree configuration parameter” in Performance Guide
v “intra_parallel - Enable intra-partition parallelism configuration parameter” in
Performance Guide
v “max_querydegree - Maximum query degree of parallelism configuration
parameter” in Performance Guide
Note: The user-defined function definition must be deterministic and must not
allow external actions in order to be exploitable by the optimizer.
An optional data filter function can also be specified. The optimizer uses the filter
against the fetched tuple before the user-defined function is evaluated.
A user-defined extended index type can be created, through an index extension, only on a column of a structured type or a distinct type. The user-defined extended index type must not:
v Be defined with clustering indexes
v Have INCLUDE columns.
Related concepts:
v “Defining an index extension - example” on page 268
Index maintenance
Index maintenance is the process of transforming the index column content (or
source key) to a target index key. The transformation process is defined using a
table function that has previously been defined in the database.
You define two of the components that make up the operations of an index
through the CREATE INDEX EXTENSION statement.
The FROM SOURCE KEY clause specifies a structured data type or distinct type
for the source key column supported by this index extension. A single parameter
name and data type are given and associated with the source key column.
The GENERATE KEY USING clause specifies the user-defined table function used
to generate the index key. The output from this function must be specified in the
TARGET KEY clause specification. The output from this function can also be used
as input for the index filtering function specified on the FILTER USING clause.
Related concepts:
v “User-defined extended index types” on page 265
Related reference:
v “CREATE INDEX EXTENSION statement” in SQL Reference, Volume 2
The WITH TARGET KEY clause of the CREATE INDEX EXTENSION statement
specifies the target key parameters that are the output of the user-defined table
function specified on the GENERATE KEY USING clause. A single parameter name
and data type are given and associated with the target key column. This parameter
corresponds to the columns of the RETURNS table of the user-defined table
function of the GENERATE KEY USING clause.
The SEARCH METHODS clause introduces one or more search methods defined
for the relational index. Each search method consists of a method name, search
arguments, a range producing function, and an optional index filter function. Each
search method defines how index search ranges for the underlying user-defined
index are produced by a user-defined table function. Further, each search method
defines how the index entries in a particular search range can be further qualified
by a user-defined scalar function to return a single value.
v The WHEN clause associates a label with a search method. The label is an SQL
identifier that relates to the method name specified in the relational index
exploitation rule (found in the PREDICATES clause of a user-defined function).
One or more parameter names and data types are given for use as arguments in the range-producing function, with or without the index filtering function.
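Putting these clauses together, a skeleton index extension might look like the following sketch (the grid_extension name echoes the example referenced below; all parameter, type, and function names are illustrative and assume that the corresponding table and scalar functions already exist):
CREATE INDEX EXTENSION grid_extension (levels VARCHAR(20) FOR BIT DATA)
FROM SOURCE KEY (shapeCol shape)
GENERATE KEY USING gridEntry(shapeCol..mbr..xmin, shapeCol..mbr..ymin, levels)
WITH TARGET KEY (glevel INT, gx INT, gy INT)
SEARCH METHODS
WHEN searchFirstByLevel (searchArea shape)
RANGE THROUGH gridRange(searchArea, levels)
FILTER USING checkDuplicate(glevel, gx, gy, searchArea)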
Related concepts:
v “User-defined extended index types” on page 265
v “Index exploitation” on page 267
v “Index maintenance” on page 266
Related reference:
v “CREATE INDEX EXTENSION statement” in SQL Reference, Volume 2
Index exploitation
Index exploitation occurs in the evaluation of the search method.
The PREDICATES clause identifies those predicates using this function that can
possibly exploit the index extensions (and that can possibly use the optional
SELECTIVITY clause for the predicate’s search condition). If the PREDICATES
clause is specified, the function must be defined as DETERMINISTIC with NO
EXTERNAL ACTION.
v The WHEN clause introduces a specific use of the function being defined in a
predicate with a comparison operator (=, >, <, and others) and a constant or
expression (using the EXPRESSION AS clause). When a predicate uses this
function with the same comparison operator and the given constant or
expression, filtering and index exploitation might be used. The use of a constant
is provided mainly to cover Boolean expressions where the result type is either a
1 or a 0. For all other cases, the EXPRESSION AS clause is the better choice.
v The FILTER USING clause identifies a filter function that can be used to perform
additional filtering of the result table. It is an alternative and faster version of
the defined function (used in the predicate) that reduces the number of rows on
which the user-defined predicate must be executed to determine if rows qualify.
Should the results produced by the index be close to the results expected by the
user-defined predicate, then the application of this filter function might be
redundant.
v You can optionally define a set of rules for each search method of an index
extension to exploit the index. You can also define a search method in the index
extension to describe the search targets, the search arguments, and how these
can be used to perform the index search.
– The SEARCH BY INDEX EXTENSION clause identifies the index extension.
Related concepts:
v “Defining an index extension - example” on page 268
v “User-defined extended index types” on page 265
v “Index maintenance” on page 266
v “Relational index searching” on page 266
Related reference:
v “CREATE FUNCTION (External Scalar) statement” in SQL Reference, Volume 2
Note: The FILTER USING clause could identify a case expression instead of an
index filtering function.
5. Define the predicates to exploit the index extension.
CREATE FUNCTION within (x shape, y shape)
RETURNS INTEGER
...
PREDICATES
WHEN = 1
FILTER USING mbrWithin (x..mbr..xmin, ...)
SEARCH BY INDEX EXTENSION grid_extension
WHEN KEY (parm_name) USE method_name(parm_name)
The PREDICATES clause introduces one or more predicates, each of which begins with a WHEN clause. The WHEN clause begins the specification for the
predicate with a comparison operator followed by either a constant or an
EXPRESSION AS clause. The FILTER USING clause identifies a filter function
that can be used to perform additional filtering of the result table. This is a
cheaper version of the defined function (used in the predicate) that reduces the
number of rows on which the user-defined predicate must be executed to
determine the rows that qualify. The SEARCH BY INDEX EXTENSION clause
specifies where the index exploitation takes place. Index exploitation defines
the set of rules using the search method of an index extension that can be used
to exploit the index. The WHEN KEY clause specifies the exploitation rule. The
exploitation rule describes the search targets and search arguments as well as
how they can be used to perform the index search through a search method.
6. Define a filter function.
CREATE FUNCTION mbrWithin (...)
The function defined here is created for use in the predicate of the index
extension.
In order for the query optimizer to successfully exploit indexes created to improve query performance, a SELECTIVITY option is available on function invocation. In cases where you have some idea of the percentage of rows that the predicate might qualify, you can use the SELECTIVITY option to pass that estimate to the optimizer.
In the following example, the within user-defined function computes the center
and radius (based on the first and second parameters, respectively), and builds a
statement string with an appropriate selectivity:
SELECT * FROM customer
WHERE within(loc, circle(100, 100, 10)) = 1 SELECTIVITY .05
In this example, the indicated predicate (SELECTIVITY .05) filters out 95 percent of
the rows in the customer table.
Related concepts:
v “User-defined extended index types” on page 265
v “Index exploitation” on page 267
v “Index maintenance” on page 266
v “Relational index searching” on page 266
Related reference:
v “CREATE FUNCTION (External Scalar) statement” in SQL Reference, Volume 2
v “CREATE INDEX EXTENSION statement” in SQL Reference, Volume 2
Only first-level dependencies are shown in the table in the Show Related notebook.
For example, if you select a table as the target object and the table has a
dependency on a view which also has a dependency on another view, only the
table dependency is shown. To see the dependency on the view, right-click the
view-related object and click Show Related in the pop-up menu.
If you want to compare the relationships between several target objects and their
related objects, you can open several Show Related notebooks from the Control
Center object tree.
To see the SQL query that defines the relationships between objects, click Show
SQL.
From each page of the Show Related notebook, right-click the object for which you
want to view related objects, and click Show Related in the pop-up menu. The
Target object changes to the object you just selected. The Show Related notebook
page changes to show the objects related to your latest selection. Your previous
target object is added to the list.
Note: You can perform other actions in the Show Related notebook by
right-clicking the object in the Show Related notebook and clicking an action
in the pop-up menu.
Related concepts:
v “Control Center overview” on page 376
Validation testing checks that all relationships for the selected object are valid, that
the necessary user privileges are held, and that data transformations can occur
without errors. If invalid statements are found, you can correct the statements.
Prerequisites:
To validate related objects when changing table columns, you must have DBADM
authority.
Procedure:
To validate objects:
1. Open the Alter Table notebook: From the Control Center, expand the object tree
until you find the Tables folder. Click the Tables folder. Any existing tables are
displayed in the pane on the right side of the window. Right-click the table you
want and select Alter from the pop-up menu. The Alter Table notebook opens.
2. On the Columns page, perform one of the following actions to enable the
Related objects button:
v Rename a column
v Drop a column
v Change the data type of a column
v Change the length, scope, or precision values for a column
v Change whether a column is nullable
3. Click Related objects. The Related Objects window opens.
4. Click Test All to test the validity of the SQL statements for all of the objects
listed in the Impacted objects table. The Validity column of the table is
updated to indicate if each object is valid or invalid.
Related tasks:
v “Showing related objects” on page 270
Note: For DB2 Enterprise Server Edition databases, size estimates are based on the
logical size of the data in the table instead of by database partition.
To open the Estimate Size window, from the Control Center, expand the object tree
until you find the Tables folder or Indexes folder. Click the Tables folder or
Indexes folder. Any existing tables or indexes are displayed in the pane on the
right side of the window (contents pane).
v For an existing table or index, right-click the table or index you want in the
contents pane, and click Estimate Size in the pop-up menu.
v When you are creating a table or index, right-click the table or index folder and click Create Table or Create Index in the pop-up menu. The Create Table wizard or Create Index window opens, and you can request a size estimate for the object as you define it.
Since every row added to the table affects the storage used by the indexes as well as by the table itself, the table and all of its related indexes are displayed in the Estimate Size window.
If recent statistics are not available for the table or index, click Run statistics before
updating the New total number of rows and New average row length fields and
running a size estimate. The calculation of the size estimate can then be based on
more accurate information. In the Run Statistics window that opens, either accept
the default or select a different value for New total number of rows.
Related concepts:
v “Space requirements for temporary tables” in Administration Guide: Planning
v “Space requirements for database objects” in Administration Guide: Planning
v “Space requirements for indexes” in Administration Guide: Planning
v “Space requirements for system catalog tables” in Administration Guide: Planning
v “Space requirements for user table data” in Administration Guide: Planning
Altering an instance
Some time after a database design has been implemented, a change to the design may be required. When this happens, you should reconsider the major design decisions that shaped the previous design.
Before you make changes affecting the entire database, you should review all the
logical and physical design decisions. For example, when altering a table space,
you should review your design decision regarding the use of SMS or DMS storage
types.
As part of the management of licenses for your DB2 Universal Database™ (DB2
UDB) products, you may find that you have a need to increase the number of
licenses. You can use the License Center within the Control Center to check usage
of the installed products and increase the number of licenses based on that usage.
In most cases, existing instances automatically inherit or lose access to the function
of the product being installed or removed. However, if certain executables or
components are installed or removed, existing instances do not automatically
inherit the new system configuration parameters or gain access to all the additional
function. The instance must be updated.
You should ensure you understand the instances and database partition servers
you have in an instance before attempting to change or delete an instance.
Related concepts:
v “Instance creation” on page 34
Related tasks:
v “Removing instances” on page 278
v “Updating instance configuration on UNIX” on page 276
Procedure:
To update an instance so that it gains access to newly installed function, or loses access to removed function, run the db2iupdt command against that instance.
Examples:
v If you installed DB2 Workgroup Server Edition or DB2 Enterprise Server Edition
after the instance was created, enter the following command to update that
instance:
db2iupdt -u db2fenc1 db2inst1
v If you installed the DB2 Connect Enterprise Server Edition after creating the
instance, you can use the instance name as the Fenced ID also:
db2iupdt -u db2inst1 db2inst1
v To update client instances, you can use the following command:
db2iupdt db2inst1
Related tasks:
v “Removing instances” on page 278
Related reference:
v “db2ilist - List instances command” in Command Reference
v “db2iupdt - Update instances command” in Command Reference
Related tasks:
v “Listing instances” on page 41
v “Removing instances” on page 278
v “Updating instance configuration on UNIX” on page 276
Removing instances
Procedure:
1. Expand the object tree until you see the instance you want to remove.
2. Right-click the instance name, and select Remove from the pop-up menu.
3. Check the Confirmation box, and click OK.
To remove an instance using the command line, follow these steps:
1. Stop all applications that are currently using the instance.
2. Stop the Command Line Processor by running db2 terminate commands in
each DB2 command window.
3. Stop the instance by running the db2stop command.
4. Back up the instance directory indicated by the DB2INSTPROF registry
variable.
On UNIX operating systems, consider backing up the files in the
INSTHOME/sqllib directory (where INSTHOME is the home directory of the
instance owner). For example, you might want to save the database manager
configuration file, db2systm, the db2nodes.cfg file, user-defined functions
(UDFs), or fenced stored procedure applications.
5. (On UNIX operating systems only) Log off as the instance owner.
6. (On UNIX operating systems only) Log in as a user with root authority.
7. Issue the db2idrop command:
db2idrop InstName
The db2idrop command removes the instance entry from the list of instances and
removes the sqllib subdirectory under the instance owner’s home directory.
Note: On UNIX operating systems, when you attempt to drop an instance using the db2idrop command, a message might be generated saying that the sqllib subdirectory cannot be removed, and several files with the .nfs extension might be generated in the adm subdirectory. This happens when the adm subdirectory is NFS-mounted and the files are controlled on the server. You must delete the *.nfs files from the file server from which the directory is mounted. Then you can remove the sqllib subdirectory.
Related reference:
v “db2idrop - Remove instance command” in Command Reference
v “db2ilist - List instances command” in Command Reference
v “db2stop - Stop DB2 command” in Command Reference
v “STOP DATABASE MANAGER command” in Command Reference
v “TERMINATE command” in Command Reference
Note: If you modify any parameters, the values are not updated until:
v For database parameters, the first new connection to the database after all
applications are disconnected
v For database manager parameters, the next time that you stop and start
the instance
In most cases, the values recommended by the Configuration Advisor will provide
better performance than the default values because they are based on information
about your workload and your own particular server. However, the values are
designed to improve the performance of, though not necessarily optimize, your
database system. Think of the values as a starting point on which you can make
further adjustments to obtain optimized performance.
See Automatic features enabled by default for other DB2 features that are enabled
by default.
If you plan to change any database partition groups (adding or deleting database
partitions, or moving existing database partitions), the node configuration file must
be updated.
If you plan to change the database, you should review the values for the
configuration parameters. You can adjust some values periodically as part of the
ongoing changes made to the database that are based on how it is used.
Procedure:
1. Expand the object tree until you see the Databases folder.
2. Right-click the instance or database that you want to change, and click Configuration
Advisor.
3. Click each page, and change information as required.
4. Click the Results page to review any suggested changes to the configuration
parameters.
5. When you are ready to apply or save the updates, click Finish.
To use the Configuration Advisor from the command line, use the
AUTOCONFIGURE command.
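For example, the following command (the parameter values are illustrative) requests recommendations for a mixed workload using about half of the server's memory, and applies them to both the database and the database manager configuration:
AUTOCONFIGURE USING mem_percent 50 workload_type mixed APPLY DB AND DBM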
To view or print the current database manager configuration parameters, use the
GET DATABASE MANAGER CONFIGURATION command.
Related concepts:
v “Benchmark testing” in Performance Guide
v “Automatic features enabled by default” in Administration Guide: Planning
Related tasks:
v “Configuring DB2 with configuration parameters” in Performance Guide
v “Changing the database configuration across multiple database partitions” on
page 281
When you have a database that is distributed across more than one database
partition, the database configuration file should be the same on all database
partitions. Consistency is required since the SQL compiler compiles distributed
SQL statements based on information in the node configuration file and creates an
access plan to satisfy the needs of the SQL statement. Maintaining different configuration files on different database partitions could lead to different access plans, depending on which database partition the statement is prepared on. Use db2_all to maintain the same configuration files across all database partitions.
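For example, assuming a database named SAMPLE, a command such as the following (a sketch; the parameter and value are illustrative) applies the same configuration update on every database partition:
db2_all "db2 UPDATE DB CFG FOR sample USING LOGPRIMARY 10"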
Related concepts:
v “Issuing commands in a partitioned database environment” on page 130
Related tasks:
v “Changing node and database configuration files” on page 279
Altering a database
There are nearly as many tasks involved in altering a database as there are in creating one. These tasks update or drop aspects of the database previously created.
1. Open the Alter Database Partition Group wizard. To open the Alter Database Partition
Group wizard: From the Control Center, expand the object tree until you find the
Database Partition Groups folder. Click the Database Partition Groups folder. Any
existing database partition groups are displayed in the contents pane on the right.
Right-click the database partition group you want to change and select Alter from the
pop-up menu. The Alter Database Partition Group wizard opens.
You can also open the Alter Database Partition Group window from the Storage
Management view. To open the Storage Management view: From the Control Center
window, expand the object tree until you find the database, database partition group, or
table space you want to examine in the Storage Management view. Right-click the
desired database and select Manage Storage from the pop-up menu. The Storage
Management view opens.
Note: The first time you launch the Storage Management view from an object you will
need to specify your settings in the Storage Management Setup launchpad.
2. Complete each of the applicable wizard pages. The Finish push button is enabled when
you complete enough information for the wizard to alter the database partition group.
Once you add or drop database partitions, you must redistribute the current data
across the new set of database partitions in the database partition group.
Related concepts:
v “Data redistribution” in Performance Guide
v “Management of database server capacity” on page 29
Related tasks:
v “Redistributing data across database partitions” in Performance Guide
Related reference:
v “REDISTRIBUTE DATABASE PARTITION GROUP command” in Command
Reference
Using the Database Partitions view you can restart a database partition, take a
database partition out of the rollforward pending state, backup a database
partition, restore a database partition, or configure a database partition using the
Configuration Advisor.
Authorities:
To work with database partitions you will need authority to attach to an instance.
Anyone with SYSADM or DBADM authority can grant you the authority to
access a specific instance.
Related concepts:
v “Adding database partitions in a partitioned database environment” on page 123
Related tasks:
v “Adding a database partition server to an instance (Windows)” on page 144
v “Adding a database partition to a running database system” on page 119
v “Changing the database configuration across multiple database partitions” on
page 281
Procedure:
1. Open the Alter Buffer Pool window: From the Control Center, expand the object tree
until you find the Buffer Pools folder. Click on the Buffer Pools folder. Any existing
buffer pools are displayed in the pane on the right side of the window. Right-click the
buffer pool you want and select Alter from the pop-up menu. The Alter Buffer Pool
window opens.
2. To change the size of a buffer pool, type a new value.
3. Optional: Specify whether to use the default buffer pool size.
4. Optional: Specify whether to alter the buffer pool immediately (this is the default
setting), or whether to alter it the next time that the database is restarted.
Note: Two key parameters are IMMEDIATE and DEFERRED. With IMMEDIATE, the
buffer pool size is changed without delay. If there is insufficient reserved
space in the database shared memory to allocate new space, the
statement is run as deferred.
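For example, the following statement (the buffer pool name and size are illustrative) resizes a buffer pool immediately:
ALTER BUFFERPOOL IBMDEFAULTBP IMMEDIATE SIZE 50000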
Related concepts:
v “Self tuning memory” in Performance Guide
Related tasks:
v “Creating a buffer pool” on page 166
Related reference:
v “ALTER BUFFERPOOL statement” in SQL Reference, Volume 2
When you create a database, you create at least three table spaces: one catalog
table space (SYSCATSPACE); one user table space (with a default name of
USERSPACE1); and one system temporary table space (with a default name of
TEMPSPACE1). You must keep at least one of each of these table spaces. You can
add additional user and temporary table spaces if you want.
Note: You cannot drop the catalog table space SYSCATSPACE, nor create another
one; and there must always be at least one system temporary table space
with a page size of 4 KB. You can create other system temporary table
spaces. You also cannot change the page size or the extent size of a table
space after it has been created.
Procedure:
1. Open the Alter Table Space notebook: From the Control Center, expand the object tree
until you find the Table Spaces folder. Click on the Table Spaces folder. Any existing
table spaces are displayed in the pane on the right side of the window. Right-click on
the table space you want in the contents pane and select Alter from the pop-up menu.
The Alter Table Space notebook opens.
2. Optional: Change the comment.
3. Optional: Specify the name of the buffer pool in which this table space should reside.
The page size of the buffer pool you select must be equal to the page size of the table
space.
To alter a table space using the command line, use the ALTER TABLESPACE
statement.
Related tasks:
v “Adding a container to a DMS table space” on page 285
v “Adding a container to an SMS table space on a database partition” on page 289
v “Dropping a system temporary table space” on page 292
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
When new containers are added to a table space, or existing containers are
extended, a rebalance of the table space might occur. The process of rebalancing
involves moving table space extents from one location to another. During this
process, the attempt is made to keep data striped within the table space.
Rebalancing does not necessarily occur across all containers; it depends on many factors, such as the existing container configuration, the size of the new containers, and how full the table space is.
When containers are added to an existing table space, they might be added such
that they do not start in stripe 0. Where they start in the map is determined by the
database manager and is based on the size of the containers being added. If the
container being added is not large enough, it is positioned such that it ends in the
last stripe of the map. If it is large enough, it is positioned to start in stripe 0.
No rebalancing occurs if you are adding new containers and creating a new stripe
set. A new stripe set is created using the BEGIN NEW STRIPE SET clause on the
ALTER TABLESPACE statement. You can also add containers to existing stripe sets
using the ADD TO STRIPE SET clause on the ALTER TABLESPACE statement.
Access to the table space is not restricted during the rebalancing. If you need to
add more than one container, you should add them at the same time.
Procedure:
1. Expand the object tree until you see the Table Spaces folder.
2. Right-click the table space where you want to add the container, and select Alter from
the pop-up menu.
3. Click Add, complete the information, and click Ok.
To add a container to a DMS table space using the command line, enter:
ALTER TABLESPACE <name>
ADD (DEVICE '<path>' <size>, FILE '<filename>' <size>)
The following example illustrates how to add two new device containers (each
with 10 000 pages) to a table space on a Linux and UNIX system:
ALTER TABLESPACE RESOURCE
ADD (DEVICE '/dev/rhd9' 10000,
DEVICE '/dev/rhd10' 10000)
Note that the ALTER TABLESPACE statement allows you to change other
properties of the table space that can affect performance.
Related concepts:
v “How containers are added and extended in DMS table spaces” in Administration
Guide: Planning
v “Table space impact on query optimization” in Performance Guide
Related tasks:
v “Adding a container to an SMS table space on a database partition” on page 289
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
Restrictions:
Each raw device can be used as only one container, and the size of a raw device is fixed after its creation. When you are considering using the resize or extend options to increase the size of a raw device container, check the raw device size first to ensure that you do not attempt to make the container larger than the raw device.
Procedure:
To increase the size of one or more containers in a DMS table space using the
Control Center:
1. Expand the object tree until you see the Table Spaces folder.
2. Right-click the table space where you want to add the container, and select Alter from
the pop-up menu.
3. Click Resize, complete the information, and click OK.
You can also drop existing containers from a DMS table space, reduce the size of
existing containers in a DMS table space, and add new containers to a DMS table
space without requiring a rebalance of the data across all of the containers.
The dropping of existing table space containers as well as the reduction in size of
existing containers is only allowed if the number of extents being dropped or
reduced in size is less than or equal to the number of free extents above the
high-water mark in the table space. The high-water mark is the page number of
the highest allocated page in the table space. This mark is not the same as the
number of used pages in the table space because some of the extents below the
high-water mark might have been made available for reuse.
To reduce the size of existing containers, you can use either the RESIZE option or
the REDUCE option. When using the RESIZE option, all of the containers listed as
part of the statement must either be increased in size, or decreased in size. You
cannot increase some containers and decrease other containers in the same
statement. Consider the RESIZE option if you know what the new size of the container should be. Consider the REDUCE option if you know by how much you want to shrink the container and do not know (or care about) its current size.
To decrease the size of one or more containers in a DMS table space using the
command line, enter:
ALTER TABLESPACE <name>
REDUCE (FILE '<filename>' <size>)
The following example illustrates how to reduce a file container (which already
exists with 1 000 pages) in a table space on a Windows-based system:
ALTER TABLESPACE PAYROLL
REDUCE (FILE 'd:\hldr\finance' 200)
Following this action, the file is decreased from 1 000 pages in size to 800 pages.
To increase the size of one or more containers in a DMS table space using the
command line, enter:
ALTER TABLESPACE <name>
RESIZE (DEVICE '<path>' <size>)
The following example illustrates how to increase two device containers (each
already existing with 1 000 pages) in a table space on a Linux and UNIX system:
ALTER TABLESPACE HISTORY
RESIZE (DEVICE '/dev/rhd7' 2000,
DEVICE '/dev/rhd8' 2000)
Following this action, the two devices have increased from 1 000 pages in size to
2 000 pages. The contents of the table space might be rebalanced across the
containers. Access to the table space is not restricted during the rebalancing.
To extend one or more containers in a DMS table space using the command line,
enter:
ALTER TABLESPACE <name>
EXTEND (FILE '<filename>' <size>)
The following example illustrates how to increase file containers (each already
existing with 1 000 pages) in a table space on a Windows-based system:
ALTER TABLESPACE PERSNEL
EXTEND (FILE 'e:\wrkhist1' 200,
FILE 'f:\wrkhist2' 200)
The creation and extension of DMS containers (both file and raw device containers), whether during or after table space creation, is performed in parallel through prefetchers. To achieve an increase in parallelism of these create
or resize container operations, you can increase the number of prefetchers running
in the system. The only process which is not done in parallel is the logging of
these actions and, in the case of creating containers, the tagging of the containers.
Note that the ALTER TABLESPACE statement allows you to change other
properties of the table space that can affect performance.
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
The DB2 database manager is set up so that the automatic prefetch size is the
default for any table spaces created using Version 8.2 (and later). The DB2 database
manager uses the following formula to calculate the prefetch size for the table
space:
prefetch size = (number of containers) X (number of physical spindles per
container) X extent size
There are three ways not to have the prefetch size of the table space set at
AUTOMATIC:
v Create the table space with a specific prefetch size. Manually choosing a value
for the prefetch size indicates that you will remember to adjust, if necessary, the
prefetch size whenever there is an adjustment in the number of containers
associated with the table space.
v Do not specify a prefetch size when creating the table space, and have the dft_prefetch_sz database configuration parameter set to a non-AUTOMATIC value. The DB2 database manager checks this parameter when there is no explicit mention of the prefetch size when creating the table space. If a value other than AUTOMATIC is found, then that value is used as the default.
v Alter the prefetch size of an existing table space to an explicit value by using the PREFETCHSIZE clause of the ALTER TABLESPACE statement.
Use of DB2_PARALLEL_IO
Prefetch requests are broken down into several smaller prefetch requests based on
the parallelism of a table space, and before the requests are submitted to the
prefetch queues. The DB2_PARALLEL_IO registry variable is used to define the
number of physical spindles per container as well as influencing the parallel I/O
on the table space. With parallel I/O off, the parallelism of a table space is equal to
the number of containers. With parallel I/O on, the parallelism of a table space is equal to the number of containers multiplied by the value given in the DB2_PARALLEL_IO registry variable. (Another way of saying this is that the parallelism of the table space is equal to the prefetch size divided by the extent size of the table space.)
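For example, the following command (a sketch; the instance must be restarted for the change to take effect) turns on parallel I/O for all table spaces:
db2set DB2_PARALLEL_IO=*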
Related tasks:
v “Adding a container to a DMS table space” on page 285
v “Altering a table space” on page 284
v “Modifying containers in a DMS table space” on page 286
You can only add a container to an SMS table space on a database partition that
currently has no containers.
Procedure:
The database partition specified by number, and every partition in the range of
database partitions, must exist in the database partition group on which the table
space is defined. A database partition number can appear, either explicitly or within a range, in exactly one db-partitions-clause for the statement.
The following example shows how to add a new container to database partition
number 3 of the database partition group used by table space “plans” on a UNIX
based operating system:
ALTER TABLESPACE plans
ADD ('/dev/rhdisk0')
ON DBPARTITIONNUM (3)
Related tasks:
v “Adding a container to a DMS table space” on page 285
v “Modifying containers in a DMS table space” on page 286
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
When restoring a table space that has been renamed since it was backed up, you
must use the new table space name in the RESTORE DATABASE command. If you
use the previous table space name, it will not be found. Similarly, if you are rolling
forward the table space with the ROLLFORWARD DATABASE command, ensure
that you use the new name. If the previous table space name is used, it will not be
found.
You can give an existing table space a new name without being concerned with the individual objects within the table space. When renaming a table space, all the catalog records referencing that table space are changed.
Procedure:
1. Open the Rename Table Space window: From the Control Center, expand the object tree
until you find the Table Spaces folder. Click on the Table Spaces folder. Any existing
table spaces are displayed in the pane on the right side of the window. Right-click on
the table space you want and select Rename from the pop-up menu. The Rename Table
Space window opens.
2. Type a new name for the table space.
Related reference:
v “RENAME TABLESPACE statement” in SQL Reference, Volume 2
The SWITCH ONLINE clause of the ALTER TABLESPACE statement can be used
to remove the OFFLINE state from a table space if the containers associated with
that table space have become accessible. The table space has the OFFLINE state
removed while the rest of the database is still up and being used.
An alternative to the use of this clause is to disconnect all applications from the
database and then to have the applications connect to the database again. This
removes the OFFLINE state from the table space.
To remove the OFFLINE state from a table space using the command line, enter:
db2 ALTER TABLESPACE <name>
SWITCH ONLINE
Related reference:
v “ALTER TABLESPACE statement” in SQL Reference, Volume 2
You can reuse the containers in an empty table space by dropping the table space,
but you must COMMIT the DROP TABLESPACE command before attempting to
reuse the containers.
You can drop a user table space that contains all of the table data including index
and LOB data within that single user table space. You can also drop a user table
space that might have tables spanned across several table spaces. That is, you
might have table data in one table space, indexes in another, and any LOBs in a
third table space. You must drop all three table spaces at the same time in a single
statement. All of the table spaces that contain tables that are spanned must be part
of this single statement or the drop request will fail.
Procedure:
1. Expand the object tree until you see the Table Spaces folder.
2. Right-click on the table space you want to drop, and select Drop from the pop-up
menu.
3. Check the Confirmation box, and click Ok.
Related tasks:
v “Dropping a system temporary table space” on page 292
v “Dropping a user temporary table space” on page 293
Related reference:
v “COMMIT statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
You cannot drop a system temporary table space that has a page size of 4 KB
without first creating another system temporary table space. The new system
temporary table space must have a page size of 4 KB because the database must
always have at least one system temporary table space that has a page size of 4
KB. For example, if you have a single system temporary table space with a page
size of 4 KB, and you want to add a container to it, and it is an SMS table space,
you must first add a new 4 KB page size system temporary table space with the
proper number of containers, and then drop the old system temporary table space.
(If you were using DMS, you could add a container without having to drop and
recreate the table space.)
Procedure:
1. Expand the object tree until you see the Table Spaces folder.
2. If there is only one other system temporary table space, right-click the Table Spaces
folder, and select Create —> Table Space Using Wizard from the pop-up menu.
Otherwise, skip to step four.
3. Follow the steps in the wizard to create the new system temporary table space if
needed.
4. Click again on the Table Spaces folder to display a list of table spaces in the right side
of the window (the Contents pane).
5. Right-click on the system temporary table space you want to drop, and click Drop from
the pop-up menu.
6. Check the Confirmation box, and click OK.
Then, to drop a system table space using the command line, enter:
DROP TABLESPACE <name>
The following SQL statement creates a new system temporary table space called TEMPSPACE2 (the container path shown is illustrative):
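CREATE SYSTEM TEMPORARY TABLESPACE TEMPSPACE2
MANAGED BY SYSTEM USING ('d:\systemp2')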
Once TEMPSPACE2 is created, you can then drop the original system temporary
table space TEMPSPACE1 with the command:
DROP TABLESPACE TEMPSPACE1
You can reuse the containers in an empty table space by dropping the table space,
but you must COMMIT the DROP TABLESPACE command before attempting to
reuse the containers.
Related tasks:
v “Dropping a user table space” on page 291
v “Dropping a user temporary table space” on page 293
Related reference:
v “CREATE TABLESPACE statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
You can only drop a user temporary table space if there are no declared temporary
tables currently defined in that table space. When you drop the table space, no
attempt is made to drop all of the declared temporary tables in the table space.
Note: A declared temporary table is implicitly dropped when the application that
declared it disconnects from the database.
Related tasks:
v “Dropping a system temporary table space” on page 292
v “Dropping a user table space” on page 291
Related reference:
v “DROP statement” in SQL Reference, Volume 2
Dropping a database
Although some of the objects in a database can be altered, the database itself cannot be altered: it must be dropped and re-created. Dropping a database can have far-reaching effects, because this action deletes all its objects, containers, and associated files. The dropped database is removed (uncataloged) from the database directories.
Procedure:
1. Expand the object tree until you see the Databases folder.
2. Right-click the database you want to drop, and select Drop from the pop-up menu.
3. Check the Confirmation box, and click OK.
Note: If you intend to continue experimenting with the SAMPLE database, you
should not drop it. If you have dropped the SAMPLE database, and find
that you need it again, you can re-create it.
To drop a database from a client application, call the sqledrpd API. To drop a
database at a specified database partition server, call the sqledpan API.
Related reference:
v “DROP DATABASE command” in Command Reference
v “GET SNAPSHOT command” in Command Reference
v “LIST ACTIVE DATABASES command” in Command Reference
Dropping a schema
Before dropping a schema, all objects in that schema must themselves be dropped or be moved to another schema. The schema name must exist in the catalog when the DROP statement is attempted; otherwise an error is returned.
Procedure:
1. Expand the object tree until you see the Schemas folder.
2. Right-click on the schema you want to drop, and select Drop from the pop-up menu.
3. Check the Confirmation box, and click Ok.
The RESTRICT keyword, which must be specified, enforces the rule that the schema can be deleted from the database only if no objects are defined in it.
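For example, the following statement drops the (empty) schema JOESCHMA:
DROP SCHEMA JOESCHMA RESTRICT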
Related reference:
v “DROP statement” in SQL Reference, Volume 2
v “ADMIN_DROP_SCHEMA procedure – Drop a specific schema and its objects”
in Administrative SQL Routines and Views
Note that you cannot alter triggers for tables; you must drop any trigger that is no
longer appropriate (see “Dropping a trigger” on page 329), and add its
replacement (see “Creating triggers” on page 240).
Modifying tables
This section discusses various aspects of modifying tables, including space value
compression. It covers how to change table attributes and properties, columns and
rows, and keys and constraints.
Similarly, an existing table can be changed from a record format that allows space
compression to a record format that does not. The same condition regarding the
sum of the byte counts of the columns applies; and the error message SQL0670N is
returned as necessary.
To determine whether you should consider space compression for your table, note that a table in which the majority of values are equal to the system default values, or NULL, would benefit from the new row format. For example, if there is an INTEGER column in which 90% of the values are 0 (the default value for the INTEGER data type) or NULL, compressing the table and that column would save a significant amount of disk space.
When altering a table, you can use the VALUE COMPRESSION clause to specify that the table uses the space-saving row format at the table level and possibly at the column level. Use ACTIVATE VALUE COMPRESSION to specify that the table will use the space-saving techniques, or use DEACTIVATE VALUE COMPRESSION to specify that the table will no longer use space-saving techniques for data in the table.
If you use DEACTIVATE VALUE COMPRESSION, this will implicitly disable any
COMPRESS SYSTEM DEFAULT options associated with columns in that table.
After the table is modified to a new row format, all rows that are subsequently inserted, loaded, or updated will have the new row format. To have every existing row converted to the new row format, reorganize the table with the REORG command.
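For example, a minimal sketch (the table and column names are illustrative):
ALTER TABLE EMPLOYEE ACTIVATE VALUE COMPRESSION
ALTER TABLE EMPLOYEE ALTER COLUMN BONUS COMPRESS SYSTEM DEFAULT
REORG TABLE EMPLOYEE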
Related concepts:
v “Data row compression” on page 188
v “Space compression for tables” on page 187
v “Space value compression for new tables” on page 187
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
Copying tables
A basic COPY performs a simple copy of one table to another. This action is a
one-time copy only. A new table is defined based on the definition of the selected
table, and the contents are copied to the new table. Options that are not copied
include:
v Check constraints
v Column default values
v Column comments
v Foreign keys
v Logged and compact option on BLOB columns
v Distinct types
A new table is defined based on the definition of the selected table, and the
contents are copied to the new table. You can copy a table into the same database
or a different database.
Procedure:
1. Open the Copy Table window: From the Control Center, expand the object tree until
you find the Tables folder. Click on the Tables folder. Any existing tables are displayed
in the pane on the right side of the window. Right-click the table you want to copy and
select Copy from the pop-up menu. The Copy Table window opens.
2. Specify the name of an existing host or server for the target table and the instance that
contains the database that will contain the target table.
3. Specify the database that will contain the target table, the schema for the target table,
and a unique name for the target table. If a table with the same name already exists in
the schema, the copy will fail.
4. Optional: Select a table space for the target table. Select a REGULAR DMS table space
other than the default table space if you want to specify an index table space or long
data table space.
5. Optional: Select a table space in which to create any indexes on the target table.
6. Optional: Select a table space in which to store the values of any long columns in the
target table.
When you click OK, the table that you selected is copied to the target table.
To copy a table using the command line, use the EXPORT and IMPORT
commands.
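For example, the following commands (the file and target table names are illustrative) copy the EMPLOYEE table into a new table named EMPCOPY by way of a PC/IXF file:
EXPORT TO emp.ixf OF IXF SELECT * FROM EMPLOYEE
IMPORT FROM emp.ixf OF IXF CREATE INTO EMPCOPY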
Related concepts:
v “About databases” in Administration Guide: Planning
v “About systems” in Administration Guide: Planning
v “Instance creation” on page 34
v “Replicated materialized query tables” in Administration Guide: Planning
Related reference:
v “EXPORT command” in Command Reference
v “IMPORT Command” in Command Reference
Altering a table
Use the Alter Table notebook or the ALTER TABLE statement to alter the row
format of table data.
Prerequisites:
To alter a table, you must have one of the following authorities or privileges:
v ALTER privilege
v CONTROL privilege
v SYSADM authority
v DBADM authority
v ALTERIN privilege on the table schema
Procedure:
1. Open the Alter Table notebook: From the Control Center, expand the object tree until
you find the Tables folder. Click the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table that you want and select
Alter from the pop-up menu. The Alter Table notebook opens.
2. Specify the required information to do the following:
v Change table properties
v Add new columns or change existing columns
v Define new primary keys or change existing primary keys
v Add new foreign keys or change existing foreign keys
v Add new check constraints or change existing check constraints
v Add new partitioning keys or change existing partitioning keys
v Manage table partitions
For more information, refer to the online help.
To alter a table using the command line, use the ALTER TABLE statement.
Related concepts:
v “Using a stored procedure to alter a table” on page 324
v “Using the ALTER TABLE statement to alter columns of a table” on page 300
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “ALTOBJ procedure” in Administrative SQL Routines and Views
The amount of free space to be left on each page of a table is specified through
PCTFREE, and is an important consideration for the effective use of clustering
indexes. The amount to specify depends on the nature of the existing data and
expected future data. PCTFREE is respected by LOAD and REORG but is ignored
by insert, update and import activities.
Setting PCTFREE to a larger value will maintain clustering for a longer period, but
will also require more disk space.
You can specify the size (granularity) of locks used when the table is accessed by
using the LOCKSIZE parameter. By default, when the table is created, row level
locks are defined. For partitioned tables, this lock strategy is applied to both the
table lock and the data partition locks for any data partitions accessed. Use of table
level locks might improve the performance of queries by limiting the number of
locks that need to be acquired and released.
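For example, the following statement (the table name is illustrative) switches a table to table-level locking:
ALTER TABLE EMPLOYEE LOCKSIZE TABLE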
For multidimensional clustering (MDC) tables, using the BLOCKINSERT value for LOCKSIZE causes block-level locking to occur during INSERT operations and row-level locking for all other operations.
For example, after an ALTER TABLE ... LOCKSIZE BLOCKINSERT operation, insertions
into MDC tables usually cause block locking and not row locking. The only
row-level locking that occurs is the next-key locking. This locking is required when
the insertion of a record’s key into a RID index must wait for a repeatable-read
(RR) scan to commit or roll back before proceeding. This process maintains RR
semantics. This option should not be used when multiple transactions might insert
data into the same cell concurrently, unless each transaction inserts enough data
per cell and you are not concerned that separate blocks are used for each
transaction. In that case, there will be some partially filled blocks for the cell,
which makes the cell larger than it would otherwise be.
By specifying APPEND ON, you can improve the overall insert performance on the
table. This option allows for faster insertions because the database manager no
longer maintains information about free space in the table.
A table with a clustering index cannot be altered to have append mode turned on.
Similarly, a clustering index cannot be created on a table with append mode.
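For example, a sketch that turns append mode on for a table without a clustering index (the table name is hypothetical):
ALTER TABLE STAFF APPEND ON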
Related concepts:
v “Factors that affect locking” in Performance Guide
v “Preventing lock-related performance issues” in Performance Guide
v “Lock attributes” in Performance Guide
v “Locks and concurrency control” in Performance Guide
v “Lock granularity” in Performance Guide
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Prerequisites:
To alter a table, you must have at least one of the following privileges on the table
to be altered:
v ALTER privilege
v CONTROL privilege
v SYSADM or DBADM authority
v ALTERIN privilege on the schema of the table
Note: To change the definition of an existing column (in a database that is Version
8.2 or greater), you must have DBADM authority.
Procedure:
1. Expand the object tree until you find the Tables folder. Click the Tables folder. Any
existing tables are displayed in the pane on the right side of the window. Right-click
the table you want and select Alter from the pop-up menu. The Alter Table notebook
opens.
2. On the Table tab:
v Type a new comment or edit the existing comment.
v Select a lock size to specify the use of row locks or table locks when accessing the
table. Selecting a lock size does not prevent normal lock escalation.
Attention: Your new lock size selection is saved in the system but will not be
displayed the next time you view this field.
v Select a value to change the percentage of each page to be left as free space during
load or reorganization.
v Indicate whether extra information regarding changes to this table will be written to
the log if this table is replicated.
v Indicate whether to extend the data capture for propagation function to include long
varchar and long vargraphic columns in the log.
Attention: You must first select the Data capture for propagation check box to
enable the Include long variable length columns check box.
v Indicate whether data is to be appended to the end of the table data.
v Indicate to the optimizer that the cardinality of your table can vary significantly at
run time.
v Specify how index build logging should be performed:
– No change specifies that no change will be made to the LOG INDEX BUILD table
attribute.
– NULL specifies that the amount of index build information logged for this table
will depend on the value of the LOGINDEXBUILD database configuration
parameter.
– ON specifies that enough index build data will be logged for this table to
reconstruct indexes during DB2 rollforward or HADR log replay.
– OFF specifies that minimal index build data will be logged.
To alter table properties using the command line, use the ALTER TABLE
statement.
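For example, a sketch that combines several of these properties in one statement (the table name and chosen values are hypothetical):
ALTER TABLE STAFF
   LOCKSIZE TABLE
   PCTFREE 10
   VOLATILE
   LOG INDEX BUILD ON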
Related concepts:
v “Primary keys” in Administration Guide: Planning
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
It is important that you plan table alterations carefully. Perhaps the most
important thing to realize when running an ALTER TABLE statement containing a
REORG-recommended operation is that once the ALTER TABLE statement has
executed, the table is placed in the Reorg Pending state. This means that the table
is inaccessible for almost all operations until you perform a REORG. See the
ALTER TABLE statement in the SQL Reference for the complete list of ALTER
TABLE operations, some of which are also called REORG-recommended
operations. While a table is in the Reorg Pending state, the statements that can
still be run against it include:
REORG TABLE
DROP TABLE
ALTER TABLE
RENAME TABLE
TRUNCATE TABLE
To allow data recovery in case of a REORG failure, table data can still be read by
using scan-based, read-only access (that is, table scans); index-based table access
is not allowed. Because table scans remain possible, you can still issue SELECT
statements against the table.
ALTER TABLE statements that require row data validation are not allowed
following a REORG-recommended alteration, although you can execute most of the
other ALTER TABLE statements. The statements that you cannot use are those that
require scanning of the column data to verify that the alteration is valid.
Specifically, you cannot execute the following statements against the table:
ADD UNIQUE CONSTRAINT
ADD CHECK CONSTRAINT
ADD REFERENTIAL CONSTRAINT
ALTER COLUMN SET NOT NULL
You can combine several REORG-recommended operations into a single ALTER
TABLE statement. For example, instead of dropping three columns with three
separate statements:
ALTER TABLE foo DROP COLUMN C1
ALTER TABLE foo DROP COLUMN C2
ALTER TABLE foo DROP COLUMN C3
you could replace the three ALTER TABLE statements with a single one:
ALTER TABLE foo DROP COLUMN C1 DROP COLUMN C2 DROP COLUMN C3
Since you can alter only one attribute per column in a single SQL statement (for
example, type or nullability), it is possible that changing a column to a new format
could require the use of more than one ALTER TABLE statement containing
REORG-recommended operations. In such a case, it is important that the order of
alterations not allow one alteration to preclude another due to the Reorg Pending
state. This means that you should perform operations requiring table data access
using the first ALTER TABLE statement containing REORG-recommended
operations. For example, if column C1 is an integer and is NULLABLE and you
want to change this column to be a NOT NULLABLE BIGINT, the following
sequence will fail:
ALTER TABLE bar ALTER COLUMN C1 SET DATA TYPE BIGINT
ALTER TABLE bar ALTER COLUMN C1 SET NOT NULL
The reason for the failure is that the second ALTER TABLE statement requires a
scan of the column C1 to see whether any rows contain the value NULL. Since the
table is placed in Reorg Pending state after the first statement, the scan for the
second statement cannot be performed.
However, the following sequence will succeed, because the statement that requires
the data scan (SET NOT NULL) runs first, before the table is placed in Reorg
Pending state:
ALTER TABLE bar ALTER COLUMN C1 SET NOT NULL
ALTER TABLE bar ALTER COLUMN C1 SET DATA TYPE BIGINT
You can also perform many table-altering operations that do not constitute
REORG-recommended operations, regardless of the number of
REORG-recommended operations that have already been specified. These include:
ADD COLUMN
ALTER COLUMN DEFAULT VALUE
RENAME TABLE
ALTER COLUMN SET DATA TYPE VARCHAR/VARGRAPHIC/CLOB/BLOB/DBCLOB
Related concepts:
v “Using a stored procedure to alter a table” on page 324
v “Table reorganization” in Performance Guide
Related tasks:
v “Altering a table” on page 297
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
v “REORG INDEXES/TABLE command” in Command Reference
A column definition includes a column name, data type, and any necessary
constraints.
When columns are added to a table, the columns are logically placed to the right
of the right-most existing column definition. When a new column is added to an
existing table, only the table description in the system catalog is modified, so
access time to the table is not affected immediately. Existing records are not
physically altered until they are modified using an UPDATE statement. When
retrieving an existing row from the table, a null or default value is provided for
the new column, depending on how the new column was defined. Columns that
are added after a table is created cannot be defined as NOT NULL: they must be
defined as either NOT NULL WITH DEFAULT or as nullable.
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to add columns to, and select Alter from the pop-up
menu.
3. On the Columns page, complete the information for the column, and click OK.
Columns can be added with an SQL statement. The following statement uses the
ALTER TABLE statement to add three columns to the EMPLOYEE table:
ALTER TABLE EMPLOYEE
ADD MIDINIT CHAR(1) NOT NULL WITH DEFAULT
ADD HIREDATE DATE
ADD WORKDEPT CHAR(3)
Related tasks:
v “Modifying a column definition” on page 305
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
If you are altering a pre-version 8.2 database, you can use the Column Properties
window to change the comment for existing columns in a table or change the
length of an existing VARCHAR column. You can also change the formula that
DB2 uses to determine values for a generated column.
Prerequisites:
To change the definition of an existing column, to edit and test SQL when changing
table columns, or to validate related objects when changing table columns, you
must have DBADM authority.
Procedure:
1. Open the Alter Table notebook: From the Control Center, expand the object tree until
you find the Tables folder. Click the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table you want to change and
select Alter from the pop-up menu. The Alter Table notebook opens.
2. On the Columns page, select a column and click Change. The Change Columns or
Change Properties window opens.
3. Make the necessary changes. For more information, refer to the online help for this
window.
To change column properties using the command line, use the ALTER TABLE
statement. For example:
ALTER TABLE EMPLOYEE
ALTER COLUMN WORKDEPT
SET DEFAULT '123'
Related concepts:
v “Using the ALTER TABLE statement to alter columns of a table” on page 300
Related tasks:
v “Adding columns to an existing table” on page 304
v “Defining a generated column on a new table” on page 219
v “Defining a generated column on an existing table” on page 321
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Note: Generated columns cannot have their default value altered by this statement.
When changing these table attributes using SQL, it is no longer necessary to drop
the table and then recreate it, a time-consuming process that can be complex when
object dependencies exist.
Procedure:
To modify the length of a column of an existing table using the Control Center:
1. Expand the object tree until you see the Tables folder.
2. In the list of tables in the right pane, right-click on the table for which you want to
modify a column, and select Alter from the pop-up menu.
3. On the Columns page, select the column, and click Change.
4. Type the new byte count for the column in Length, and click OK.
To modify the length and type of a column of an existing table using the command
line, enter:
ALTER TABLE <table_name>
ALTER COLUMN <column_name>
<modification_type>
You cannot alter the column of a typed table. However, you can add a scope to an
existing reference type column that does not already have a scope defined. For
example:
ALTER TABLE t1
ALTER COLUMN colnamt1
ADD SCOPE typtab1
To modify the default value of a column of an existing table using the command
line, enter:
ALTER TABLE <table_name>
ALTER COLUMN <column_name>
SET DEFAULT 'new_default_value'
For example, to change the default value for a column, use something similar to
the following:
ALTER TABLE EMPLOYEE
ALTER COLUMN WORKDEPT
SET DEFAULT '123'
Related tasks:
v “Modifying an identity column definition” on page 308
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Procedure:
If the table being modified participates in referential constraints with other
tables, there are considerations when deleting rows. If the identified table or the
base table of the identified view is a parent, the rows selected for delete must not
have any dependents in a relationship with a delete rule of RESTRICT. Further, the
DELETE must not cascade to descendent rows that have dependents in a
relationship with a delete rule of RESTRICT.
If the delete operation is not prevented by a RESTRICT delete rule, the selected
rows are deleted.
For example, to delete the department (DEPTNO) “D11” from the table
(DEPARTMENT), use:
DELETE FROM department WHERE deptno='D11'
If an error occurs during the running of a multiple row DELETE, no changes are
made to the table. If an error occurs that prevents deleting all rows matching the
search condition and all operations required by existing referential constraints, no
changes are made to the tables.
Unless appropriate locks already exist, one or more exclusive locks are acquired
during the running of a successful DELETE statement. Locks are released following
a COMMIT or ROLLBACK statement. Locks can prevent other applications from
performing operations on the table.
Related concepts:
v “Factors that affect locking” in Performance Guide
v “Preventing lock-related performance issues” in Performance Guide
v “Locks and concurrency control” in Performance Guide
v “Lock granularity” in Performance Guide
Related tasks:
v “Defining a generated column on a new table” on page 219
v “Defining an identity column on a new table” on page 220
If you recreate a table and then perform an import or load operation, and the
table has an IDENTITY column, the column is reset and starts generating IDENTITY
values from 1 again after the contents of the table are recreated. When inserting
new rows into this recreated table, you do not want the IDENTITY column to begin
from 1 again, because you do not want duplicate values in the IDENTITY column.
To prevent this from occurring, you should:
1. Recreate the table.
2. Load data into the table using the MODIFIED BY IDENTITYOVERRIDE clause.
The data is loaded into the table but no identity values are generated for the
rows.
3. Run a query to get the last counter value for the IDENTITY column:
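A sketch of this query and the follow-on restart, assuming the identity column is named IDCOL (hypothetical):
SELECT MAX(IDCOL) FROM <table_name>
4. Alter the identity column to restart just above the value returned:
ALTER TABLE <table_name>
ALTER COLUMN IDCOL
RESTART WITH <max_value + 1>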
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “MAX aggregate function” in SQL Reference, Volume 1
v “LOAD command” in Command Reference
1. Expand the object tree until you see the Tables folder.
2. Right-click the table you want to modify, and select Alter from the pop-up menu.
3. On the Keys page, select one or more columns as primary keys.
4. Optional: Enter the constraint name of the primary key.
Related tasks:
v “Adding foreign keys” on page 310
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “SET INTEGRITY statement” in SQL Reference, Volume 2
Prerequisites:
To alter a table with a primary key, you must have at least one of the following
privileges on the table to be altered:
v ALTER privilege
v CONTROL privilege
v SYSADM or DBADM authority
v ALTERIN privilege on the schema of the table
Procedure:
1. Open the Change Primary Key window: From the Control Center, expand the object
tree until you find the Tables folder. Click the Tables folder. Any existing tables are
displayed in the pane on the right side of the window. Right-click the table that you
want and select Alter from the pop-up menu. The Alter Table notebook opens. On the
Keys page, select a primary key in the table and click Change. The Change Primary
Key window opens.
2. Select the column or columns that you want to define as primary key columns. You can
define up to 16 columns to be primary key columns.
3. Optional: Type the constraint name of the primary key.
To change primary keys using the command line, use the ALTER TABLE
statement.
Related tasks:
v “Adding primary keys” on page 309
v “Dropping primary keys” on page 316
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Procedure:
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to modify, and select Alter from the pop-up menu.
3. On the Keys page, click Add.
4. On the Add Foreign Keys window, specify the parent table information.
5. Select one or more columns to be foreign keys.
6. Specify what action is to take place on the dependent table when a row of the parent
table is deleted or updated. You can also add a constraint name for the foreign key.
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related tasks:
v “Adding primary keys” on page 309
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Prerequisites:
To alter a table with a foreign key, you must have at least one of the following
privileges on the table to be altered:
v ALTER privilege
v CONTROL privilege
v SYSADM or DBADM authority
v ALTERIN privilege on the schema of the table
Procedure:
1. Open the Alter Table notebook if you are changing a foreign key on a table: From the
Control Center, expand the object tree until you find the Tables folder. Click the Tables
folder. Any existing tables are displayed in the pane on the right side of the window.
Right-click the table you want in the contents pane and select Alter from the pop-up
menu. The Alter Table notebook opens.
If you are altering a foreign key on a nickname, open the Alter Nickname notebook.
2. On the Keys page, select a foreign key and click Change. The Change Foreign Key
window opens.
3. Optional: Select a different parent table or nickname.
4. Specify the schema and name of the new parent table or nickname.
5. Optional: Select a new foreign key.
6. Optional: Change the action specified for "on delete" and "on update".
7. Optional: Change the name of the constraint.
To change foreign keys using the command line, use the ALTER TABLE statement.
Related concepts:
v “Foreign keys in a referential constraint” on page 226
Related tasks:
v “Adding foreign keys” on page 310
v “Dropping foreign keys” on page 316
Procedure:
1. Open the Add Unique Key window: From the Control Center, expand the object tree
until you find the Tables folder. Click the Tables folder. Any existing tables are
displayed in the pane on the right side of the window (the contents pane). Right-click
the table you want in the contents pane and select Alter from the pop-up menu. The
Alter Table notebook opens. If you are adding a unique key to a nickname, open the
Alter Nickname notebook. On the Keys page, click Add. The Add Unique Key window
opens.
2. Select the column or columns that you want to define or change as unique key
columns.
3. Optional: Type the constraint name of the unique key.
To add unique keys using the command line, use the ALTER TABLE statement.
Related tasks:
v “Changing unique keys” on page 313
Procedure:
1. Open the Alter Table notebook if you are changing a unique key on a table: From the
Control Center, expand the object tree until you find the Tables folder. Click the Tables
folder. Any existing tables are displayed in the pane on the right side of the window.
Right-click the table you want in the contents pane and select Alter from the pop-up
menu. The Alter Table notebook opens.
If you are changing a unique key on a nickname, open the Alter Nickname notebook.
2. On the Keys page, select a unique key from the table and click Change. The Change
Unique Key window opens.
3. Select the column or columns that you want to define as unique key columns.
4. Optional: Type the constraint name of the unique key.
To change unique keys using the command line, use the ALTER TABLE statement.
Related tasks:
v “Adding unique keys” on page 312
Procedure:
1. Open the Alter Table notebook: From the Control Center, expand the object tree until
you find the Tables folder. Click the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table you want and select
Alter from the pop-up menu. The Alter Table notebook opens.
2. On the Check Constraints page, click Add.
3. On the Add Check Constraint window, complete the necessary information.
To add a unique constraint using the command line, use the ADD CONSTRAINT option
of the ALTER TABLE statement. For example, the following SQL statement adds a
unique constraint to the EMPLOYEE table that represents a new way to uniquely
identify employees in the table:
ALTER TABLE EMPLOYEE
ADD CONSTRAINT NEWID UNIQUE(EMPNO,HIREDATE)
Related tasks:
v “Defining a unique constraint on a table” on page 223
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
When a table check constraint is added, packages and cached dynamic SQL that
insert or update the table might be marked as invalid.
Procedure:
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to modify, and select Alter from the pop-up menu.
3. On the Constraints page, click Add.
4. On the Add Check Constraint window, complete the information.
The following SQL statement adds a constraint to the EMPLOYEE table that the
salary plus commission of each employee must be more than $25,000:
ALTER TABLE EMPLOYEE
ADD CONSTRAINT REVENUE CHECK (SALARY + COMM > 25000)
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “SET INTEGRITY statement” in SQL Reference, Volume 2
Prerequisites:
To change check constraints, you must have at least one of the following privileges
on the table to be altered:
v ALTER privilege
v CONTROL privilege
Note: To change the definition of an existing column (in a database that is Version
8.2 or greater), you must have DBADM authority.
Procedure:
1. Open the Change Check Constraint window: From the Control Center, expand the
object tree until you find the Tables folder. Click the Tables folder. Right-click the
table that you want and select Alter from the pop-up menu. The Alter Table notebook
opens. On the Constraints page, select a constraint in the table and click Change. The
Change Check Constraint window opens.
2. Specify the check condition for the constraint that you are changing.
3. Optional: Type a name for the check constraint.
4. Optional: Type a comment to document the check constraint.
To change check constraints using the command line, use the ALTER TABLE
statement.
Related tasks:
v “Adding check constraints” on page 229
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Dropping this unique constraint invalidates any packages or cached dynamic SQL
that used the constraint.
Procedure:
1. Open the Alter Table notebook: From the Control Center, expand the object tree until
you find the Tables folder. Click the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table you want and select
Alter from the pop-up menu. The Alter Table notebook opens.
2. On the Check Constraints page, select the unique constraints that you want to drop,
and select Drop.
To drop a unique constraint using the command line, use the ALTER TABLE
statement. The following SQL statement drops the unique constraint NEWID from
the EMPLOYEE table:
ALTER TABLE EMPLOYEE
DROP UNIQUE NEWID
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to modify, and select Alter from the pop-up menu.
3. On the Keys page, select the primary keys to drop.
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related tasks:
v “Dropping foreign keys” on page 316
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to modify, and select Alter from the pop-up menu.
3. On the Keys page, select the foreign keys at the right to drop.
4. Click Remove.
The following examples use the DROP PRIMARY KEY and DROP FOREIGN KEY
clauses in the ALTER TABLE statement to drop primary keys and foreign keys on
a table:
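For example (the foreign key constraint name is hypothetical):
ALTER TABLE EMPLOYEE
DROP PRIMARY KEY
ALTER TABLE EMPLOYEE
DROP FOREIGN KEY DEPT_FK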
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related tasks:
v “Dropping primary keys” on page 316
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
You can explicitly drop or change a table check constraint using the ALTER TABLE
statement, or implicitly drop it as the result of a DROP TABLE statement.
When you drop a table check constraint, all packages and cached dynamic SQL
statements with INSERT or UPDATE dependencies on the table are invalidated.
The name of all check constraints on a table can be found in the SYSCAT.CHECKS
catalog view. Before attempting to drop a table check constraint having a
system-generated name, look for the name in the SYSCAT.CHECKS catalog view.
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to modify, and select Alter from the pop-up menu.
3. On the Constraints page, select the check constraint to drop, and click Remove.
The following SQL statement drops the table check constraint REVENUE from the
EMPLOYEE table:
ALTER TABLE EMPLOYEE
DROP CHECK REVENUE
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related tasks:
v “Adding a table check constraint” on page 314
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
You can only change a distribution key on tables in a single database partition.
First drop the existing distribution key, and then create another.
Procedure:
1. Open the Alter Table notebook: From the Control Center, expand the object tree until
you find the Tables folder. Click the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table you want and select
Alter from the pop-up menu. The Alter Table notebook opens.
2. On the Keys page, select a distribution key in the table and click Change. The Change
Distribution Key window opens.
3. Select the columns that you want to add as distribution key columns and move them to
the Selected columns box.
To change distribution keys using the command line, use the DROP
DISTRIBUTION option of the ALTER TABLE statement. For example, the
following SQL statement drops the distribution key MIX_INT from the MIXREC
table:
ALTER TABLE MIXREC
DROP DISTRIBUTION
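To define the new distribution key afterward, the ADD DISTRIBUTE BY HASH clause can be used; a sketch (the column choice is hypothetical):
ALTER TABLE MIXREC
ADD DISTRIBUTE BY HASH (MIX_INT)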
You cannot change the distribution key of a table spanning multiple database
partitions. If you try to drop it, an error is returned.
Neither of these methods is practical for large databases; it is therefore essential
that you define the appropriate distribution key before implementing the design of
large databases.
Related concepts:
v “Distribution keys” in Administration Guide: Planning
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Modify the attributes of an existing identity column with the ALTER TABLE
statement.
There are several ways to modify an identity column so that it has some of the
characteristics of sequences.
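For example, a sketch that changes the increment of an identity column and restarts its counter (the table and column names are hypothetical):
ALTER TABLE ORDERS
ALTER COLUMN ORDER_ID
SET INCREMENT BY 10
RESTART WITH 1000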
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
Altering a sequence
Procedure:
Two options are available when altering a sequence that are not part of creating
the sequence (see the example following this list). They are:
v RESTART. Resets the sequence to the value specified implicitly or explicitly as
the starting value when the sequence was created.
v RESTART WITH <numeric-constant>. Resets the sequence to the exact numeric
constant value. The numeric constant can be any positive or negative value with
no non-zero digits to the right of any decimal point.
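For example, a sketch that resets a sequence (the sequence name is hypothetical):
ALTER SEQUENCE ORDER_SEQ RESTART WITH 100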
The data type of a sequence cannot be changed. Instead, you must drop the
current sequence and then create a new sequence specifying the new data type.
All cached sequence values not used by DB2 are lost when a sequence is altered.
Related tasks:
v “Dropping a sequence” on page 320
Related reference:
v “ALTER SEQUENCE statement” in SQL Reference, Volume 2
To drop a sequence, use the DROP SEQUENCE statement:
DROP SEQUENCE sequence_name
where sequence_name is the name of the sequence to be dropped and includes the
implicit or explicit schema name that exactly identifies an existing sequence.
Sequences that are system-created for IDENTITY columns cannot be dropped using
the DROP SEQUENCE statement.
Once a sequence is dropped, all privileges on the sequence are also dropped.
Related tasks:
v “Altering a sequence” on page 319
Related reference:
v “DROP statement” in SQL Reference, Volume 2
Procedure:
1. Open the Alter Table notebook: From the Control Center, expand the object tree until
you find the Tables folder. Click the Tables folder. Any existing tables are displayed in
the pane on the right side of the window. Right-click the table you want and select
Alter from the pop-up menu. The Alter Table notebook opens.
2. On the Columns page, select the columns that you want to drop and click Remove. If
you change your mind before clicking OK, you can click Undo Remove.
To drop columns using the command line, use the DROP COLUMN clause of the
ALTER TABLE statement. For example:
ALTER TABLE <table_name>
DROP COLUMN <column_name>
Related tasks:
v “Adding columns to an existing table” on page 304
Prerequisites:
Generated columns can only be defined on data types for which an equal
comparison is defined. The data types excluded from use in generated columns are:
structured types, LOBs, CLOBs, DBCLOBs, LONG VARCHAR, LONG
VARGRAPHIC, and user-defined types defined using these excluded data
types.
Restrictions:
The db2look utility will not see the check constraints generated by a generated
column.
When using replication, the target table must not use generated columns in its
mapping. There are two choices when replicating:
v The target table must define the generated column as a normal column; that is,
not a generated column
v The target table must omit the generated column in the mapping
Procedure:
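The typical sequence is to place the table in set integrity pending state, add the generated column, and then use the SET INTEGRITY statement to compute its values. A minimal sketch (the table and column names are hypothetical):
SET INTEGRITY FOR t1 OFF
ALTER TABLE t1 ADD COLUMN c3 DOUBLE GENERATED ALWAYS AS (c1 + c2)
SET INTEGRITY FOR t1 IMMEDIATE CHECKED FORCE GENERATED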
If this SET INTEGRITY statement fails because of a lack of log space, increase the
available active log space and reissue the SET INTEGRITY statement.
The values for generated columns can also simply be checked by applying the
expression as if it were an equality check constraint:
SET INTEGRITY FOR t1 IMMEDIATE CHECKED
If values have been placed in a generated column (using LOAD, for example), and
you know that the values match the generated expression, then the table can be
taken out of the set integrity pending state without checking or assigning the
values:
SET INTEGRITY FOR t1 GENERATED COLUMN IMMEDIATE UNCHECKED
Related tasks:
v “Defining a generated column on a new table” on page 219
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “COMMIT statement” in SQL Reference, Volume 2
v “LOCK TABLE statement” in SQL Reference, Volume 2
v “SET INTEGRITY statement” in SQL Reference, Volume 2
v “UPDATE statement” in SQL Reference, Volume 2
v “db2look - DB2 statistics and DDL extraction tool command” in Command
Reference
v “Restrictions on native XML data store” in XML Guide
Procedure:
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to modify, and select Alter from the pop-up menu.
3. On the Table page, select the Cardinality varies significantly at run time check box,
and click OK.
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
You might find that you need to change a table in one or more of the following
ways:
v Rename columns
v Remove columns
v Alter column type and transform existing data using SQL scalar functions
v Increase or decrease column size
v Change column default value
v Change column from NOT NULL to NULLABLE
v Change precision and scale for decimal
When making these types of changes, you need to minimize the risk of losing the
original table data. The DB2 database manager provides a user interface and a
stored procedure that allow you to alter a table. The original table and its
associated data are not dropped until you explicitly indicate that all of the alter
table work has been completed.
Each stored procedure call that is invoked from the user interface carries out a
sequence of actions such as dropping, recreating, and loading data to accomplish
the actions listed above.
There are limitations on what can be altered in the table.
Several modes make up the available options when using the ALTOBJ stored
procedure that carries out the ALTER TABLE actions. These modes include:
v ALTOBJ('GENERATE', '<sql statement>', 0, ?)
This procedure generates all of the SQL statements and places them into a
metadata table.
Note: In generate mode, the SQL statement parameter cannot be null; and, if an
alter ID is provided, it is ignored.
v ALTOBJ('VALIDATE', NULL, 123, ?)
This procedure verifies the SQL generated but does not include the movement of
data. The running of the scripts to test validity takes place under the given user
ID “123”. The results of the verification are placed in the Meta table (which also
holds the other information from the table being altered).
v ALTOBJ('APPLY_CONTINUE_ON_ERROR', NULL, 123, ?)
This procedure runs all of the SQL statements under the given ID, and writes
the results into the Meta table. The SQL statements would include how to build
the new table, the building of any dependent objects, and the populating of the
new table.
You can get the old definitions back by calling the procedure in UNDO mode with
the same alter ID.
A warning SQLCODE is set for the stored procedure in the SQLCA; and the
transactions in the stored procedure are finished.
v ALTOBJ('APPLY_STOP_ON_ERROR', NULL, 123, ?)
This procedure runs each of the SQL statements one-by-one under the given ID,
and stops when any errors are encountered.
An error SQLCODE is set for the stored procedure in the SQLCA; and the
transactions in the stored procedure are automatically rolled back.
Note: This mode can only be called separately from all other modes.
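For example, a sketch of a call that applies an alteration in one step; the CREATE TABLE statement describes the desired new layout of the table (the table name and columns are hypothetical):
CALL SYSPROC.ALTOBJ('APPLY_CONTINUE_ON_ERROR',
'CREATE TABLE T1 (C1 BIGINT, C2 VARCHAR(100))', 0, ?)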
Related concepts:
v “Using the ALTER TABLE statement to alter columns of a table” on page 300
Related tasks:
v “Altering a table” on page 297
Related reference:
v “Supported functions and administrative SQL routines and views” in SQL
Reference, Volume 1
v “ALTOBJ procedure” in Administrative SQL Routines and Views
Modifying indexes
This section describes how to modify indexes.
Prerequisites:
Restrictions:
The existing table or index to be renamed must not be the name of a catalog table
or index, a summary table or index, a typed table, a declared global temporary
table, a nickname, or an object other than a table, a view, or an alias.
Also, there must be no check constraints within the table, nor any generated
columns other than the identity column. Any packages or cached dynamic SQL
statements that refer to the renamed object are invalidated. You should consider
checking the appropriate system catalog tables to ensure that the table or index
being renamed is not affected by any of these restrictions.
Procedure:
1. Expand the object tree until you see the Tables or Views folder.
2. Right-click on the table or view you want to rename, and select Rename from the
pop-up menu.
3. Type the new table or view name, and click OK.
The SQL statement below renames the EMPLOYEE table within the COMPANY
schema to EMPL:
RENAME TABLE COMPANY.EMPLOYEE TO EMPL
The SQL statement below renames the EMPIND index within the COMPANY
schema to MSTRIND:
RENAME INDEX COMPANY.EMPIND TO MSTRIND
Packages are invalidated and must be rebound if they refer to a table or index that
has just been renamed. The packages are implicitly rebound regardless of whether
another index exists with the same name. Unless a better choice exists, the package
will use the same index it had before, under its new name.
Related reference:
v “RENAME statement” in SQL Reference, Volume 2
You cannot change any clause of an index definition, index extension, or index
specification; you must drop the index or index extension and create it again.
(Dropping an index or an index specification does not cause any other objects to be
dropped but might cause some packages to be invalidated.)
The name of the index extension must identify an index extension described in the
catalog. The RESTRICT clause enforces the rule that no index can be defined that
depends on the index extension definition. If an underlying index depends on this
index extension, then the drop fails.
Procedure:
1. Expand the object tree until you see the Indexes folder.
2. Right-click on the index you want to drop, and select Drop from the pop-up menu.
3. Check the Confirmation box, and click OK.
The following SQL statement drops the index extension called IX_MAP:
DROP INDEX EXTENSION ix_map RESTRICT
Any packages and cached dynamic SQL and XQuery statements that depend on
the dropped indexes are marked invalid. The application program is not affected
by changes resulting from adding or dropping indexes.
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
Modifying triggers
This section describes how to modify triggers.
For example, you could use the following SQL statements to create a view:
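The CREATE VIEW statement itself is not reproduced here; a reconstruction consistent with the triggers that follow (joining EMPLOYEE and DEPARTMENT and exposing DEPTNAME in place of WORKDEPT) might look like this:
CREATE VIEW EMPV (EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO, HIREDATE, DEPTNAME)
AS SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO, HIREDATE, DEPTNAME
FROM EMPLOYEE, DEPARTMENT
WHERE EMPLOYEE.WORKDEPT = DEPARTMENT.DEPTNO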
Due to the join in the EMPV view’s body, the view cannot be used to update data
in the underlying tables until the following triggers are added:
CREATE TRIGGER EMPV_INSERT INSTEAD OF INSERT ON EMPV
REFERENCING NEW AS NEWEMP DEFAULTS NULL FOR EACH ROW
INSERT INTO EMPLOYEE (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT,
PHONENO, HIREDATE)
VALUES(NEWEMP.EMPNO, NEWEMP.FIRSTNME, NEWEMP.MIDINIT, NEWEMP.LASTNAME,
COALESCE((SELECT DEPTNO FROM DEPARTMENT AS D
WHERE D.DEPTNAME = NEWEMP.DEPTNAME),
RAISE_ERROR('70001', 'Unknown department name')),
NEWEMP.PHONENO, NEWEMP.HIREDATE)
This CREATE TRIGGER statement will allow INSERT requests against EMPV view
to be carried out.
CREATE TRIGGER EMPV_DELETE INSTEAD OF DELETE ON EMPV
REFERENCING OLD AS OLDEMP FOR EACH ROW
DELETE FROM EMPLOYEE AS E WHERE E.EMPNO = OLDEMP.EMPNO
This CREATE TRIGGER statement will allow DELETE requests against EMPV
view to be carried out.
CREATE TRIGGER EMPV_UPDATE INSTEAD OF UPDATE ON EMPV
REFERENCING NEW AS NEWEMP
OLD AS OLDEMP
DEFAULTS NULL FOR EACH ROW
BEGIN ATOMIC
VALUES(CASE WHEN NEWEMP.EMPNO = OLDEMP.EMPNO THEN 0
ELSE RAISE_ERROR('70002', 'Must not change EMPNO') END);
UPDATE EMPLOYEE AS E
SET (FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE) =
(NEWEMP.FIRSTNME, NEWEMP.MIDINIT, NEWEMP.LASTNAME,
COALESCE((SELECT DEPTNO FROM DEPARTMENT AS D
WHERE D.DEPTNAME = NEWEMP.DEPTNAME),
RAISE_ERROR('70001', 'Unknown department name')),
NEWEMP.PHONENO, NEWEMP.HIREDATE)
WHERE NEWEMP.EMPNO = E.EMPNO;
END
This CREATE TRIGGER statement will allow UPDATE requests against EMPV
view to be carried out.
Related tasks:
v “Creating triggers” on page 240
Related reference:
v “CREATE TRIGGER statement” in SQL Reference, Volume 2
Dropping a trigger
A trigger object can be dropped using the DROP statement, but doing so causes
dependent packages to be marked invalid. For example:
v If an update trigger without an explicit column list is dropped, then packages
with an update usage on the target table are invalidated.
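For example, to drop the insert trigger created in the previous topic:
DROP TRIGGER EMPV_INSERT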
Related tasks:
v “Creating triggers” on page 240
Related reference:
v “DROP statement” in SQL Reference, Volume 2
Prerequisites:
When altering the view, the scope must be added to an existing reference type
column that does not already have a scope defined. Further, the column must not
be inherited from a superview.
Restrictions:
Changes you make to the underlying content of a view require that you use
triggers. Other changes to a view require that you drop and then re-create the
view.
Procedure:
The data type of the column-name in the ALTER VIEW statement must be REF
(type of the typed table name or typed view name). You can also modify the
contents of a view through INSTEAD OF triggers.
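A sketch of adding a scope to a reference column of a view (the names are hypothetical, mirroring the earlier table example):
ALTER VIEW v1
ALTER COLUMN colnamv1
ADD SCOPE typtab1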
Other database objects such as tables and indexes are not affected although
packages and cached dynamic statements are marked invalid.
1. Expand the object tree until you see the Views folder.
2. Right-click on the view you want to modify, and select Alter from the pop-up menu.
3. In the Alter View window, enter or modify a comment, and click OK.
1. Expand the object tree until you see the Views folder.
2. Right-click on the view you want to drop, and select Drop from the pop-up menu.
3. Check the Confirmation box, and click OK.
Any views that are dependent on the view being dropped will be made
inoperative.
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related tasks:
v “Creating triggers” on page 240
v “Creating a view” on page 251
v “Recovering inoperative views” on page 331
Related reference:
v “ALTER VIEW statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
If you do not want to recover an inoperative view, you can explicitly drop it with
the DROP VIEW statement, or you can create a new view with the same name but
a different definition.
Related tasks:
v “Altering or dropping a view” on page 330
Related reference:
v “CREATE VIEW statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
v “GRANT (Table, View, or Nickname Privileges) statement” in SQL Reference,
Volume 2
v “SYSCAT.VIEWS catalog view” in SQL Reference, Volume 1
Dropping aliases
When you drop an alias, its description is deleted from the catalog, any packages
and cached dynamic queries that reference the alias are invalidated, and all views
and triggers dependent on the alias are marked inoperative.
Prerequisites:
To drop an alias, you must be defined to DB2 as the creator of the alias, or you
must have one of the following authorizations:
v SYSADM authority
v DBADM authority on the database in which the alias is stored
v The DROPIN privilege on the alias’s schema
Procedure:
1. Expand the object tree until you find the Alias folder below the database that contains
the alias that you want to drop. Click on the Alias folder. Any existing aliases are
displayed in the pane on the right side of the window. Right-click the alias that you
want to drop and select Drop from the pop-up menu. The Confirmation window
opens.
2. Confirm the drop request.
To drop aliases using the command line, use the DROP statement.
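For example, a minimal sketch (the alias name is hypothetical):
DROP ALIAS SALES_ALIAS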
Related reference:
v “DROP statement” in SQL Reference, Volume 2
After creating a structured type, you might find that you need to add or drop
attributes associated with that structured type. This is done using the ALTER TYPE
(Structured) statement.
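For example, a sketch that adds an attribute to a structured type (the type and attribute names are hypothetical):
ALTER TYPE PERSON_T
ADD ATTRIBUTE NICKNAME VARCHAR(30)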
Related concepts:
v “Structured type hierarchies” in Developing SQL and External Routines
v “User-defined structured types” in Developing SQL and External Routines
Related tasks:
v “Creating structured types” in Developing SQL and External Routines
Related reference:
v “ALTER TYPE (Structured) statement” in SQL Reference, Volume 2
Prerequisites:
Restrictions:
A UDF cannot be dropped if a view, trigger, table check constraint, or another UDF
is dependent on it. Functions implicitly generated by the CREATE DISTINCT TYPE
statement cannot be dropped. It is not possible to drop a function that is in either
the SYSIBM schema or the SYSFUN schema.
Procedure:
You can disable a function mapping with the mapping option DISABLE.
Packages that are marked inoperative are not implicitly rebound. The package
must either be rebound by using the BIND or REBIND command, or it must be
prepared again by using the PREP command. Dropping a UDF invalidates any
packages or cached dynamic SQL statements that used it.
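For example, a sketch of dropping a UDF (the function name and signature are hypothetical):
DROP FUNCTION BONUS_CALC(DECIMAL(9,2))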
Related reference:
v “DROP statement” in SQL Reference, Volume 2
v “BIND command” in Command Reference
v “PRECOMPILE command” in Command Reference
v “REBIND command” in Command Reference
Restrictions:
You cannot drop a default type mapping; you can only override it by creating
another type mapping.
The database manager attempts to drop all functions that are dependent on this
distinct type. If the UDF cannot be dropped, the UDT cannot be dropped. A UDF
cannot be dropped if a view, trigger, table check constraint, or another UDF is
dependent on it. Dropping a UDT invalidates any packages or cached dynamic
SQL statements that used it.
Note that only transforms defined by you or other application developers can be
dropped; built-in transforms and their associated group definitions cannot be
dropped.
Procedure:
If you have created a transform for a UDT and you are planning to drop the UDT,
consider whether it is also necessary to drop the transform. This is done through
the DROP TRANSFORM statement.
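For example, a sketch that drops the transforms for a distinct type before dropping the type itself (the type name is hypothetical):
DROP TRANSFORM ALL FOR MONEY_T
DROP DISTINCT TYPE MONEY_T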
Related concepts:
v “User-defined types (UDTs)” on page 246
Related tasks:
v “Creating a type mapping in a federated system” on page 248
v “Creating a user-defined distinct type” on page 247
Related reference:
v “DROP statement” in SQL Reference, Volume 2
Once a regular table has been altered to a materialized query table, the table is
placed in a set integrity pending state. When altering in this way, the fullselect
in the materialized query table definition must match the original table definition,
that is:
v The number of columns must be the same.
v The column names and positions must match.
v The data types must be identical.
If the materialized query table is defined on an original table, then the original
table cannot itself be altered into a materialized query table. If the original table
has triggers, check constraints, referential constraints, or a defined unique index,
then it cannot be altered into a materialized query table. If altering the table
properties to define a materialized query table, you are not allowed to alter the
table in any other way in the same ALTER TABLE statement.
When altering a regular table into a materialized query table, the fullselect of the
materialized query table definition cannot reference the original table directly or
indirectly through views, aliases, or materialized query tables.
Procedure:
The restrictions on the fullselect when altering the regular table to a materialized
query table are very much like the restrictions when creating a summary table
using the CREATE SUMMARY TABLE statement.
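For example, a sketch that alters an existing table into a deferred-refresh materialized query table; the names are hypothetical, the table's columns must match the fullselect, and the fullselect cannot reference the table itself:
ALTER TABLE DEPT_COUNTS
ADD MATERIALIZED QUERY
(SELECT WORKDEPT, COUNT(*) AS NO_OF_EMPLOYEES
FROM EMPLOYEE
GROUP BY WORKDEPT)
DATA INITIALLY DEFERRED REFRESH DEFERRED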
Related tasks:
v “Creating a materialized query table” on page 201
v “Dropping a materialized query or staging table” on page 365
v “Refreshing the data in a materialized query table” on page 336
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
You can refresh the data in one or more materialized query tables by using the
REFRESH TABLE statement. The statement can be embedded in an application
program, or issued dynamically. To use this statement, you must have either
SYSADM or DBADM authority, or CONTROL privilege on the table to be
refreshed.
The following example shows how to refresh the data in a materialized query
table:
REFRESH TABLE SUMTAB1
Related tasks:
v “Altering materialized query table properties” on page 335
v “Creating a materialized query table” on page 201
Related reference:
v “REFRESH TABLE statement” in SQL Reference, Volume 2
Prerequisites:
To alter a partitioned table to detach a data partition the user must have the
following authorities or privileges:
v The user performing the DETACH operation must have the authority needed to
ALTER, to SELECT from, and to DELETE from the source table.
v The user must also have the authority needed to create the target table.
Therefore, to alter a table to detach a data partition, the privileges held by the
authorization ID of the statement must include at least one of the following
authorities or privileges on the target table:
– SYSADM or DBADM authority
– CREATETAB authority on the database and USE privilege on the table spaces
used by the table as well as one of:
- IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
- CREATEIN privilege on the schema, if the schema name of the table refers
to an existing schema.
To alter a partitioned table to attach a data partition, the privileges held by the
authorization ID of the statement must include at least one of the following
authorities or privileges on the source table:
v SELECT privilege on the source table and DROPIN privilege on the schema of
the source table
v CONTROL privilege on the source table
v SYSADM or DBADM authority
To alter a partitioned table to add a data partition, the privileges held by the
authorization ID of the statement must have privileges to use the table space
where the new partition is added, and include at least one of the following
authorities or privileges on the source table:
v ALTER privilege
v CONTROL privilege
v SYSADM
v DBADM
v ALTERIN privilege on the table schema
Usage guidelines:
v Each ALTER TABLE operation that uses the PARTITION clause must be issued in a
separate SQL statement.
v No other ALTER operations are permitted in an SQL statement containing an
ALTER TABLE...PARTITION operation. For example, you cannot attach a data
partition and add a column to the table in a single SQL statement.
v Multiple ALTER statements can be executed, followed by a single SET
INTEGRITY statement.
Procedure:
You can alter a table from the DB2 Control Center or from the DB2 command line
processor (CLP).
1. Expand the Table folder. The table objects are displayed in the contents pane of the DB2
Control Center window.
2. Right-click the table that you want to alter and select Open Data Partitions from the list
of actions.
3. In the Open Data Partitions window select the button associated with your task. If you
are adding, the Add Data Partition window opens. If you are attaching, the Attach Data
Partition window opens. If you are detaching, the Detach Data Partition window opens.
4. Specify the required fields.
To use the DB2 command line to alter a partitioned table, issue the ALTER TABLE
statement.
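For example, a sketch that adds a new empty range to the stock table used in the examples that follow (the partition name and dates are hypothetical):
ALTER TABLE stock ADD PARTITION jan04
STARTING '01/01/2004' ENDING '01/31/2004'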
Related concepts:
v “Attributes of detached data partitions” on page 354
v “Data partitions” in Administration Guide: Planning
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering a table” on page 297
v “Dropping a data partition” on page 358
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “Command Line Processor (CLP) samples” in Samples Topics
Adding a column:
When adding a column to a table with attached data partitions, the column is also
added to the attached data partitions. When adding a column to a table with
detached data partitions, the column is not added to the detached data partitions,
because the detached data partitions are no longer physically associated with the
table.
Altering a column:
When altering a column in a table with attached data partitions, the column will
also be altered on the attached data partitions. When altering a column in a table
with detached data partitions, the column is not altered on the detached data
partitions, because the detached data partitions are no longer physically associated
with the table.
The following table attributes are also stored in a data partition. Changes to these
attributes are reflected on the attached data partitions, but not on the detached
data partitions.
v DATA CAPTURE
v VALUE COMPRESSION
v APPEND
v COMPACT/LOGGED FOR LOB COLUMNS
v ACTIVATE NOT LOGGED INITIALLY (WITH EMPTY TABLE)
Related concepts:
v “Attributes of detached data partitions” on page 354
v “Understanding clustering index behavior on partitioned tables” in Performance
Guide
v “Data partitions” in Administration Guide: Planning
v “Understanding index behavior on partitioned tables” in Performance Guide
v “Large object behavior in partitioned tables” in SQL Reference, Volume 1
v “Partitioned materialized query table behavior” on page 206
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Altering a table” on page 297
v “Dropping a data partition” on page 358
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
v “Rotating data in a partitioned table” on page 339
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “SET INTEGRITY statement” in SQL Reference, Volume 2
To detach a data partition from a partitioned table the user must have the
following authorities or privileges:
v The user performing the DETACH operation must have the authority needed to
ALTER, to SELECT from, and to DELETE from the source table.
v The user must also have the authority needed to CREATE the target table.
Therefore, to alter a table to detach a data partition, the privileges held by the
authorization ID of the statement must include at least one of the following
authorities or privileges on the target table:
– SYSADM or DBADM authority
– CREATETAB authority on the database and USE privilege on the table spaces
used by the table as well as one of:
- IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
- CREATEIN privilege on the schema, if the schema name of the table refers
to an existing schema.
To alter a table to attach a data partition, the user must have the following
authorities or privileges:
v The user performing the attach must have the authority needed to ALTER and
to INSERT into the target table
v The user must also be able to SELECT from and to DROP the source table.
Therefore, to alter a table to attach a data partition, the privileges held by the
authorization ID of the statement must include at least one of the following on
the source table:
– SELECT privilege on the source table and DROPIN privilege on the schema
of the source table
– CONTROL privilege on the source table
– SYSADM or DBADM authority.
Procedure:
You can rotate data in a partitioned table from the DB2 Control Center or from the
DB2 command line processor (CLP).
1. Expand the Table folder. The table objects are displayed in the contents pane of the DB2
Control Center window.
2. Right-click the table that you want to alter, and select Open Data Partitions from the list
of actions.
3. In the Open Data Partitions window, click the button associated with your task. If you
are attaching, the Attach Data Partition window opens. If you are detaching, the Detach
Data Partition window opens.
4. Specify the required fields.
To use the DB2 command line to rotate data in a partitioned table, issue the
ALTER TABLE statement.
Example:
Note: If there are detached dependents, then you must run the SET INTEGRITY
statement on the detached dependents before you can load the detached table.
3. If desired, perform data cleansing. Data cleansing activities include:
v Filling in missing values
v Deleting inconsistent and incomplete data
v Removing redundant data arriving from multiple sources
v Transforming data
– Normalization (Data from different sources that represents the same value
in different ways must be reconciled as part of rolling the data into the
warehouse.)
– Aggregation (Raw data that is too detailed to store in the warehouse must
be pre-aggregated during roll-in.)
4. Attach the new data as a new range.
ALTER TABLE stock ATTACH PARTITION dec03
STARTING '12/01/2003' ENDING '12/31/2003'
FROM newtable;
Attaching a data partition drains queries and invalidates packages.
5. Use the SET INTEGRITY statement to update indexes and other dependent
objects. Read and write access is permitted during the execution of the SET
INTEGRITY statement.
SET INTEGRITY FOR stock ALLOW WRITE ACCESS
IMMEDIATE CHECKED FOR EXCEPTION IN stock USE stock_ex;
Related concepts:
v “Attributes of detached data partitions” on page 354
v “Data partitions” in Administration Guide: Planning
v “Partitioned materialized query table behavior” on page 206
v “Optimization strategies for partitioned tables” in Performance Guide
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Altering a table” on page 297
v “Dropping a data partition” on page 358
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “ALTER TABLE statement” in SQL Reference, Volume 2
To accelerate the DETACH operation, index cleanup on the source table is done
automatically through a background asynchronous index cleanup process. If there
are no detached dependents defined on the source table, there is no need to issue
the SET INTEGRITY statement to complete the DETACH operation.
Instead of dropping the table as described in the previous example, it is also
possible to attach the table to another table, or to truncate it and use it as a table
into which to load new data before reattaching it. You can perform these operations
immediately, even before the asynchronous index cleanup has completed, except
where the stock table has detached dependents.
Rolling in data:
The following example illustrates the steps to load data into a non-partitioned
table and then add that data partition to the rest of the table.
The ALTER TABLE ... ADD operation drains queries running against the stock
table and invalidates packages. That is, existing queries complete normally before
the ADD operation continues. Once the ADD operation is issued, any new queries
accessing the stock table block on a lock.
Use the SET INTEGRITY statement to validate constraints and refresh dependent
materialized query tables (MQTs):
SET INTEGRITY FOR stock ALLOW READ
ACCESS IMMEDIATE CHECKED FOR EXCEPTION IN stock USE stock_ex;
COMMIT WORK;
Data cleansing might be required before the data is rolled in and attached. Data
cleansing activities include:
v Filling in missing values
v Deleting inconsistent and incomplete data
v Removing redundant data arriving from multiple sources
v Transforming data
– Normalization (Data from different sources that represents the same values in
different ways must be reconciled as part of rolling the data into the
warehouse.)
– Aggregation (Raw data that is too detailed to store in the warehouse must be
pre-aggregated during roll-in.)
During an ATTACH operation, one or both of the STARTING and ENDING clauses
must be supplied and the lower bound (STARTING) must be less than or equal to
the upper bound (ENDING). In addition, the newly attached data partition must
not overlap with an existing data partition range in the target table. If the highest
range has been defined as MAXVALUE, then any attempt to attach a new high
range fails because it overlaps the existing high range. This restriction also applies
to MINVALUE. You cannot add or attach a new data partition in the middle unless
it falls in an existing gap in the ranges. Boundaries not specified by the user are
determined when the table is created.
The ALTER TABLE ... ATTACH operation drains all queries and invalidates
packages dependent on the stock table. That is, existing queries complete normally
before the ATTACH operation continues. Once the ATTACH operation is issued,
any new queries accessing the stock table block on a lock. The stock table is
z-locked (completely inaccessible) during this transition. The data in the attached
data partition is not yet visible, because it has not yet been validated by the SET
INTEGRITY statement. Tip: Issue a COMMIT WORK statement immediately after
the ATTACH operation to make the table available for use.
COMMIT WORK;
The SET INTEGRITY statement is necessary to verify that the newly attached data
is in range. It also does any necessary maintenance of indexes and other dependent
objects such as MQTs. Until the SET INTEGRITY statement is committed, the new
data is not visible. The existing data in the stock table is fully accessible for both
reading and writing if online SET INTEGRITY is used. The default while SET
INTEGRITY is running is ALLOW NO ACCESS mode.
Figure 4. This figure demonstrates the stages of data availability during an ATTACH
operation: the ALTER operation requests a table Z-lock and completes once the lock
is granted and released; SET INTEGRITY then requests a partition Z-lock and catalog
locks, and the table returns to read/write access when SET INTEGRITY completes
and the locks are released.
Note: While SET INTEGRITY is running, you cannot execute DDL or utility type
operations on the table. The operations include but are not restricted to
LOAD, REORG, REDISTRIBUTE, ALTER TABLE (for example, add columns,
ADD, ATTACH, DETACH, TRUNCATE using ALTER to "not logged
initially"), and INDEX CREATE.
SET INTEGRITY FOR stock ALLOW WRITE ACCESS
IMMEDIATE CHECKED FOR EXCEPTION IN stock USE stock_ex;
The SET INTEGRITY statement validates the data in the newly attached data partition.
Next, commit the transaction to make the table available for use.
COMMIT WORK;
Any rows that are out of range, or violate other constraints, are moved to the
exception table stock_ex. You can query stock_ex to inspect the violating rows, and
possibly to clean them up and re-insert them into the table.
Related concepts:
v “Data partitions” in Administration Guide: Planning
v “Asynchronous index cleanup” in Performance Guide
v “Attributes of detached data partitions” on page 354
v “Optimization strategies for partitioned tables” in Performance Guide
v “Partitioned materialized query table behavior” on page 206
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering a table” on page 297
v “Altering partitioned tables” on page 336
v “Approaches to defining ranges on partitioned tables” on page 195
Related reference:
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
v “LOAD command” in Command Reference
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “SET INTEGRITY statement” in SQL Reference, Volume 2
v “Command Line Processor (CLP) samples” in Samples Topics
The ATTACH PARTITION clause takes an existing table (source table) and attaches
it as a new data partition to the target table. The newly attached data partition is
initially inaccessible to queries. The remainder of the table remains online. A call to
the SET INTEGRITY statement is required to bring the attached data partition
online.
Prerequisites:
To alter a table to attach a data partition, the privileges held by the authorization
ID of the statement must include at least one of the following authorities and
privileges on the source table:
v SELECT privilege on the source table and DROPIN privilege on the schema of
the source table
v CONTROL privilege on the source table
v SYSADM or DBADM authority.
The following conditions must be met before you can attach a data partition:
v The table to which you want to attach the new data partition (that is, the target
table) must be an existing partitioned table.
v The source table must be an existing non-partitioned table or a partitioned table
with only a single data partition, and with no ATTACHED or DETACHED data
partitions. To attach multiple data partitions, it is necessary to issue multiple
ATTACH statements.
v The source table cannot be hierarchical (typed table).
v The source table cannot be a range-clustered table (RCT).
v The table definition for a source table must match the target table.
v The number, type, and ordering of columns must match for the source and
target tables.
Procedure:
You can alter a table from the DB2 Control Center or the DB2 command line
processor (CLP).
1. Expand the Table folder. The table objects are displayed in the contents pane of the DB2
Control Center window.
2. Right-click on the table that you want to modify and select Open Data Partitions from
the list of options.
3. In the Open Data Partitions window click the Attach button.
4. In the Attach Data Partition window specify the name, and boundary specifications of
the data partition to attach.
5. In the Open Data Partitions window click OK to modify the table.
To use the DB2 command line to alter a partitioned table and to attach a data
partition to the table, issue the ALTER TABLE statement with the ATTACH
PARTITION clause.
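For example (a sketch; the table names and date range are illustrative, and a
COMMIT WORK followed by SET INTEGRITY is still required to bring the new
data online):
ALTER TABLE stock ATTACH PARTITION dec03
STARTING '12/01/2003' ENDING '12/31/2003'
FROM newtable;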
Related concepts:
v “Data partitions” in Administration Guide: Planning
v “Partitioned tables” in Administration Guide: Planning
v “Attributes of detached data partitions” on page 354
v “Resolving a mismatch when trying to attach a data partition to a partitioned
table” on page 348
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering a table” on page 297
v “Altering partitioned tables” on page 336
v “Creating a new source table using db2look” on page 210
v “Detaching a data partition” on page 352
v “Dropping a data partition” on page 358
v “Rotating data in a partitioned table” on page 339
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “SYSCAT.COLUMNS catalog view” in SQL Reference, Volume 1
To help you prevent a mismatch from occurring, refer to the Restrictions and usage
guidelines section of Attaching a data partition. The section outlines conditions
that must be met before you can successfully attach a data partition. Failure to
meet the listed conditions returns error SQL20408N or SQL20307N.
The following sections describe the various types of mismatches that can occur and
provide the suggested steps to achieve agreement between the tables:
The APPEND mode of the tables does not match. (SQL20307N reason code 3):
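To align the APPEND mode, alter whichever table requires the change (a sketch):
ALTER TABLE ... APPEND ON
or
ALTER TABLE ... APPEND OFF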
The code pages of the source and target table do not match. (SQL20307N reason
code 4):
The source table is a partitioned table with more than one data partition or with
attached or detached data partitions. (SQL20307N reason code 5):
Detach data partitions from the source table until there is a single visible data
partition using the statement:
ALTER TABLE ... DETACH PARTITION
Include any necessary SET INTEGRITY statements. If the source table has indexes,
you might not be able to attach the source table immediately. Detached data
partitions remain detached until all indexes are cleaned up of detached keys. If
you want to perform an attach immediately, drop the index on the source table.
Otherwise, create a new source.
The target and source table are the same. (SQL20307N reason code 7):
You cannot attach a table to itself. Determine the correct table to use as the source
or target table.
The NOT LOGGED INITIALLY clause was specified for either the source table
or the target table, but not for both. (SQL20307N reason code 8):
Either make the table that is not logged initially be logged by issuing the COMMIT
statement, or make the table that is logged be not logged initially by entering the
statement:
ALTER TABLE ... ACTIVATE NOT LOGGED INITIALLY
The DATA CAPTURE CHANGES clause was specified for either the source
table or the target table, but not both. (SQL20307N reason code 9):
To enable data capture changes on the table that does not have data capture
changes turned on, run the following statement:
ALTER TABLE ... DATA CAPTURE CHANGES
To disable data capture changes on the table that does have data capture changes
turned on, run the statement:
ALTER TABLE ... DATA CAPTURE NONE
The distribution clauses of the tables do not match. The distribution key must
be the same for the source table and the target table. (SQL20307N reason code
10):
It is recommended that you create a new source table. You cannot change the
distribution key of a table spanning multiple database partitions. To change a
distribution key on tables in a single-partition database, run the following
statements:
ALTER TABLE ... DROP DISTRIBUTION;
ALTER TABLE ... ADD DISTRIBUTION(key-specification)
The data type of the columns (TYPENAME) does not match. (SQL20408N reason
code 1):
The nullability of the columns (NULLS) does not match. (SQL20408N reason
code 2):
Create a new source table. Implicit defaults must match exactly if both the target
table column and source table column have implicit defaults (if IMPLICITVALUE
is not NULL).
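For a nullability mismatch, an alternative to creating a new source table is to alter
the column in place (a sketch; see the ALTER TABLE statement for details):
ALTER TABLE ... ALTER COLUMN ... SET NOT NULL
or
ALTER TABLE ... ALTER COLUMN ... DROP NOT NULL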
To alter the system compression of the column issue one of the following
statements to correct the mismatch:
ALTER TABLE ... ALTER COLUMN ... COMPRESS SYSTEM DEFAULT
or
ALTER TABLE ... ALTER COLUMN ... COMPRESS OFF
Related concepts:
v “Data partitions” in Administration Guide: Planning
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Creating partitioned tables” on page 193
v “Approaches to migrating existing tables and views to partitioned tables” on
page 198
v “Attaching a data partition” on page 346
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “SYSCAT.COLUMNS catalog view” in SQL Reference, Volume 1
Rolling out partitioned table data allows you to easily separate ranges of data from
a partitioned table. Once a data partition is detached into a separate table, the table
can be handled in several ways. You can drop the separate table (whereby the data
from the data partition is destroyed); archive it or otherwise use it as a separate
table; attach it to another partitioned table, such as a history table; or you can
manipulate, cleanse, and transform the data and reattach it to the original or some
other partitioned table.
If the source table is a multidimensional clustered table (MDC), access to the newly
detached table is not allowed in the same unit of work as the ALTER TABLE
...DETACH operation. Block indexes are created upon first access to the table after
the ALTER TABLE ... DETACH operation is committed. Access to the table is
slower while the block indexes are being created.
Prerequisites:
To detach a data partition from a partitioned table you must have the following
authorities or privileges:
v The user performing the DETACH operation must have the authority needed to
ALTER, to SELECT from and to DELETE from the source table.
v The user must also have the authority needed to create the target table.
Therefore, to alter a table to detach a data partition, the privilege held by the
authorization ID of the statement must include at least one of the following
authorities or privileges on the target table:
– SYSADM or DBADM authority
– CREATETAB authority on the database and USE privilege on the table spaces
used by the table as well as one of:
- IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
- CREATEIN privilege on the schema, if the schema name of the table refers
to an existing schema.
Restrictions:
You must meet the following conditions before you can perform a DETACH
operation:
v The table to be detached from (source table) must exist and be a partitioned
table.
v The data partition to be detached must exist in the source table.
v The source table must have more than one data partition. A partitioned table
must have at least one data partition. Only visible and attached data partitions
pertain in this context. An attached data partition is a data partition that is
attached but not yet validated by the SET INTEGRITY statement.
v The name of the table to be created by the DETACH operation (target table)
must not exist.
v DETACH is not allowed on a table that is the parent of an enforced referential
integrity (RI) relationship.
v If there are any dependent tables that need to be incrementally maintained with
respect to the detached data partition (these dependent tables are referred to as
detached dependent tables), then the newly detached table is initially
inaccessible. The table will be marked with an L in the TYPE column of the
SYSCAT.TABLES catalog view. This is referred to as a detached table. This
prevents the table from being read, modified or dropped until the SET
INTEGRITY statement is run to incrementally maintain the detached dependent
tables. After the SET INTEGRITY statement is run on all detached dependent
tables, the detached table is transitioned to a regular table where it becomes
fully accessible.
Procedure:
You can alter a table from the DB2 Control Center or the DB2 command line
processor.
To use the DB2 Control Center to alter a partitioned table and to detach a data
partition from the table:
1. Expand the Table folder. The table objects are displayed in the contents pane of the
DB2 Control Center window.
2. Right-click the table that you want to modify and select Open Data Partitions from the
list of options.
3. In the Open Data Partitions window, select a data partition to detach.
4. Click the Detach button.
5. In the Detach Data Partition window, specify the table (schema and name) to create
upon detach.
6. In the Open Data Partitions window, click OK to modify the table.
To use the DB2 command line to alter a partitioned table and to detach a data
partition from the table, issue the ALTER TABLE statement with the DETACH
PARTITION clause.
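For example (a sketch; the partition and table names are illustrative):
ALTER TABLE stock DETACH PARTITION dec01 INTO stock_dec01;
COMMIT WORK;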
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Altering a table” on page 297
v “Dropping a data partition” on page 358
v “Attaching a data partition” on page 346
v “Rotating data in a partitioned table” on page 339
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
Note: If there are detached dependents then the detached data partition does not
become a stand-alone table at detach time. In this case, the SET INTEGRITY
statement must be issued to complete the detach and make the table
accessible.
Related concepts:
v “Data partitions” in Administration Guide: Planning
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Altering partitioned tables” on page 336
v “Altering a table” on page 297
v “Dropping a data partition” on page 358
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
v “Rotating data in a partitioned table” on page 339
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
v “Examples of rolling in and rolling out partitioned table data” on page 342
Procedure:
You can alter a table from the DB2 Control Center or the DB2 command line
processor (CLP).
To use the DB2 Control Center to alter a partitioned table and to add a new data
partition to the table:
1. Expand the Table folder. The table objects are displayed in the contents pane of the DB2
Control Center window.
2. Right-click the table that you want to modify and select Open Data Partitions from the
list of options.
3. In the Open Data Partitions window, select a data partition to add.
4. Click the Add button.
5. In the Add Data Partition window, specify the name, boundary specifications and
source table of the data partition.
6. In the Open Data Partitions window, click OK to modify the table.
To use the DB2 command line to alter a partitioned table and to add a new data
partition to the table, issue the ALTER TABLE statement with the ADD
PARTITION clause.
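For example (a sketch; the partition name and date range are illustrative):
ALTER TABLE stock ADD PARTITION jan04
STARTING '01/01/2004' ENDING '01/31/2004';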
Related concepts:
v “Attributes of detached data partitions” on page 354
v “Data partitions” in Administration Guide: Planning
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Approaches to migrating existing tables and views to partitioned tables” on
page 198
v “Rotating data in a partitioned table” on page 339
v “Altering partitioned tables” on page 336
v “Dropping a data partition” on page 358
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
Prerequisites:
To detach a data partition from a partitioned table the user must have the
following authorities or privileges:
v The user performing the DETACH must have the authority needed to ALTER, to
SELECT from and to DELETE from the source table.
v The user must also have the authority needed to CREATE the target table.
Therefore, in order to alter a table to detach a data partition, the privilege held
by the authorization ID of the statement must include at least one of the
following on the target table:
– SYSADM or DBADM authority
– CREATETAB authority on the database and USE privilege on the table spaces
used by the table as well as one of:
- IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
- CREATEIN privilege on the schema, if the schema name of the table refers
to an existing schema.
To drop a table the user must have the following authorities or privileges:
Note: The implication of the detach data partition case is that the authorization ID
of the statement is going to effectively issue a CREATE TABLE statement
and therefore must have the necessary privileges to perform that operation.
The table space is the one where the data partition that is being detached
already resides. The authorization ID of the ALTER TABLE statement
becomes the definer of the new table with CONTROL authority, as if the
user had issued the CREATE TABLE statement. No privileges from the table
being altered are transferred to the new table. Only the authorization ID of
the ALTER TABLE statement and DBADM or SYSADM have access to the
data immediately after the ALTER TABLE ... DETACH PARTITION
operation.
Procedure:
You can detach a data partition of a partitioned table from the DB2 Control Center
or from the DB2 command line processor (CLP).
To use the DB2 Control Center to detach a data partition of a partitioned table:
1. Expand the Tables folder. The table objects are displayed in the contents pane of the
DB2 Control Center window.
2. Right-click the table that contains the data partition you want to detach, and select
Open Data Partitions from the list of actions.
3. In the Open Data Partitions window, click the Detach button.
4. Specify the required fields.
To use the DB2 command line to detach a data partition of a partitioned table,
issue the ALTER TABLE statement with the DETACH PARTITION clause.
You can drop a table from the DB2 Control Center or from the DB2 command line
processor (CLP).
1. Expand the Tables folder. The table objects are displayed in the contents pane of the
DB2 Control Center window.
2. Right-click on the table you want to drop, and select Drop from the pop-up menu.
3. Verify your change in the Confirmation window.
To use the DB2 command line to drop a table, issue the DROP TABLE statement.
Example:
In this example, the dec01 data partition is detached from table stock and placed in
table junk. You can then drop table junk, effectively dropping the associated data
partition.
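A sketch of the statements for this example:
ALTER TABLE stock DETACH PARTITION dec01 INTO junk;
COMMIT WORK;
DROP TABLE junk;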
Note: To make the DETACH operation as fast as possible, index cleanup on the
source table is done automatically using a background asynchronous index
cleanup process. If there are detached dependents then the detached data
partition does not become a stand-alone table at detach time. In this case,
the SET INTEGRITY statement must be issued to complete the detach and
make the table accessible.
Related concepts:
v “Attributes of detached data partitions” on page 354
v “Data partitions” in Administration Guide: Planning
v “Partitioned tables” in Administration Guide: Planning
Related tasks:
v “Adding data partitions to partitioned tables” on page 356
v “Altering partitioned tables” on page 336
v “Attaching a data partition” on page 346
v “Detaching a data partition” on page 352
v “Rotating data in a partitioned table” on page 339
Related reference:
v “Examples of rolling in and rolling out partitioned table data” on page 342
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “Guidelines and restrictions on altering partitioned tables with attached or
detached data partitions” on page 338
Rows in the target table that match the source can be deleted or updated based on
specified directions from within the MERGE statement. Rows that do not exist in
the target table can be inserted.
Restrictions:
The authorization ID associated with the MERGE statement must have the
appropriate privileges to carry out any of the three possible actions: update, delete,
or insert on the table or underlying table of the view. The authorization ID should
also have the appropriate privileges on the table or underlying table of the view in
the subquery.
If an error occurs in the MERGE statement, the entire set of operations associated
with the MERGE is rolled back.
Procedure:
The modification operations and signal statements can be specified more than once
per MERGE statement. Each row in the target table or view can be operated on
only once within a single MERGE statement. This means that a row in the target
table or view can be identified as MATCHED only with one row in the result table
of the table reference.
Consider a situation where there are two tables: shipment and inventory. Using the
shipment table, merge rows into the inventory table. For rows that match, increase
the quantity in the inventory table by the quantity in the shipment table.
Otherwise, insert the new part number into the inventory table.
MERGE INTO inventory AS in
USING (SELECT partno, description, count FROM shipment
WHERE shipment.partno IS NOT NULL) AS sh
ON (in.partno = sh.partno)
WHEN MATCHED THEN
UPDATE SET
description = sh.description,
quantity = in.quantity + sh.count
WHEN NOT MATCHED THEN
INSERT
(partno, description, quantity)
VALUES (sh.partno, sh.description, sh.count)
Related reference:
v “MERGE statement” in SQL Reference, Volume 2
Procedure:
The following steps can help you recover an inoperative summary table:
1. Determine the SQL statement that was initially used to create the summary
table, for example from the TEXT column of the SYSCAT.VIEWS catalog view.
2. Drop the summary table with the DROP TABLE statement, and re-create it
with the original CREATE TABLE statement.
3. Use the GRANT statement to re-grant all privileges that were previously
granted on the summary table.
If you do not want to recover an inoperative summary table, you can explicitly
drop it with the DROP TABLE statement, or you can create a new summary table
with the same name but a different definition.
Related reference:
v “GRANT (Table, View, or Nickname Privileges) statement” in SQL Reference,
Volume 2
v “SYSCAT.VIEWS catalog view” in SQL Reference, Volume 1
v “CREATE TABLE statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
Related concepts:
v “Typed tables” in Developing SQL and External Routines
Related reference:
v “DELETE statement” in SQL Reference, Volume 2
v “UPDATE statement” in SQL Reference, Volume 2
Prerequisites:
To delete the contents of a staging table, you need the following authorities:
v SYSADM or DBADM authority
v CONTROL privileges on the staging table being pruned
1. Open the Set Integrity window: From the Control Center, expand the object tree until
you find the Tables folder. Click on the Tables folder. Any existing tables are displayed
in the pane on the right side of the window. Right-click the table you want and select
Set Integrity from the pop-up menu. The Set Integrity window opens.
2. Review the Current integrity status of the table you are working with.
3. If the table is in Set Integrity Pending state, select the Immediate and checked option
and the Prune check box in the Options group box to delete the contents of the staging table
and to propagate to the staging table.
4. If the table is not in Set Integrity Pending state, select the Prune radio button to delete
the contents of the staging table without propagating to the staging table.
Note: If you select the Immediate and checked radio button, the table will be brought
out of Set Integrity Pending state.
To delete the contents of a staging table using the command line, use the SET
INTEGRITY statement.
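For example (a sketch; the staging table name is illustrative):
SET INTEGRITY FOR salesstage PRUNE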
Related tasks:
v “Checking for constraint violations using SET INTEGRITY” on page 230
Related reference:
v “SET INTEGRITY statement” in SQL Reference, Volume 2
Dropping a table
A table can be dropped with a DROP TABLE SQL statement.
When a table is dropped, the row in the SYSCAT.TABLES catalog that contains
information about that table is dropped, and any other objects that depend on the
table are affected. For example:
v All column names are dropped.
v Indexes created on any columns of the table are dropped.
v All views based on the table are marked inoperative.
v All privileges on the dropped table and dependent views are implicitly revoked.
v All referential constraints in which the table is a parent or dependent are
dropped.
v All packages and cached dynamic SQL and XQuery statements dependent on
the dropped table are marked invalid, and remain so until the dependent objects
are re-created. This includes packages dependent on any supertable above the
subtable in the hierarchy that is being dropped.
v Any reference columns for which the dropped table is defined as the scope of
the reference become “unscoped”.
v An alias definition on the table is not affected, because an alias can be undefined.
v All triggers dependent on the dropped table are marked inoperative.
v All files that are linked through any DATALINK columns are unlinked. The
unlink operation is performed asynchronously which means the files might not
be immediately available for other operations.
Procedure:
1. Expand the object tree until you see the Tables folder.
2. Right-click on the table you want to drop, and select Drop from the pop-up menu.
3. Check the Confirmation box, and click OK.
An individual table cannot be dropped if it has a subtable. However, all the tables
in a table hierarchy can be dropped by a single DROP TABLE HIERARCHY
statement, as in the following example:
DROP TABLE HIERARCHY person
The DROP TABLE HIERARCHY statement must name the root table of the
hierarchy to be dropped.
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related tasks:
v “Dropping a user-defined temporary table” on page 364
v “Recovering inoperative views” on page 331
Related reference:
v “DROP statement” in SQL Reference, Volume 2
Prerequisites:
When dropping such a table, the table name must be qualified by the schema
name SESSION and must exist in the application that created the table.
Restrictions:
Procedure:
When a user-defined temporary table is dropped, and its creation preceded the
active unit of work or savepoint, then the table is functionally dropped and the
application is not able to access the table. However, the table still has some space
reserved in its table space and this prevents the user temporary table space from
being dropped until the unit of work is committed or the savepoint is ended.
Related tasks:
v “Creating a user-defined temporary table” on page 212
Related reference:
v “DROP statement” in SQL Reference, Volume 2
v “SET SCHEMA statement” in SQL Reference, Volume 2
All indexes, primary keys, foreign keys, and check constraints referencing the table
are dropped. All views and triggers that reference the table are made inoperative.
All packages depending on any object dropped or marked inoperative will be
invalidated.
Procedure:
1. Expand the object tree until you see the Tables folder.
2. Right-click on the materialized query or staging table you want to drop, and select
Drop from the pop-up menu.
3. Check the Confirmation box, and click OK.
To drop a materialized query or staging table using the command line, enter:
DROP TABLE <table_name>
The following SQL statement drops the materialized query table XT:
DROP TABLE XT
A materialized query table might be explicitly dropped with the DROP TABLE
statement, or it might be dropped implicitly if any of the underlying tables are
dropped.
A staging table might be explicitly dropped with the DROP TABLE statement, or it
might be dropped implicitly when its associated materialized query table is
dropped.
Related concepts:
v “Statement dependencies when changing objects” on page 366
Related reference:
v “DROP statement” in SQL Reference, Volume 2
Packages and cached dynamic SQL and XQuery statements can be dependent on
many types of objects.
A package that is in an invalid state is implicitly rebound on its next use. Such a
package can also be explicitly rebound. If a package was marked invalid because a
trigger was dropped, the rebound package no longer invokes the trigger.
In some cases, it is not possible to rebind the package. For example, if a table has
been dropped and not re-created, the package cannot be rebound. In this case, you
need to either re-create the object or change the application so it does not use the
dropped object.
In many other cases, for example if one of the constraints was dropped, it is
possible to rebind the package.
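For example, to explicitly rebind a package from the command line processor (a
sketch; the package name is illustrative):
REBIND PACKAGE myschema.mypkg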
Related concepts:
v “Package recreation using the BIND command and an existing bind file” in
Developing Embedded SQL Applications
v “Rebinding existing packages with the REBIND command” in Developing
Embedded SQL Applications
Related reference:
v “SYSCAT.PACKAGEAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.PACKAGEDEP catalog view” in SQL Reference, Volume 1
v “SYSCAT.PACKAGES catalog view” in SQL Reference, Volume 1
v “BIND command” in Command Reference
v “REBIND command” in Command Reference
v “DROP statement” in SQL Reference, Volume 2
Note: Some administration tasks are invoked from folders on the object tree, while
others are invoked from individual object icons. For example, the task of
creating a database is invoked from the Databases folder, while configuring
a database is invoked on the database itself.
Right-click the object. A pop-up menu of all of the available administration actions
for that object opens. Click the task in the pop-up menu. A window or notebook
opens to guide you through the steps required to complete the action for the
selected object.
For more information, see the Administration Guide , the Command Reference , and
the SQL Reference.
Related concepts:
v “Control Center overview” on page 376
When you shut down the DB2 administration tools, all connections are
dropped and the windows for all open centers close.
Related tasks:
v “Setting startup and default options for the DB2 administration tools” on page
436
v “Setting the server administration tools startup property” on page 434
These windows contain the following information about the DB2 administration
tools:
v Product identifier: identifies the product in the format pppvvrrm, where ppp is
the product, vv is the version, rr is the release, and m is the modification level.
v Level identifier, Level, Build level, and PTF: identifies the level of service
applied to the DB2 administration tools. This information changes as FixPaks
and other service items are applied.
v Level of the Java code base: only on the About DB2 Administration Tools
Environment window.
v Operating system: only on the About System window.
To copy the information in the window to the clipboard, click Copy to Clipboard.
You can then paste the information into a file, e-mail, or other application.
Related concepts:
v “Control Center overview” on page 376
To find a topic when you are not sure where to start, use one or more of the
following methods:
v Select a topic from the DB2 Information Center.
v Type the relevant keywords in the Search field of the DB2 Information Center,
and click GO .
v Select a topic from the table of contents in the left pane of the help window.
v Look up a term in the glossary by clicking the Glossary link in the left pane
of the help window.
v Look up a keyword in the online index by clicking the Index link in the left
pane of the help window.
Help prefixed with the version icon indicates that it applies to a specific
version of the product.
To print a help topic, click File–>Print, or right-click anywhere in the topic
text and click Print.
In the DB2 database help, graphics are used to indicate when information applies
to a subset of situations:
v Information that pertains only to partitioned database environments is
prefixed with the icon shown at the beginning of this sentence.
v Information that pertains only to single-partition database environments is
prefixed with the icon shown at the beginning of this sentence.
Related tasks:
v “Setting up access to DB2 contextual help and documentation” on page 435
Related reference:
v “DB2 Help menu” on page 375
Environment-specific information
Information marked with this icon pertains only to single-partition
database environments.
Related concepts:
v “Control Center overview” on page 376
DB2 toolbar
Control Center
Opens the Control Center to enable you to display all of your systems,
databases, and database objects and perform administration tasks on them.
Replication Center
Opens the Replication Center to enable you to design and set up your
replication environment.
Command Editor
Opens the Command Editor to enable you to work with database
commands, their results, and access plans for queries.
Task Center
Opens the Task Center to enable you to create, schedule, and execute tasks.
Health Center
Opens the Health Center to enable you to work with alerts generated
while using the DB2 database manager.
Journal
Opens the Journal to enable you to schedule jobs that are to run
unattended and view notification log entries.
License Center
Opens the License Center to enable you to display license status and usage
information for the DB2 products installed on your system and use the
License Center to configure your system for license monitoring.
Configuration Assistant
Opens the Configuration Assistant to enable you to configure your
workstation to access your DB2 subsystem.
Tools Settings
Opens the Tools Settings notebook to enable you to customize settings and
properties for the administration tools and for replication tasks.
Legend
Help
Displays information about how to use help for this product.
Related tasks:
v “Changing the fonts for menus and text” on page 437
Related reference:
v “DB2 Help menu” on page 375
v “DB2 secondary toolbar” on page 373
v “DB2 Tools menu” on page 374
Use the toolbar below the contents pane to tailor the view of objects and
information in the contents pane to suit your needs.
Sort
Opens the Sort window so that you can select the order in which objects
are displayed in the contents pane. You can sort on any column, or on
multiple columns, in the contents pane (ascending or descending).
Filter
Opens the Filter window so that you can filter the objects that appear in
the contents pane.
Customize columns
Opens the Customize Columns window so that you can select the order of
the informational columns in the contents pane and reorder, include, or
exclude them.
Find
Opens the Find window so that you can search for a string in the columns
of the contents pane.
Select all
Selects all of the objects in the contents pane.
Deselect all
Deselects all selected objects in the contents pane.
Expand all
Expands all of the objects in the contents pane.
Collapse all
Related tasks:
v “Changing the fonts for menus and text” on page 437
Related reference:
v “DB2 Help menu” on page 375
v “DB2 toolbar” on page 371
v “DB2 Tools menu” on page 374
From this menu, you can select the following menu items. Depending on which
tool you are using, some of these menu items might not be displayed.
Wizards
Opens the Wizards window so that you have quick access to the more
common DB2 wizards.
Control Center
Opens the Control Center to enable you to manage systems, DB2 database
instances, DB2 Universal Database for OS/390 and z/OS subsystems,
databases, and database objects such as tables and views. In the Control
Center, you can display all of your systems, databases, and database
objects and perform administration tasks on them.
Replication Center
Opens the Replication Center to enable you to administer relational data
between DB2 servers or databases.
Satellite Administration Center
Opens the Satellite Administration Center so that you can set up and
administer both satellites, and the information that is maintained in the
satellite control tables at a central DB2 control server.
Command Editor
Opens the Command Editor to enable you to execute DB2 CLP commands
and query statements, z/OS or OS/390 operating system commands, or
command scripts. This editor also lets you view a graphical representation
of the access plan for explained SQL and XQuery statements.
Task Center
Opens the Task Center to enable you to create, schedule, and run tasks
such as DB2 or operating system command scripts, MVS™ shell scripts, and
JCL scripts.
Health Center
Opens the Health Center to enable you to monitor instances using the
Health Center. This center also alerts you to potential problems and
Related tasks:
v “Changing the fonts for menus and text” on page 437
Related reference:
v “DB2 Help menu” on page 375
v “DB2 secondary toolbar” on page 373
v “DB2 toolbar” on page 371
Related concepts:
v “Control Center overview” on page 376
Related reference:
v “DB2 secondary toolbar” on page 373
v “DB2 toolbar” on page 371
v “DB2 Tools menu” on page 374
Control Center
This section describes how to use the Control Center, including how it can be
extended.
The Control Center supports the native XML data type for many of its functions.
This allows database administrators to work with XML documents stored in XML
columns alongside relational data.
Note: As you work with the Control Center, you might encounter information
prefixed with the icon. This means that the associated information
applies only if you are working in a partitioned database environment.
The following are some of the key tasks that you can perform with the Control
Center:
v Add DB2 database systems, federated systems, DB2 UDB for z/OS and OS/390
systems, IMSplexes, instances, databases, and database objects to the object
tree.
v Manage database objects. You can create, alter, and drop databases, table spaces,
tables, views, indexes, triggers, and schemas. You can also manage users.
v Manage data. You can load, import, export, and reorganize data. You can also
gather statistics.
v Perform preventive maintenance by backing up and restoring databases or table
spaces.
v Configure and tune instances and databases.
v Manage database connections, such as DB2 Connect servers and subsystems.
v Manage IMS systems.
v Manage DB2 UDB for z/OS and OS/390 subsystems.
In many cases, advisors, launchpads, and wizards are available to help you
perform these tasks quickly and easily.
You can select or change your view by choosing Tools from the menu bar and
selecting Customize the Control Center. You can then use your Control Center
view to work with the various folders and the objects that they contain (the objects
within a folder are called folder objects).
The Control Center has six action areas that you can use to define, manage, and
work with DB2 objects.
Menu bar
Use the menu bar to work with folders and folder objects in the Control
Center, open other DB2 centers and tools, access advisors, wizards and
launchpads, and display online help.
Control Center toolbar
Use the toolbar icons below the menu bar to access other DB2 centers and
tools and display online help. Note that the icons on this toolbar reflect the
set of administration tools installed and might be different than those
shown in the graphic above.
Object tree
Use the object tree to display and work with folders and folder objects.
Selecting an item displays related objects, actions, and information in the
contents pane and the object details pane. Right-clicking an item displays a
pop-up menu listing all the actions that you can perform on that item.
Contents pane
Use the contents pane to display and work with folder objects. The
contents pane displays those objects that make up the contents of the
folder that is selected in the object tree. Selecting an object displays its
associated actions and information in the object details pane.
Contents pane toolbar
Use the toolbar below the contents pane to tailor the view of objects and
information in the contents pane to suit your needs. These functions are
also available from Edit and View in the menu bar.
Object details pane
Use the object details pane to display information on and work with the
folder or folder object that you have selected in the object tree or contents
pane. If the object details pane is not displayed, select View from the menu
bar and select Show Object Details pane.
The object details pane is only available for Windows, Linux, and UNIX
operating systems, and only when database objects are selected from the
object tree.
Related concepts:
v “Command Editor overview” in Online DB2 Information Center
v “Configuration Assistant overview” in Online DB2 Information Center
v “Guidelines for Control Center plugin developers” on page 395
v “Introducing the plug-in architecture for the Control Center” on page 395
v “Journal overview” on page 418
v “Task Center overview” on page 416
v “Visual Explain overview” on page 451
v “Writing plugins as Control Center extensions” on page 397
Related tasks:
v “Displaying objects in the Control Center” on page 392
v “Expanding and collapsing the Control Center object tree” on page 389
v “Getting help in the Control Center” on page 385
v “Managing database partitions from the Control Center” on page 282
v “Obtaining Control Center diagnostic information” on page 393
Related reference:
v “Control Center Legend” on page 380
Objects:
System
A computer system defined to the DB2 administration tools.
Instance
A database manager environment that is an image of the actual database
manager environment. You can have several instances of a database
manager on the same system.
Database
A DB2 relational database. A relational database presents data as a
collection of tables.
Table
A named data object consisting of a specific number of columns and some
unordered rows.
View
A logical table that consists of data that is generated by a query.
Schema
A collection of database objects such as tables, views, indexes, and triggers.
It provides a logical classification of database objects.
Index
A set of pointers that are logically ordered by the values of a key. Indexes
provide quick access to data and can enforce uniqueness on the rows in
the table.
Trigger
An object in a database that is invoked indirectly by the database manager
when a particular SQL statement is run.
Table Space
An abstraction of a collection of containers into which database objects are
stored.
Buffer Pool
An area of storage in which all buffers of a program are kept.
User-Defined Function
A function that is defined to the database management system and can be
referenced in SQL queries.
Package
A control structure produced during program preparation that is used to
execute SQL statements.
Stored Procedures
A block of procedural constructs and embedded SQL statements that is
stored in a database and can be called by name.
DB User
A user that has specific database privileges.
DB Group
A group of users that has specific database privileges.
Partition
In a partitioned database environment, one part of the database: a database
partition, a table space partition, or a portion of a table.
Database Partition
In a partitioned database environment, a part of the database that consists
of its own user data, indexes, configuration files, and transaction logs.
Table Partition
A table in a database partition group consisting of multiple partitions:
some of its rows are stored in one partition, and other rows are stored in
other partitions.
A set of a particular object type is represented by a folder that has the icon for that
type displayed on top of it. For example, a set of systems is represented by a folder
overlaid with the system icon.
Related concepts:
v “Control Center overview” on page 376
To open a new Control Center, click the icon on the toolbar, or right-click a
system, subsystem, instance, database folder, or another object, and click Open
new Control Center in the pop-up menu. The new Control Center opens in a
separate window. If this action is performed in another center, the last Control
Center opened is brought to the front.
The object tree in the new Control Center will start with the object you selected to
open the new Control Center.
With a second Control Center, you can work with two or more objects that are not
easily displayed in a single object tree or contents pane. This feature is especially
useful when you want to look at the contents of two folders at the same time.
Related concepts:
v “Control Center overview” on page 376
To create databases in the Control Center, you need either SYSADM or SYSCTRL
authority. For information on the authorities needed to create other database
objects, such as tables and table spaces, see the authorities and privileges
information for creating the specific object.
Expand the object tree to display a folder for the type of object that you want to
create. Right-click the folder. A pop-up menu of all of the available actions for the
object opens. Click the Create or Create from Import menu item, if it is available.
A window or notebook opens to guide you through the process for creating the
object.
Related concepts:
v “About databases” in Administration Guide: Planning
v “Control Center overview” on page 376
Note:
v For remote systems, it is recommended that you use the remote hostname as
the system name. This will ensure that the system names are unique in the
Control Center object tree.
v When you change a system name displayed in the Control Center using
the Change System window, the system name of the nodes under the
same system are also updated.
v When you change a system name using the CLP, you will have to update
each child node individually.
Prerequisites:
Procedure:
To change a system name using either the Control Center or the Configuration
Assistant:
1. Open the Change System window using one of the following methods:
v From the Control Center, expand the object tree until you find the system that you
want to change. Right-click the All Systems folder and select Change from the
pop-up menu. The Change System window opens.
v From the Configuration Assistant Advanced view, click the Systems tab. Select the
system that you want to change and click Selected–>Change System. The Change
System window opens.
Note: You cannot open the Change System window from the Selected menu unless
you are in the Advanced view. To switch to the Advanced view, select
View–>Advanced View.
2. Change the system name and other fields as required. If you are changing protocol
information, you may require the assistance of your network or database administrator.
For example, suppose that you have the following entries in the node directory
and in the admin node directory:
DB2 LIST ADMIN NODE DIRECTORY SHOW DETAIL
Node Directory
Number of entries in the directory = 1
Node 1 entry:
To change the system name from HONCHO to PLATO, you would issue the
following commands to recatalog the above nodes with a new system name:
DB2 UNCATALOG NODE HONCHO
DB2 CATALOG ADMIN TCPIP NODE HONCHO REMOTE HONCHO SYSTEM PLATO OSTYPE WIN
DB2 UNCATALOG NODE NODE1
DB2 CATALOG TCPIP NODE NODE1 REMOTE HONCHO SERVER 78787
REMOTE_INSTANCE db2inst1 SYSTEM PLATO OSTYPE WIN
On restarting the Control Center, the system name is now displayed as PLATO.
The system will still have a single instance (db2inst1) located under it in the object
tree.
Related concepts:
v “About systems” in Administration Guide: Planning
Related tasks:
v “Cataloging database systems” on page 177
Related reference:
v “CATALOG TCPIP/TCPIP4/TCPIP6 NODE command” in Command Reference
v “LIST NODE DIRECTORY command” in Command Reference
v “UNCATALOG NODE command” in Command Reference
Use the toolbar icon or the Help menu to get help or additional information.
These are the types of help and information that you can get:
Opens the DB2 Information Center to enable you to search for help on
tasks, commands, and other information on DB2 and IMS.
Help menu
Displays menu items for displaying the master index, general information
about the Control Center, and keyboard help. This menu also provides
links to:
v Tutorials available with DB2
v How to use the help
v The DB2 Information Center
v Product information.
Related concepts:
v “Features of the DB2 Information Center” in Online DB2 Information Center
v “Control Center overview” on page 376
Related tasks:
v “Keyboard shortcuts and accelerators (all centers)” in Online DB2 Information
Center
You can open the following advisors, wizards, and launchpads from the Wizards
window accessed from the Control Center Tools–>Wizards menu:
v Add Partitions launchpad
v Backup wizard
v Create Database wizard. See also the Create Your Own Database wizard,
accessed from the Control Center or from First Steps.
v Create Table Space wizard
v Create Table wizard
v Design advisor
v Load wizard
v Configuration advisor
v Restore wizard
v Configure Database Logging wizard
v Set Up Activity Monitor wizard
v Set Up High Availability Disaster Recovery (HADR) Databases wizard
The following wizards and launchpads are available from other parts of the DB2
product.
For IMS:
v Query Database wizard
v Query Transaction wizard
v Update Database wizard
v Update Data Group wizard
v Update Transaction wizard
Related concepts:
v “Control Center overview” on page 376
Wizard overviews
This section contains two examples of wizard overviews, accessed from the first
page of the wizard.
Prerequisites:
Before you create a backup plan: If you change a database configuration file to
enable rollforward recovery (using either LOGRETAIN or USEREXIT), you must
take an offline backup of the database before it is usable.
Procedure:
1. From the Control Center, expand the object tree until you find the object that
you want to back up. Right-click the object and select Backup from the pop-up
menu. The Backup wizard opens.
Note: You can select one or more table space objects to back up. To back up a
database partition, open the Backup wizard from the database object. To
back up a table space partition, open the Backup wizard from the table
space object.
2. Complete each of the applicable wizard pages. Click the wizard overview link
on the first page for more information. The Finish push button is available
when you complete enough information for the wizard to back up the objects
in your database.
Related concepts:
v “Backup overview” in Data Recovery and High Availability Guide and Reference
Related reference:
v “BACKUP DATABASE command” in Command Reference
Prerequisites:
v To restore a database or a database partition, you must have SYSADM,
SYSCTRL, or SYSMAINT authority.
v To restore to a new database, you must have SYSADM or SYSCTRL authority.
v To restore a table space or table space partition, you must have SYSADM,
SYSCTRL, or SYSMAINT authority.
Before you can restore a database you must have an exclusive connection; that is,
no applications can be running against the database when the task is started. Once
it starts, it prevents other applications from accessing the database until the restore
is completed.
Procedure:
1. From the Control Center, expand the object tree until you find the object that
you want to restore. Right-click the object and select Restore from the pop-up
menu. The Restore wizard opens.
Note: To restore a database partition, open the Restore wizard from the
database object. To restore a table space partition, open the Restore
wizard from the table space object.
2. Complete each of the applicable wizard pages. Click the wizard overview link
on the first page for more information. The Finish push button is available
when you complete enough information for the wizard to restore the objects in
your database.
Related concepts:
v “Restore overview” in Data Recovery and High Availability Guide and Reference
Related reference:
v “RESTORE DATABASE command” in Command Reference
For example, look at the All Systems or the All Databases folder displayed on the
object tree. If you click the plus sign (+) next to the All Systems folder, icons
representing your local workstation and any remote systems connected to your
local system are displayed. If you click the plus sign (+) next to a particular system
icon, the Instances folder and any instances residing on that system are displayed.
Similarly, if you click the plus sign (+) beside the All Databases folder, you will
see the list of cataloged databases.
To collapse the object tree to the All Systems folder, click the minus sign (-) next to
the All Systems folder. All of the objects under the All Systems folder are no
longer displayed.
Related concepts:
v “Control Center overview” on page 376
To use the Control Center to manage DB2 UDB for z/OS subsystems, you must
first add the subsystems to the object tree.
To add the subsystem to the object tree, configure a connection to the subsystem to
enable you to access the objects in that subsystem. You will have to know the host
system and subsystem names, the communication protocol, and the
communication protocol parameters to use for connecting to the subsystem.
If you have installed the Configuration Assistant on the workstation where you are
running the Control Center, you can use the Configuration Assistant to configure
your workstation to access your DB2 subsystem. Otherwise, you must use the
command line processor to add the subsystem.
Related concepts:
v “Control Center overview” on page 376
To set the DB2 database parameter FEDERATED to YES, right-click the DB2
instance that you want to use as your federated database instance and select
Configure Parameters. The DBM Configuration window appears. In the list of
Environment parameters, change the FEDERATED parameter to YES and click OK.
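Alternatively, the parameter can be set from the command line (a sketch; the
instance must be restarted for the change to take effect):
db2 update dbm cfg using federated yes
db2stop
db2start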
To install WebSphere Federation Server, follow the steps that come with the
WebSphere Federation Server software.
To add the federated system objects required for the data sources that you want to
access, select the database that you want to use as your federated database and
right-click the Federated Database Objects folder. Click Create Federated Objects
to launch a wizard that guides you through the steps to create all of the necessary
federated objects and adds the federated objects to the tree.
Related tasks:
v “Expanding and collapsing the Control Center object tree” on page 389
If you install a new computer system or create a new instance and you want to use
the Control Center to perform tasks on it, you must add it to the object tree.
If you remove a database, or uncatalog it outside of the Control Center, and you
want to use the Control Center to perform tasks on it, you must add it to the
object tree.
To add systems, instances, or databases to the object tree, you need SYSADM or
SYSCTRL authority.
Expand the object tree to display the folder for the type of object (system, instance,
or database) that you want to add. Right-click the folder. A pop-up menu of all of
the available actions for the object opens. Click Add. The Add window opens.
When adding DB2 systems, instances, or databases in the Add window, you can
access a list of existing remote systems, instances, or databases by clicking the
Discover push button. This is not an option for adding IMS systems.
To update the view of the objects displayed in the contents pane, use one of the
following methods:
v Click View –> Refresh.
v Right-click a folder or object and click Refresh in the pop-up menu.
The contents pane displays the contents of the object selected on the object tree.
Related concepts:
v “Control Center overview” on page 376
To remove an object from a custom folder, select it and click Alter in the pop-up
menu. A system or database removed from a custom folder can still be found
under the All Systems or All Databases folders, and any other custom folders
where it has been placed.
Note: The All Systems and All Databases folders can themselves be completely
hidden by using the Control Center View window.
Related concepts:
v “Control Center overview” on page 376
To display objects in the object tree and contents pane, expand the object tree by
clicking on the plus signs (+) next to objects. As you expand the object tree down
from a particular object, the objects that reside in, or are contained in, that object
are displayed underneath. At the lowest level of the tree, folders of objects (such as
tables) that do not contain other objects are displayed.
Click an object (folder or icon) in the object tree. The objects that reside in, or are
contained in, the selected object are displayed in the contents pane. Systems,
subsystems, instances, and databases are displayed in both the object tree and the
contents pane. Objects that do not contain other objects are displayed only in the
contents pane. For example, when you click a Tables folder, all of the tables in the
database are displayed in the contents pane.
Related concepts:
v “Control Center overview” on page 376
Use the Overview by categories rollup menu on the contents pane toolbar and the
View menu to perform functions on the table data in the contents pane.
Tasks:
v Naming or saving the contents of the details view
v Filtering the list of displayed columns or table data
v Sorting the list of displayed table data
v Customizing the list of displayed columns
Related concepts:
v “Control Center overview” on page 376
On AIX and Linux platforms, use the db2cc -t command. The output is displayed
on the console. You can redirect the output to a file.
For Windows platforms only, there is also a db2cctrc command for getting a
Control Center trace. Use it as follows:
db2cctrc file1 [cc-options]
v "file1" is the file where Control Center trace output is written. If no path name is
specified, this file is created in the tools directory under the sqllib directory.
v "[cc-options]" (optional): see the documentation for the db2cc command for a list
of available options.
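For example, the following invocation writes the trace to a file named cctrace.trc
in the tools directory (the file name is a placeholder):
db2cctrc cctrace.trc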
Note that standard out and standard error information are written to the console.
Attention:
For performance reasons, only use these commands when directed to do so by DB2
Customer Service or by a technical support representative.
Related concepts:
v “Basic trace diagnostics” in Troubleshooting Guide
v “Diagnostic tools (Linux and UNIX)” in Troubleshooting Guide
v “Interpreting diagnostic log file entries” in Troubleshooting Guide
Related tasks:
v “Setting the diagnostic log file error capture level” in Troubleshooting Guide
Use the Find icon in the contents pane toolbar, or click Edit–>Find, to find objects in the
contents pane. Click a folder in the object tree to display the objects with which
you want to work.
In the Find string field, type the character string that you want to find. The first
object in the contents pane that meets the find search criteria is selected. If you
want to find the next object meeting the find criteria, click Edit–>Find.
Select the Case sensitive check box when searching for case-sensitive strings.
Related concepts:
v “Control Center overview” on page 376
The Filter window opens when you click View–>Filter or when you click the
filter icon on the contents pane toolbar. The filter action filters the objects after they are
retrieved from the database.
In the details view, the pre-filter and filter effects are cumulative. The same object
is selected in the object tree so that its pre-filtered children appear in the filtered
details view. That is, the number of objects that are both pre-filtered and filtered
is less than or the same as the number that are only pre-filtered.
To change the default for filtering when the number of rows exceeds the default
value, see Setting startup and default options for the DB2 administration tools.
Related concepts:
v “Control Center overview” on page 376
Related tasks:
v “Setting startup and default options for the DB2 administration tools” on page
436
The plug-in architecture lets you add items to the pop-up menu of a given object
in the Control Center, add objects to the Control Center tree, and add new buttons
to the tool bar. A set of Java interfaces, which you must implement, is shipped
along with the tools. These interfaces are used to communicate to the Control
Center what additional actions to include.
The plug-in extensions (db2plug.zip) are loaded when the Control Center tools
start. This might increase the startup time of the tools, depending on the size of
the ZIP file. However, for most users the plug-in ZIP file will be small and the
impact should be minimal.
Related concepts:
v “Compiling and running the example plugins” on page 396
v “Guidelines for Control Center plugin developers” on page 395
v “Writing plugins as Control Center extensions” on page 397
Related concepts:
v “Compiling and running the example plugins” on page 396
v “Writing plugins as Control Center extensions” on page 397
Related reference:
v “db2cc - Start control center command” in Command Reference
Note: The plugin sample programs might contain updates that are not yet
reflected here. When there are differences from what is shown here,
consider the example code and Java documentation to be the most
current information.
To run the example plugins, you must ZIP the extension class files according to the
rules of a Java archive file. The ZIP file (db2plug.zip) must be in the classpath. On
Windows operating systems, put db2plug.zip in the DRIVE:\sqllib\tools directory
where DRIVE: represents the drive on which DB2 is installed. On UNIX platforms,
put db2plug.zip in the /u/db2inst1/sqllib/tools directory, where /u/db2inst1
represents the directory in which DB2 is installed.
Note: The db2cc command sets the classpath to point to db2plug.zip in the tools
directory.
To compile any of these example Java files, the following must be included in your
classpath:
v On Windows platforms use:
– DRIVE:\sqllib\java\Common.jar
– DRIVE:\sqllib\tools\db2navplug.jar
where DRIVE represents the drive on which DB2 is installed.
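For example, you might compile one of the sample files as follows (a sketch;
Example1.java is a placeholder file name, and C: is assumed as the install drive):
javac -classpath "C:\sqllib\java\Common.jar;C:\sqllib\tools\db2navplug.jar" Example1.java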
Create the db2plug.zip file to include all the classes generated from compiling the
example Java file. The file should not be compressed. For example, issue the
following:
zip -r0 db2plug.zip *.class
This command places all the class files into the db2plug.zip file and preserves the
relative path information.
Related concepts:
v “Guidelines for Control Center plugin developers” on page 395
v “Writing plugins as Control Center extensions” on page 397
Related reference:
v “db2cc - Start control center command” in Command Reference
Related tasks:
v “Adding a menu item only to an object with a particular name” on page 402
v “Adding an example object under the folder” on page 405
v “Adding the alter action” on page 410
v “Adding the create action” on page 407
v “Adding the folder to hold multiple objects in the tree” on page 403
v “Adding the remove action with multiple selection support” on page 409
v “Creating a basic menu action” on page 399
v “Creating a basic menu action separator” on page 401
v “Creating a plugin that adds a toolbar button” on page 398
v “Creating sub menus” on page 401
v “Positioning the menu item” on page 400
v “Setting attributes for a plugin tree object” on page 406
For this example, a toolbar button is added, so getObjects should return a null
array, as follows:
import com.ibm.db2.tools.cc.navigator.*;
import java.awt.event.*;
import javax.swing.*;
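The class body itself is omitted from this excerpt. A minimal sketch of what it
might look like follows; the CCExtension entry point and the CCToolBarAction
interface and its methods are assumptions based on the shipped sample programs
and Javadoc:
public class Example1 implements CCExtension {
// No new or modified tree objects are contributed, so getObjects returns null.
public CCObject[] getObjects () { return null; }
// Contribute a single toolbar button.
public CCToolBarAction[] getToolbarActions () {
return new CCToolBarAction[] { new Example1Action() };
}
}
class Example1Action implements CCToolBarAction {
public String getHoverHelpText () { return "Example1 Action"; }
// Reuse the refresh icon, as the later examples in this section do.
public Icon getIcon () {
return CommonImageRepository.getScaledIcon(CommonImageRepository.WC_NV_REFRESH);
}
public void actionPerformed (ActionEvent e) {
System.out.println("Example1 toolbar button actionPerformed");
}
}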
Related concepts:
v “Compiling and running the example plugins” on page 396
In this slightly more advanced topic, new commands will be added to the popup
menu of the Database object.
The second step is to create a CCObject for the Database object in the tree, as
follows:
class CCDatabase implements CCObject {
Because no features other than the ability to add menu items to Control Center
built-in objects are used (for example, the Database object in this example),
most functions will return null or true. To specify that this object represents the
DB2 database object, its type is specified as UDB_DATABASE, a constant in
CCObject. The class is named CCDatabase in this example; however, class names
should be as unique as possible, since there might be other vendors' plugins in the
same ZIP file as your plugin. Java packages should be used to help ensure unique
class names.
You can create multiple CCObject subclasses whose type is UDB_DATABASE, but
if the values returned from their isEditable or isConfigurable methods conflict, the
objects that return false override those that return true.
There are two methods to implement: getMenuText and actionPerformed. The text
displayed in the menu is obtained using getMenuText. When a user clicks your
menu item, the event that is triggered results in a call to actionPerformed.
The following example class displays a menu item called "Example2a Action"
when a single database object is selected. When the user clicks this menu item, the
message "Example2a menu item actionPerformed" is written to the console.
class Example2AAction implements CCMenuAction {
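// Sketch of the omitted body, based on the description above: getMenuText
// supplies the text shown in the menu, and actionPerformed writes the
// message to the console when the menu item is clicked.
public String getMenuText () { return "Example2a Action"; }
public void actionPerformed (ActionEvent e) {
System.out.println("Example2a menu item actionPerformed");
}
}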
Finally, attach this menu item to your DB2 database CCObject by adding the
following to your CCObject.
public CCMenuAction[] getMenuActions () {
return new CCMenuAction[] { new Example2AAction() };
}
Related concepts:
v “Compiling and running the example plugins” on page 396
Related tasks:
v “Adding a menu item only to an object with a particular name” on page 402
v “Creating a basic menu action separator” on page 401
v “Creating sub menus” on page 401
v “Positioning the menu item” on page 400
When creating the basic menu item, the position of the menu item within the
menu is not specified. The default behavior when adding plugin menu items to a
menu is to add them on the end, but before any Refresh and Filter menu items.
You can override this behavior to specify any position number from zero up to the
number of items in the menu, not counting the Refresh and Filter menu items.
Change your CCMenuAction subclass to implement Positionable and then
implement the getPosition method, as follows:
class Example2BAction implements CCMenuAction, Positionable {
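// Sketch of the omitted body: getPosition returns the desired position, from
// zero up to the number of items in the menu; 0 (the first position) is an
// assumed example value.
public String getMenuText () { return "Example2b Action"; }
public void actionPerformed (ActionEvent e) {
System.out.println("Example2b menu item actionPerformed");
}
public int getPosition () { return 0; }
}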
Related tasks:
v “Adding a menu item only to an object with a particular name” on page 402
v “Creating a basic menu action” on page 399
v “Creating a basic menu action separator” on page 401
v “Creating sub menus” on page 401
Related tasks:
v “Adding a menu item only to an object with a particular name” on page 402
v “Creating a basic menu action” on page 399
v “Creating sub menus” on page 401
v “Positioning the menu item” on page 400
Related tasks:
v “Adding a menu item only to an object with a particular name” on page 402
v “Creating a basic menu action” on page 399
v “Creating a basic menu action separator” on page 401
v “Positioning the menu item” on page 400
Currently, any database you display in the Control Center will show the plugin
menu items you’ve written. You can restrict these menu items to a database of a
particular name by returning that name in the getName method of CCDatabase.
This must be a fully qualified name. In this case, since it refers to a database, the
system, instance and database names must be included in what is returned in the
getName method. These names are separated by " - ". Here is an example for a
system named MYSYSTEM, an instance named DB2, and a database named
SAMPLE.
class CCDatabase implements CCObject {
...
public String getName () { return "MYSYSTEM - DB2 - SAMPLE"; }
...
}
Related tasks:
v “Creating a basic menu action” on page 399
v “Creating a basic menu action separator” on page 401
v “Creating sub menus” on page 401
Creating a plug-in that adds plug-in objects under Database in the tree: The
following procedure outlines how to create a plug-in that adds plug-in objects
under Database in the tree:
1. Adding the folder to hold multiple objects in the tree
2. Adding an example object under the folder
3. Setting attributes for a plug-in tree object
4. Adding the create action
5. Adding the remove action
6. Adding the alter action
case OPEN_FOLDER:
return CommonImageRepository.getScaledIcon(CommonImageRepository.NV_OPEN_FOLDER);
default:
return CommonImageRepository.getScaledIcon(CommonImageRepository.NV_CLOSED_FOLDER);
}
}
Notice that getType now makes use of a class called CCTypeFactory. The purpose
of CCTypeFactory is to prevent two objects from using the same type number, so
that each plug-in object type remains unique within the Control Center.
The getIcon method takes an iconState parameter that lets you know whether you
are an open or closed folder. You can then make your icon correspond to your
state, as above.
In order to show the folder in the details view when the database is selected and
not just in the tree, getData needs to return a single column whose value is the
plugin object itself. The getData method assigns the this reference to the first
element of the data array. This allows both the icon and the name to appear in the
same column of the details view. The Control Center, when it sees that you are
returning a CCTableObject subclass, knows that it can call getIcon and getName on
your Example3Folder.
public CCDatabase() {
childVector = new Vector();
childVector.addElement(new Example3Folder());
}
Related concepts:
v “Compiling and running the example plugins” on page 396
Related tasks:
v “Adding an example object under the folder” on page 405
v “Adding the alter action” on page 410
v “Adding the create action” on page 407
The first step is to create a CCObject implementation for the child object as
follows:
class Example3Child implements CCTableObject {
private String parentName = null;
public String getName () { return null; }
public boolean isEditable () { return false; }
public boolean isConfigurable () { return false; }
public void getData (Object[] data) { }
public CCColumn[] getColumns () { return null; }
public Icon getIcon (int iconState) { return null; }
public CCMenuAction[] getMenuActions () { return null; }
public void setParentName(String name)
{
parentName = name;
}
public int getType () { return CCTypeFactory.getTypeNumber(this.getClass().getName()); }
}
public Example3Folder() {
childVector = new Vector();
}
...
public CCTableObject[] getChildren () {
CCTableObject[] children = new CCTableObject[childVector.size()];
childVector.copyInto(children);
return children;
}
public void setParentName(String name)
{
parentName = name;
}
...
}
For simplicity, in this example getChildren returns an array of the children, which
are stored in the vector called childVector.
A real plugin should reconstruct the children each time getChildren is called. This
refreshes the list to include any child objects that were created or changed outside
the Control Center since the list was last displayed. The children should be stored
in, and read from, persistent storage so that they are not lost.
Related concepts:
v “Compiling and running the example plugins” on page 396
Related tasks:
v “Adding the alter action” on page 410
v “Adding the create action” on page 407
v “Adding the folder to hold multiple objects in the tree” on page 403
v “Adding the remove action with multiple selection support” on page 409
v “Setting attributes for a plugin tree object” on page 406
If you expand the tree to your plugin folder and select it, you will see that there
are no columns in the details pane. This is because the Example3Child
implementation of getColumns is returning null. To change this, first create some
CCColumn implementations. We will create two columns, because a future example
will demonstrate how to change the value of one of these columns at run time, and
every object should have one column that never changes. We will call the
unchanging column “Name” and the changing column “State”.
class NameColumn implements CCColumn {
public String getName () { return "Name"; }
public Class getColumnClass () { return CCTableObject.class; }
}
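A matching implementation for the changing column would follow the same
pattern. This sketch assumes that java.lang.String is also an accepted column class
for plain text values, since the child objects shown later hold their state as strings:
class StateColumn implements CCColumn {
public String getName () { return "State"; }
public Class getColumnClass () { return String.class; }
}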
The class types supported include the class equivalents of the Java primitives (such
as java.lang.Integer), the java.util.Date class, and the CCTableObject class.
You must also change the parent to include the same columns.
class Example3Folder implements CCTableObject {
...
public CCColumn[] getColumns () {
return new CCColumn[] { new NameColumn(),
new StateColumn() };
}
...
}
In this case, the first column, which is of class CCTableObject, will have a value
of this. This allows the Control Center to render both the text returned by getName
and the icon returned by getIcon. So the next step is to implement these. We will
just use the same refresh icon used in Example 1 for the tool bar button.
class Example3Child implements CCTableObject {
...
public String getName () {
return name;
}
public Icon getIcon (int iconState) {
return CommonImageRepository.getScaledIcon(CommonImageRepository.WC_NV_REFRESH);
}
...
}
To see the results of your work so far, you can create an example child object that
you will remove in the next exercise. Add an instance of Example3Child to the
Example3Folder when the childVector is constructed.
public class Example3Folder implements CCTreeObject {
...
public Example3Folder() {
childVector = new Vector();
childVector.addElement(new Example3Child("Plugin1", "State1"));
}
...
}
Related concepts:
v “Compiling and running the example plugins” on page 396
Related tasks:
v “Adding an example object under the folder” on page 405
v “Adding the alter action” on page 410
v “Adding the create action” on page 407
v “Adding the folder to hold multiple objects in the tree” on page 403
Next, add a menu item to the folder to allow the user to trigger a call to your new
addChild method.
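The addChild method itself is not shown in this excerpt. Based on the setState
example later in this section, a sketch might look like the following; the
OBJECT_ADDED event constant is an assumption, mirroring the OBJECT_ALTERED
constant shown later:
public void addChild (Example3Child child) {
childVector.addElement(child);
setChanged();
notifyObservers(new CCObjectCollectionEvent(this,
CCObjectCollectionEvent.OBJECT_ADDED, child));
}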
class CreateAction implements CCMenuAction {
private int pluginNumber = 0;
public String getMenuText () { return "Create"; }
The ActionEvent will always contain a Vector of all of the objects on which the
action was invoked. Since this action will only be invoked on an Example3Folder,
and there can be only one folder, only the first object in the Vector is cast, and
addChild is called on it.
The last step is to add the menu action to your folder. You can now remove
the sample object that was added earlier.
public class Example3Folder extends Observable implements CCTreeObject {
private CCMenuAction[] menuActions =
new CCMenuAction[] { new CreateAction() };
...
public Example3Folder() {
childVector = new Vector();
}
...
public CCMenuAction[] getMenuActions () {
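// Sketch of the omitted body: hand the Control Center the actions defined above.
return menuActions;
}
}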
Related concepts:
v “Compiling and running the example plugins” on page 396
Related tasks:
v “Adding an example object under the folder” on page 405
v “Adding the alter action” on page 410
v “Adding the folder to hold multiple objects in the tree” on page 403
v “Adding the remove action with multiple selection support” on page 409
v “Setting attributes for a plugin tree object” on page 406
Now that your users can create as many instances of your plugin as they want,
you might want to give them the ability to delete as well. First, add a method to
Example3Folder to remove the child and notify the Control Center.
public class Example3Folder extends Observable implements CCTreeObject {
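// Sketch, not part of the original excerpt: removeChild mirrors the setState
// example below; the OBJECT_REMOVED event constant is an assumption,
// alongside the OBJECT_ALTERED constant shown later.
public void removeChild (Example3Child child) {
childVector.removeElement(child);
setChanged();
notifyObservers(new CCObjectCollectionEvent(this,
CCObjectCollectionEvent.OBJECT_REMOVED, child));
}
...
}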
The next step is to add a menu action to the Example3Child. We will make this
CCMenuAction implement MultiSelectable so that your users can remove multiple
objects at the same time. Since the source of this action will be a Vector of
Example3Child objects rather than an Example3Folder, the Example3Folder should
be passed in to the menu action some other way, such as in the constructor.
class RemoveAction implements CCMenuAction, MultiSelectable {
private Example3Folder folder;
Related concepts:
v “Compiling and running the example plugins” on page 396
Related tasks:
v “Adding an example object under the folder” on page 405
v “Adding the alter action” on page 410
v “Adding the create action” on page 407
v “Adding the folder to hold multiple objects in the tree” on page 403
v “Setting attributes for a plugin tree object” on page 406
The final type of event the Control Center listens to with respect to plugins is the
OBJECT_ALTERED event. We created a “State” column in a previous example so
that this feature could be demonstrated in this example. We will increment the
state value when the Alter action is invoked.
The first step is to write a method to change the state, but this time it will be on
the Example3Child rather than the folder. In this case, both the first and third
arguments are the Example3Child. Remember to extend Observable.
class Example3Child extends Observable implements CCTableObject {
...
public void setState(String state) {
this.state = state;
setChanged();
notifyObservers(new CCObjectCollectionEvent(this,
CCObjectCollectionEvent.OBJECT_ALTERED, this));
}
...
}
Next, create a menu action for Alter and add it to the CCMenuAction array in
Example3Child. The AlterAction class also implements the CCDefaultMenuAction
interface, which identifies it as the default menu action for the object.
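A sketch of such a menu action follows; the counter-based state value and the
cast of the ActionEvent source (a Vector, as described earlier) are assumptions for
illustration:
class AlterAction implements CCMenuAction, CCDefaultMenuAction {
private int counter = 1;
public String getMenuText () { return "Alter"; }
public void actionPerformed (ActionEvent e) {
Example3Child child = (Example3Child) ((Vector) e.getSource()).firstElement();
child.setState("State" + (++counter));
}
}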
Related concepts:
v “Compiling and running the example plugins” on page 396
Related tasks:
v “Adding an example object under the folder” on page 405
v “Adding the create action” on page 407
v “Adding the folder to hold multiple objects in the tree” on page 403
v “Adding the remove action with multiple selection support” on page 409
v “Setting attributes for a plugin tree object” on page 406
License Center
This section describes how to use the License Center, including how to add,
change and remove licenses. It also describes how to view license information.
Tasks:
v Adding licenses
v Changing licenses and policies
v Viewing licensing information
v Viewing license policy information
v Viewing authorized user infraction information
v Viewing and resetting compliance details
v Removing licenses
The License Center interface has two elements that help you add and manage
licenses.
Menu bar
Use the menu bar to work with objects in the License Center, open other
administration centers and tools, and access online help.
Toolbar
Use the toolbar icons below the menu items to access other DB2
administration tools. For more information, see DB2 toolbar.
Related concepts:
v “Control Center overview” on page 376
v “License management” on page 64
Related tasks:
v “Adding licenses” on page 412
v “Changing licenses and policies” on page 413
v “Viewing license policy information” on page 414
v “Viewing licensing information” on page 413
v “Removing licenses” on page 416
v “Viewing and resetting compliance details” on page 415
v “Viewing authorized user infraction information” on page 415
Adding licenses
From the License Center, use the Add License window to add new licenses.
Procedure:
Related concepts:
v “License Center overview” on page 411
Prerequisites:
To modify license policies in the License Center, you need SYSADM authority on
the DB2 instance that contains the installed license.
Note: If you do not have SYSADM authority, the Select Instance window
automatically displays, from which you can select an instance where you
have SYSADM authority.
Procedure:
Related concepts:
v “License Center overview” on page 411
Procedure:
Related concepts:
v “License Center overview” on page 411
Related tasks:
v “Viewing and resetting compliance details” on page 415
v “Viewing license policy information” on page 414
v “Viewing authorized user infraction information” on page 415
Procedure:
Related concepts:
v “License Center overview” on page 411
Related tasks:
v “Viewing licensing information” on page 413
v “Viewing and resetting compliance details” on page 415
v “Viewing authorized user infraction information” on page 415
The Statistics page is available only when a User-based policy is enabled and the
selected system and selected product have been used for DB2 activities. Statistics
are generated during connects and disconnects after the database manager has
been restarted.
Procedure:
1. Open the License Center: Click the License Center icon on the Control Center toolbar.
2. On the Statistics page, if enabled, view information about licensed and
non-licensed users.
Related concepts:
v “License Center overview” on page 411
Related tasks:
v “Viewing and resetting compliance details” on page 415
v “Viewing license policy information” on page 414
v “Viewing licensing information” on page 413
Procedure:
Note: The reset option resets all license usage information for all products and
instances installed within the selected install path. You cannot reset
usage information selectively for a product or for a particular feature.
Related concepts:
v “License Center overview” on page 411
Related tasks:
v “Viewing authorized user infraction information” on page 415
v “Viewing license policy information” on page 414
v “Viewing licensing information” on page 413
To remove DB2 licenses, you need SYSADM authority on the DB2 instance on
which the license is installed.
Note: If you do not have SYSADM authority, the Select Instance window
automatically displays, from which you can select an instance where you
have SYSADM authority.
To remove a license:
Related concepts:
v “License Center overview” on page 411
You can also create grouping tasks to define actions based on the results of
multiple tasks. Grouping tasks are unlike other tasks in the Task Center, because
no command script is directly associated with a grouping task. Instead, a grouping
task contains tasks that are already defined to the Task Center. The advantage of a
grouping task is that you can define task actions that depend on the results of
more than one task.
Task schedules are managed by a scheduler. The tasks are run on one or more
systems, known as run systems. You define the conditions for a task to fail or
succeed with a success code set. Based on the success or failure of a task, or group
of tasks, you can run additional tasks, disable scheduled tasks, and perform related
actions. You can also define notifications to send after a task completes. You can
send an e-mail notification to people in your contacts list, or you can send a
notification to the Journal.
From the Task Center, you can also open other centers and tools to help you with
other administrative tasks.
Prerequisites:
To use the Task Center, you must select a scheduler system that will work with the
Task Center. The Task Center uses the system clock of the scheduler to determine
when to start tasks. To select a scheduler system, from the Task Center, select a
system in the Scheduler System field.
When you log on to the Task Center, you are logging on to the scheduler that you
select. You must log on every time you start the Task Center.
To grant or revoke privileges for a task, you must be the creator of the task. To
create, alter, or delete a task, you must have write authority for the task. To run a
task, you must have run authority for the task.
Tasks:
You can perform the following tasks from the Task Center:
v Create or edit tasks
v Run tasks immediately
v Manage contacts
v Manage task categories
v Manage saved schedules
v Manage success code sets
v Change the default notification message
The Task Center interface consists of three elements that help you to customize
your view of the list of tasks and to navigate the Task Center efficiently.
Menu bar
Use the menu bar to work with objects in the Task Center, open other
administration centers and tools, and access online help.
Contents pane
Use the contents pane to display and work with tasks. The contents
pane displays the tasks that are in the current view.
Contents pane toolbar
Use the toolbar below the contents pane to tailor the view of tasks in the
contents pane to suit your needs. You can also select these toolbar
functions in the Edit menu and the View menu.
Accessing custom controls with the keyboard
You can use the keyboard to access controls found on the graphical user
interface:
v For the ellipsis (...) push button, press Tab until the button is selected;
then press Enter.
v For the Date field, press Tab until the field is selected; then type the date
in the field.
Related concepts:
v “Control Center overview” on page 376
v “Journal overview” on page 418
Related tasks:
v “Changing the default notification message” on page 423
v “Creating or editing a task” on page 425
v “Managing contacts” on page 428
v “Managing saved schedules” on page 429
v “Managing success code sets” on page 430
v “Managing task categories” on page 431
v “Running tasks immediately” on page 421
Journal overview
Use the Journal notebook to view historical information about tasks, database
actions and operations, messages, and notifications. The Journal is the focal point
for viewing all available historical information generated within the Control
Center, as compared to the Show Result option from the Task Center, which shows
only the latest execution results of a task.
To sort the records shown in each of the notebook pages, click the column
headings.
Prerequisites:
To access the Journal, you must have access to the DB2 tools catalog database.
Task History page:
Use this page to view the task history records for each of the available scheduler
systems and to analyze the execution results. For example:
v You might want to examine the status of weekly backup tasks.
v You might want to get the execution results of a specific execution of a task,
such as a task that runs periodically to capture a snapshot of a database system.
The results for each execution of this task can be viewed in the Journal.
From the Refresh options field, select the amount of time between automatic page
refreshes. The default option is No automatic refresh.
To delete records, highlight the records that you want to delete, right-click and
select Delete from the pop-up menu.
Database History page:
Use this page to view historical records of database recovery for each of the
databases in the drop-down list. Click the drop-down arrow to select a system,
instance, and database. For partitioned environments, you must also select a
database partition.
Messages page:
Use this page to view the message records issued by the DB2 administration tools
on the local system. To delete the message records, highlight the records that you
want to delete, right-click and select Delete from the pop-up menu. Alternatively,
you can use the Selected menu to remove only the selected records or all records.
Notification Log page:
Use this page to view the notification log records for the selected instance. You can
customize the filtering options and criteria. The default is to display the last 50
records of all notification types. If you select either Read from specified record to
end of the file or Read records from specified range, and if the settings are set to
overwrite old records, the log record numbers are not reused. Therefore, selecting
Start record 1 and End record 100 does not guarantee seeing anything in the
notification log if the log has been looping. Note that the columns and column
headings change depending on your selection.
Related concepts:
v “Task Center overview” on page 416
v “Health Center overview” in System Monitor Guide and Reference
v “Control Center overview” on page 376
Related tasks:
v “Creating a database for the DB2 tools catalog” on page 424
v “Tools catalog database and DB2 administration server (DAS) scheduler setup
and configuration” on page 96
Click the Task Center icon on the Control Center toolbar to open the Task Center
and view the current settings.
Procedure:
You can use the Scheduler Settings page of the Tools Settings notebook to set the
default scheduling scheme. Note that if you set a default scheduling scheme, you
can still override it at the task level.
Related tasks:
v “Creating a database for the DB2 tools catalog” on page 424
v “Scheduling a task” on page 422
v “Setting the default scheduling scheme” on page 449
v “Tools catalog database and DB2 administration server (DAS) scheduler setup
and configuration” on page 96
Scheduler
The scheduler is a DB2 system that manages tasks. This component of the DB2
Administration Server (DAS) includes the tools catalog database, which contains
information that the Task Center uses. When you schedule a task, the Task Center
uses the system clock of the scheduler to track when the tasks on the scheduler
need to run.
The Task Center displays the list of cataloged systems or databases that have active
schedulers. You must select a scheduler system to work with the Task Center.
When you log on to the Task Center, you are logging on to the scheduler system
that you select. You must log on every time you start the Task Center.
Related concepts:
v “Task Center overview” on page 416
Related tasks:
v “Scheduling a task” on page 422
v “Enabling scheduling settings in the Task Center” on page 419
v “Setting the default scheduling scheme” on page 449
The Task Center evaluates the success of every statement of a DB2 script. If any
statement fails, the entire task fails.
If you do not specify a success code set, a return code of 0 is considered a success;
all others are failures. The following rules apply when you specify a success code
set:
v The success code set can only have one greater than (>) condition, where the
associated code must be greater than or equal to (>=) any less than (<) condition
that is specified. For example, if you specify (> 5) or (< 0), the error codes 0, 1, 2,
3, 4, and 5 mean the task failed. You cannot specify (> 5) or (< 6), as this
includes all numbers.
v The success code set can only have one less than (<) condition, where the
associated code must be less than or equal to (<=) any greater than (>) condition
that is specified.
Assume that you want to run a DB2 script. Also assume that this DB2 script
consists of more than one SQL or XQuery statement (that is, the script contains
multiple lines), and you know that each statement in the script returns an
SQLCODE. Because some SQL or XQuery statements can return non-zero
SQLCODEs that do not represent error states, you must determine the set of
non-error SQLCODEs that any of the SQL statements in the script can return. For
example, assume that the following return codes all indicate successful execution
of the SQL or XQuery statements in the script. That is, if any of the following
conditions is met, execution of the script continues:
RC > 0 OR RC = 0 OR RC = -1230 OR RC = -2000
You would define the success code set as shown in Table 20:
Table 20. Example of a success code set
Condition    SQLCODE
>            0
=            0
=            -1230
=            -2000
Related tasks:
v “Managing success code sets” on page 430
Prerequisites:
To run a task, you must have run authority for the task.
Procedure:
1. Open the Run Now window: Click the Task Center icon on the Control Center
toolbar to open the Task Center. In the Task Center, select one or more tasks
and click Selected–>Run Now. The Run Now window opens.
2. Click Use notifications to use the notifications for the task.
3. Click Use task actions to use the task actions for the task.
4. If you are running an OS script task:
a. Include the parameters specified on the Run properties page of the Task
Properties notebook.
b. Specify parameters to send.
Related concepts:
v “Task Center overview” on page 416
v “Tasks and required authorizations” on page 608
Related tasks:
v “Enabling scheduling settings in the Task Center” on page 419
v “Selecting users and groups for new tasks” on page 427
Scheduling a task
Whenever you create a task, you have the option of running it immediately or
scheduling it to run later. For the latter, the script is saved in the Task Center, and
all execution information is automatically saved in the Journal.
Procedure:
Use the Schedule page of various wizards and notebooks to indicate whether you
want to run a selected task immediately, or schedule it to run later:
v To run the task immediately, without creating a task in the Task Center or saving
the task history to the Journal, select Run now without saving task history.
v To create a task for generating the DDL script and saving it in the Task Center,
select Create this as a task in the Task Center. Then, specify the task
information and options:
– Specify the name of the system on which you want to run the task. This
system must be online at the time the task is scheduled to run.
– In the Scheduler system drop-down box, select the system where you want
to store the task and the schedule information.
This system will store the task and notify the run system when it is time to
run the task. The drop-down list contains any system that is cataloged and
has the scheduler enabled. The scheduler system must be online so that it can
notify the run system.
If the DB2 tools catalog is on a remote system, you will be asked for a user ID
and password in order to connect to the database.
– Optional: If you want to select a different scheduling scheme, click Advanced.
The Advanced Schedule Settings window opens where you can select
between server scheduling or centralized scheduling.
– To save the task in the Task Center, but not actually run the task, select Save
task only.
– To save the task in the Task Center and run the task now, select Save and run
task now.
– To save the task to the Task Center, and schedule a date and time to run the
task later, specify Schedule task execution. The Change button is enabled.
Click Change. A window opens where you can enter the date and time at
which to run the task.
The Details group box displays the schedule information you selected.
Related concepts:
v “Scheduler” on page 420
v “Task Center overview” on page 416
Related tasks:
v “Setting the default scheduling scheme” on page 449
v “Tools catalog database and DB2 administration server (DAS) scheduler setup
and configuration” on page 96
Procedure:
1. Open the Edit Message window: Click the Task Center icon on the Control
Center toolbar. The Task Center opens. In the Task Center, click Task–>Set
Default Notification Text. The Edit Message window opens.
2. Type the name or ID to identify the sender of the message. DB2 appends
@hostname to the specified name or ID. The sender is the name that the e-mail
message reports as the person who sent the message.
3. Type the subject line and text of the e-mail message. You can use the tokens in
Table 21, which the Task Center recognizes and replaces with actual values in
the e-mail message.
Table 21. Tokens for subject lines and e-mail messages
Token Description
&Categories The categories associated with the task.
&Completionstatus The completion status of the task. This value depends on the
success code set associated with the task.
&Description The description of the task.
&Duration The length of time that the run system took to complete the
task from start to finish.
&End The date and time when the task completed.
&Howinvoked The method used to invoke the task.
&Name The name of the task.
&Owner The name of the owner of the task.
&Returncode The final return code of the task.
&Runpartitions The partitions on which the task ran.
&Runsystem The name of the system on which the task ran.
&Schedulersystem The name of the system on which the task is scheduled.
&Start The date and time when the task began running.
Related concepts:
v “Task Center overview” on page 416
Procedure:
v Open the Create New Tools Catalog window: Click Create New from the
Enabling Scheduling Function group box of your current window, wizard, or
notebook. For instance, click Create New from the Schedule page of the Tools
Settings notebook.
v Select the instance where the tools catalog is to be created, and type the tools
catalog schema name.
v Specify whether to create a new database or use an existing one.
v Select whether to force all applications off the instance so that it can be
restarted, and whether to activate the tools catalog when it is created.
Troubleshooting tips:
If you receive a -567 return code when trying to create a new tools catalog, run
the db2admin setid command, and then stop and restart the DAS. This error
indicates that an invalid user ID or password has been submitted.
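For example, the recovery sequence might look like this (the user ID and
password are placeholders; supply credentials that are valid on your system):
db2admin setid myadmin mypassword
db2admin stop
db2admin start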
For more information on how to stop and start the DAS, see:
v dasdrop - Remove a DB2 Administration Server Command
v dasauto - Autostart DB2 Administration Server Command
Related concepts:
v “Tools catalog” in Administration Guide: Planning
Grouping tasks are unlike other tasks in the Task Center, because no command
script is directly associated with a grouping task. Instead, a grouping task contains
tasks that are already defined to the Task Center. The advantage of a grouping
task is that you can define task actions that depend on the results of more than one
task. For example, you can place three backup tasks in a grouping task, then run a
reorganization task only if all three backup tasks are successful. If any of the tasks
in the grouping task fails, the grouping task is considered a failure.
Prerequisites:
Before creating a task, ensure that you have specified a scheduler system. The Task
Center uses the system clock of the scheduler to determine when to start tasks.
When you log on to the Task Center, you are logging on to the scheduler that you
select. You must log on every time you start the Task Center.
To create or edit a task, you must have write authority for the task.
Procedure:
1. Open the New Task notebook: Click the Task Center icon on the Control Center
toolbar to open the Task Center. In the Task Center, select Task–>New, or right-click
anywhere in the task details view, and click New. The New Task notebook
opens.
2. Select the type of task to create:
v DB2 command script if the script contains DB2 commands
v OS command script if the script contains operating system commands
v MVS shell script if the script contains MVS commands to be run in a host
environment, such as z/OS
v Grouping task to place multiple tasks into the grouping task.
3. Optional: Select a task category. Categorizing tasks helps keep your list of
tasks organized.
4. Select the system on which the task will run.
5. Specify the DB2 instance where the script will run. If the task will run on
multiple DB2 partitions, select the partitions on which the task will run.
6. Refer to the appropriate path in this step for the type of task that you are
creating, based on your selection in the Type field:
1. Open the New Task notebook: Click the Task Center icon on the Control Center
toolbar to open the Task Center. In the Task Center, select Task–>New, or right-click anywhere
in the task details view, and click New. The New Task notebook opens.
2. On the Group page, select the tasks to include in the grouping task. Use the
arrow buttons to move them to the Selected tasks list. The selected tasks are
members of the group.
1. Open the Task Center: Click the Task Center icon on the Control Center toolbar.
The Task Center opens.
2. Click Task–>Show Progress. The Show Progress window opens, in which you
can view statistics and the status of completed tasks.
Related concepts:
v “Scheduler” on page 420
Related tasks:
v “Running tasks immediately” on page 421
Related concepts:
v “Control Center overview” on page 376
Related tasks:
v “Granting database authorities to new groups” on page 529
v “Granting database authorities to new users” on page 529
Managing contacts
Contacts are records of names and e-mail addresses that are stored in the Task
Center. You use and manage the list of contacts like an address book. You can also
create groups of contacts, which makes it easier to manage notification lists
because you only need to update the group definition once to change the
notification list for all tasks. When a notice is sent to the group, each member of
the group receives the notice. Contact groups can include other contact groups.
Other DB2 tools, such as the Health Center, can also use this contacts list. You can
select one or more contacts from the list to receive e-mail notifications about a task
after it completes.
Restrictions:
Procedure:
1. To specify a valid SMTP server for sending e-mail messages:
a. On the windows specified in the following steps, click the SMTP server push
button, and specify a valid SMTP server in the window that opens.
2. To add a contact:
Related concepts:
v “Task Center overview” on page 416
Prerequisites:
To create, alter, or delete a saved schedule, you must have write authority for the
saved schedule.
Procedure:
Related tasks:
v “Creating or editing a task” on page 425
All Task Center users can create, alter, or delete success code sets.
Procedure:
Related concepts:
v “Success code sets” on page 420
v “Task Center overview” on page 416
Procedure:
1. Open the Task Categories window: Click the Task Center icon on the Control Center
toolbar to open the Task Center. In the Task Center, click Task–>Task Categories. The
Task Categories window opens.
2. To add a task category:
a. Open the Add Task Category window: In the Task Categories window, click
Add. The Add Task Category window opens.
b. Type the name of the category, and optionally, a description of this category.
3. To change a task category:
a. In the Task Categories window, select a task category to change. Click
Change. The Change Task Category window opens.
b. Make your changes to the name or description of the task category.
4. To select a task category:
Related concepts:
v “Task Center overview” on page 416
Tools Settings
This section describes how to use the Tools Settings notebook to set up various
properties and options. It also describes how to set up the startup and default
options for the DB2 administration tools.
General page:
Use this page to specify whether the local DB2 instance should be automatically
started when the DB2 tools are started, whether to use a statement termination
character, and whether to set filtering when the maximum number of rows is
exceeded from a display sample contents request.
For more information, see Setting the server administration tools startup property.
Documentation page:
Use this page to specify whether hover help and infopop help features in the DB2
administration tools should display automatically, and also to specify the location
from which the contextual help is accessed at the instance level.
For more information, see Setting up access to DB2 contextual help and
documentation.
Fonts page:
Note: Some changes will not take effect until the Control Center is restarted. If you
have chosen a font color that will not show up on the background color on
your system, DB2 will temporarily override the font color that you have
chosen and select a font color that will show up. This system override will
not be saved as part of your user profile.
For more information, see Changing the fonts for menus and text.
OS/390 and z/OS page:
Use this page to set column headings and define the online and batch utility
execution options for OS/390 and z/OS objects. Defaults are provided for some of
the options. For more information, see "Estimating column sizes" in Setting DB2
UDB OS/390 and z/OS utility execution options.
For the Optimize grouping of objects for parallel utility execution option, see
Example 1 for online and Example 2 for batch. If this option is not selected, objects
are grouped according to the order in which they were selected, with the
maximum number of objects in each group. See Example 1 and Example 2.
For the Specify the Maximum number of objects to process in parallel for online
execution option, see Example 1. For the Maximum number of jobs to run in
parallel for batch execution and Maximum number of objects per batch job
options, see Example 2.
For more information, see Setting DB2 UDB OS/390 and z/OS utility execution
options.
Use this page to specify the type of notification you will receive when an alert is
generated in the Health Monitor. You can be notified through a pop-up message or
with the graphical beacon that displays on the lower-right portion of the status line
for each DB2 center, or using both methods of notification.
For more information, see Enabling or disabling notification using the Health
Center Status Beacon.
Scheduler Settings page:
Use this page to set the default scheduling scheme. Select Server Scheduling if
you want task scheduling to be handled by the scheduler that is local to the
database server, if the scheduler is enabled on that system. Select Centralized
Scheduling if you want the storage and scheduling of tasks to be handled by a
centralized system, in which case you need to select the centralized system from
the Centralized Scheduler list. To enable another scheduler, select a system and
click Create New to open a window in which you can create a database for the
DB2 Tools Catalog on a cataloged system. If the system you want is not cataloged,
you must catalog it first.
Command Editor page:
Use this page to specify how you will generate, edit, execute, and manipulate SQL
and XQuery statements, IMS commands, and DB2 commands and work with the
resulting output. These settings affect commands, SQL statements and XQuery
statements on DB2 databases, z/OS and OS/390 systems and subsystems, and
IMSplexes.
IMS page:
Use this page to set your preferences when working with IMS. You can set
preferences for using wizards, syntax support, results, and the length of your
command history.
Related concepts:
v “Features of the DB2 Information Center” in Online DB2 Information Center
On the General page, select the Automatically start local DB2 on tools startup
check box.
Related tasks:
v “Setting startup and default options for the DB2 administration tools” on page
436
v “Starting and stopping the DB2 administration server (DAS)” on page 94
v “Starting the server DB2 administration tools” on page 369
Note: If you specify a statement termination character, you cannot use the
backslash (\) character to continue statements in command scripts.
On the General page, select the Use statement termination character check box.
Optional: Type a character that will be used as the statement termination character
in the entry field. The default character is a semicolon (;).
Related concepts:
v “Command Editor overview” in Online DB2 Information Center
Related tasks:
v “Executing commands and SQL statements using the Command Editor” in
Online DB2 Information Center
v “Setting Command Editor options” on page 449
To open the Tools Settings notebook, click the Tools Settings icon on the DB2
toolbar. Click the Documentation tab.
To indicate whether hover help will be automatically displayed, select or clear the
Automatically display hover help check box. The default setting is for the hover
help to be automatically displayed.
In the Documentation location fields, specify where this instance accesses the
DB2 Information Center:
v To access the DB2 Information Center on the IBM Web site, use the default
values.
v To access the DB2 Information Center installed on an intranet server, or on your
own computer, specify the host name and port number of the server or
computer.
Important:
The documentation location values that you specify on this page update
the DB2_DOCHOST and DB2_DOCPORT DB2 profile registry variables
that control how requests for DB2 documentation are handled for this
instance only. If you want to change the settings for all instances on this
computer, or if you want to change them for a single user session,
follow the instructions in Setting the location for accessing the DB2
Information Center.
To have the Documentation location values take effect, including
resetting the default values, click Set and restart the center in which you
are working.
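For example, these registry variables can be displayed or set with the db2set
command; the host name and port number below are placeholders:
db2set DB2_DOCHOST=mydocserver.example.com
db2set DB2_DOCPORT=51000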
Related tasks:
v “Setting the location for accessing the DB2 Information Center” in Troubleshooting
Guide
v “Declaring, showing, changing, resetting, and deleting registry and environment
variables” on page 68
v “Troubleshooting problems with the DB2 Information Center running on local
computer” in Troubleshooting Guide
Related reference:
v “Miscellaneous variables” in Performance Guide
Click the Customize the Control Center push button to switch among the basic,
advanced, and custom views.
Related tasks:
v “Finding service level information about the DB2 administration tools
environment” on page 370
On the Fonts page, select the font size and color in which you want the menus and
text in the DB2 administration tools to appear.
Troubleshooting tips:
v Some changes will not take effect until the Control Center is restarted.
v If you have chosen a font color that will not show up on the background color
on your system, DB2 will temporarily override the font color that you have
chosen and select a font color that will show up. This system override will not
be saved as part of your user profile.
Related reference:
v “DB2 Help menu” on page 375
v “DB2 Tools menu” on page 374
On the OS/390 and z/OS page, select the Use system catalog column names as
column headings check box to match the column headings in the contents pane of
the Control Center, if applicable, to the column names defined in the system
catalogs of DB2 UDB for OS/390 or z/OS. For derived columns, that is, columns
whose values are not selected directly from the system catalog, this option will
not have any effect. If you do not select this option, all the column headings in the
contents pane will be displayed in translated form in the current language
selection.
Optional: Select the Edit options each time utility runs check box to have the
opportunity to modify the utility execution options each time a utility is executed.
The Online execution utility ID template field shows the current template for
identifying DB2 for OS/390 and z/OS utilities. You can type an identifier, keep the
displayed value, or open a Change Online Execution Utility ID Template window
to select from a list of symbolic variables by clicking the ellipsis button. The template can consist
of literals and symbolic names. Symbolic names start with an ampersand (&) and
end in a period (.).
The resolved length of the utility identifier can be no longer than 16 characters. If
you do not create your own identifier, the Control Center generates a default
utility ID from the date and timestamp. The default format is CCMMddhhmmss.
Optional: For online, select Continue online execution or batch job if error is
encountered if you want any of the parallel threads started by the Control Center
to start execution of a utility against an unprocessed object. This would occur if
executing a utility in any concurrently running thread, or in the same thread,
resulted in an error (a DSNUTILS return code of 8). If not selected, no more calls
will be made to DSNUTILS once an error is found in any thread.
For batch, select Continue online execution or batch job if error is encountered if
you want the next step of a job generated by the Build JCL or Create JCL function
to be executed if the step immediately before has returned an error executing an
utility (a return code of 8). Unlike with online execution, there is no
dependency between jobs (the next job with the same jobname would also start
regardless of an error in the previous job with the same jobname). If not selected,
the job generated by the Build JCL or Create JCL function will terminate when one
of the steps executing a utility has returned an error (a return code of 8 or higher).
Optional: For online, select Optimize grouping of objects for parallel utility
execution if you want the set of objects to be grouped into a number of parallel
threads that are constrained by the setting Maximum number of objects to
process in parallel for online execution. With this setting, you can minimize the
overall execution time, use fewer parallel threads to achieve the shortest overall
processing time, and optimize usage of system resources. See Example 1 below.
For batch, select Optimize grouping of objects for parallel utility execution if you
want the set of objects to be grouped into a number of parallel threads (jobs) that
are constrained by the setting Maximum number of jobs to run in parallel for
batch execution. With this setting, you can minimize the overall execution time,
use the fewest concurrent jobs to achieve the shortest overall processing time, and
optimize usage of system resources. The maximum number of steps (executions of
the utility) per job is limited by the setting Maximum number of objects per batch
job. See Example 2 below.
If this option is not selected, objects are grouped according to the order in which
they were selected, with the maximum number of objects in each group. See
Example 1 and Example 2 below.
Specify the Maximum number of jobs to run in parallel for batch execution. The
default is 10; the maximum value allowed is 99. This number is used as the
maximum when using the optimizer to group objects, and applies only to batch,
not to online execution. If 1 is specified, then objects are not processed in parallel,
but instead are processed sequentially. If optimization has not been selected, then
this value specifies exactly how many concurrent batch jobs there will be. See
Example 2 below.
Example 1: Online execution: How objects are grouped into parallel threads:
RUNSTATS is requested to be run against a set of index objects (IX1, IX2, IX3
PART 1, IX3 PART 2) where IX3 is a partitioned index and the Maximum number
of objects to process in parallel for online execution is set to 4.
The optimizer estimates that RUNSTATS on IX1 takes 10 times longer than on all
other objects.
When optimization is selected
If optimization is enabled, the optimizer would only come up with 2
threads:
Thread 1: RUNSTATS on IX1
Thread 2: RUNSTATS on IX2, IX3 PART 1, and IX3 PART 2
More threads would not result in a shorter overall execution, since Thread
1 will take longer than Thread 2.
When optimization is not selected
If optimization is disabled, 4 threads would be used in this example.
Example 2: Batch execution: How objects are assigned to jobs and job steps:
RUNSTATS is requested to be run against a set of index objects (IX1, IX2, IX3
PART 1, IX3 PART 2, IX4, IX5, IX6, IX7, IX8, IX9, IX10) where IX3 is a partitioned
index and the Maximum number of jobs to run in parallel for batch execution is
set to 2 and the Maximum number of objects per batch job is set to 3.
The optimizer estimates that RUNSTATS on IX1 takes 10 times longer than on all
other objects.
When optimization is selected
The following JCL would be created for the user:
//JOB1 JOB....
//STEP1 EXEC...
//..... RUNSTATS INDEX IX1
//JOB2 JOB....
//STEP1 EXEC...
//..... RUNSTATS INDEX IX2
//STEP2 EXEC...
//..... RUNSTATS INDEX IX3 PART 1
//STEP3 EXEC...
//..... RUNSTATS INDEX IX3 PART 2
//JOB2 JOB....
//STEP1 EXEC...
//..... RUNSTATS INDEX IX4
//STEP2 EXEC...
//..... RUNSTATS INDEX IX5
//STEP3 EXEC...
//..... RUNSTATS INDEX IX6
When optimization is not selected
//JOB1 JOB....
//STEP1 EXEC...
//..... RUNSTATS INDEX IX1
//STEP2 EXEC...
//..... RUNSTATS INDEX IX3 PART 1
//STEP3 EXEC...
//..... RUNSTATS INDEX IX4
//JOB2 JOB....
//STEP1 EXEC...
//..... RUNSTATS INDEX IX2
//STEP2 EXEC...
//..... RUNSTATS INDEX IX3 PART 2
//STEP3 EXEC...
//..... RUNSTATS INDEX IX5
//JOB1 JOB....
//STEP1 EXEC...
//..... RUNSTATS INDEX IX6
//STEP2 EXEC...
//..... RUNSTATS INDEX IX8
//STEP3 EXEC...
//..... RUNSTATS INDEX IX10
//JOB2 JOB....
//STEP1 EXEC...
//..... RUNSTATS INDEX IX7
//STEP2 EXEC...
//..... RUNSTATS INDEX IX9
Only the last set of jobs will be created with fewer than the maximum
number of objects allowed. The list of objects will be assigned sequentially,
alternating by jobs and steps, as shown in the example above.
Related tasks:
v “Adding DB2 UDB for z/OS subsystems to the object tree” on page 389
The DB2 UDB for z/OS health monitor triggers the evaluation of object
maintenance policies at scheduled times and intervals, as defined in the policy. The
object maintenance policies are created using the DB2 Control Center’s Create
Object Maintenance Policy wizard. During each policy evaluation, the criteria for
recommending maintenance are checked against the thresholds set in the object
maintenance policy to determine the need for object maintenance, that is, whether
COPY, REORG, RUNSTATS, STOSPACE, ALTER TABLESPACE, or ALTER INDEX
are required, and to identify restricted states, such as CHKP, on table space, index,
and storage group objects where applicable. When objects are identified to be in
alert state during policy evaluation, the policy health alert contacts are notified at
their e-mail addresses or pager numbers. The list of health alert contacts for each
DB2 subsystem is defined in and managed from the Control Center.
A snapshot of the evaluation schedule for the policies, which is used by the health
monitor to determine when to trigger policy evaluations, is initially taken by the
health monitor when it is started. This schedule snapshot is refreshed at the refresh
time specified when the health monitor was started, or when the health monitor
receives a refresh command. Any change to the evaluation schedule of a policy is
picked up by the health monitor when the schedule refresh occurs.
The health monitor is started and stopped from the console, using the MVS system
START and STOP commands, respectively.
Policy evaluations triggered by the DB2 health monitor are logged in the table
DSNACC.HM_EVAL_LOG. An entry is logged when a policy evaluation starts and
when a policy evaluation ends. Log entries are kept for 7 days, after which they
will be deleted from the table. The DB2 view DSNACC.HM_ALERT_PO_EV, which
was created on this table by the DSNTIJCC installation job, can be used to display
all policies whose last evaluation iteration was not successful.
Starting, stopping and refreshing the DB2 UDB for z/OS health
monitor
On the z/OS system, the DB2 UDB for z/OS health monitor is started as a task for
each DB2 subsystem to be monitored or on a dedicated member of a data sharing
group.
v To start a DB2 health monitor, issue the following START MVS system
command:
S membername,DB2SSN=ssid,JOBNAME=HMONssid,TRACE=trace,REFRESH=nn
Note: Before starting multiple DB2 health monitors with one START command
using DSNHMONA, the HMONPARM data set specified in the
DSNHMONA proc must be populated with the list of subsystems to be
monitored. The cataloged procedure and the data set are created by the
DSNTIJHM installation job.
v To refresh the policy evaluation schedule snapshot used by the DB2 health
monitor to determine when to trigger policy evaluations, issue the following
MODIFY MVS system command:
F HMONssid,APPL=REFRESH
where ssid is the DB2 subsystem ID of the monitored subsystem.
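v As noted earlier, the health monitor is stopped with the MVS STOP command.
A minimal sketch, assuming the job name HMONssid used by the START
command above:
P HMONssid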
Related concepts:
v “DB2 UDB for z/OS health monitor overview” on page 441
Related tasks:
v “Viewing health alert summaries” on page 446
v “Viewing health alert objects” on page 447
v “Viewing, submitting, and saving recommended actions” on page 443
The following syntax diagram shows the SQL CALL statement for invoking
DSNACCHR. Because the linkage convention for DSNACCHR is GENERAL WITH
NULLS, if you pass parameters in host variables, you need to include a null
indicator with every host variable. Null indicators for input host variables must be
initialized before you execute the CALL statement.
Syntax:
CALL DSNACCHR ( query-type, health-ind, policy-id, work-set, dataset-name,
member-name, save-opt, trace-flag, job-ID, jobname, jcl-proc-time,
last-statement, return-code, error-msg )
query-type
Specifies what you want to do with the actions recommended for objects identified
to be in alert state during policy evaluation. Possible values are:
v 0 - View recommended actions on alert objects as a JCL job
v 1 - Submit the JCL job that executes the recommended actions on alert objects
v 2 - Submit the JCL job that executes the recommended actions on alert objects,
and put the job on the hold queue
v 3 - Save recommended actions on alert objects as a JCL job in a library member
query-type is an input parameter of type INTEGER.
health-ind
Specifies the type of alert that DSNACCHR includes in the JCL job. Possible values
are:
v RS - Restricted State
v EX - Extents Exceeded
v RR - REORG Required
v CR - COPY Required
v RT - RUNSTATS Required
v SS - STOSPACE Required
policy-id
work-set
Specifies the work set of an object maintenance policy that identified the alert
objects that DSNACCHR includes in the JCL job. This work set must be identified
with the policy and type of alert specified in the parameters policy-id and health-ind.
work-set is an input parameter of type INTEGER.
dataset-name
Specifies a fully qualified partitioned data set (PDS) or partitioned data set
extended (PDSE) name. This value must be specified if query-type is 3. dataset-name
is an input parameter of type VARCHAR(44).
member-name
Specifies a member of the partitioned data set (PDS) or partitioned data set
extended (PDSE) specified in the dataset-name parameter where the object
maintenance JCL job will be saved. This value must be specified if query-type is 3.
member-name is an input parameter of type VARCHAR(8).
save-opt
Specifies how to save the object maintenance JCL job. This value must be specified
if query-type is 3. Possible values are:
v R - Replace
v A - Append
v NM - New member
save-opt is an input parameter of type VARCHAR(2).
trace-flag
job-ID
jobname
jcl-proc-time
last-statement
When DSNACCHR returns a severe error (return code 12), this field contains the
SQL statement that was executing when the error occurred. last-statement is an
output parameter of type VARCHAR(2500).
return-code
error-msg
When DSNACCHR returns a severe error (return code 12), this field contains error
messages, including the formatted SQLCA. error-msg is an output parameter of
type VARCHAR(1331).
DSNACCHR returns one result set when the query-type parameter is 0. The result
set contains the JCL job generated by DSNACCHR. The DSNACCHR result set
table is created by the DSNTIJCC installation job. Table 22 shows the format of the
result set.
Table 22. DSNACCHR result set format
Column name Data type Description
JCLSEQNO INTEGER Sequence number of the table row
(1,...,n)
JCLSTMT VARCHAR(80) Specifies a JCL statement
Related concepts:
v “DB2 UDB for z/OS health monitor overview” on page 441
The result of the function is a DB2 table with the following columns:
ip-addr
db2-ssid
health-ind
host-name
The fully qualified domain name of the DB2 server. This is a column of type
VARCHAR(255).
summary-stats
The state of the DB2 health monitor if health-ind is ’HM’. Possible values are:
v 0 - Health monitor is not started
v 1 - Health monitor is started
v -1 - Health monitor state is unknown
Otherwise, the total number of alert objects with the alert type specified in
health-ind. This is a column of type INTEGER.
The external program name for the function is HEALTH_OVERVIEW, and the
specific name is DSNACC.DSNACCHO. This function is created by the DSNTIJCC
installation job.
Example: Find the total number of alert objects requiring COPY for the DB2
subsystem ’ABCD’:
SELECT SUMMARYSTATS FROM TABLE (DSNACC.HEALTH_OVERVIEW()) AS T
WHERE DB2SSID = ’ABCD’
AND HEALTHIND = ’CR’;
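Because a health-ind value of ’HM’ reports the state of the health monitor itself,
a similar query can check whether the monitor is started; a sketch, reusing the
subsystem ’ABCD’ from the example above (1 means started, 0 not started, -1
unknown):
SELECT SUMMARYSTATS FROM TABLE (DSNACC.HEALTH_OVERVIEW()) AS T
WHERE DB2SSID = ’ABCD’
AND HEALTHIND = ’HM’;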
Related concepts:
v “DB2 UDB for z/OS health monitor overview” on page 441
Related tasks:
v “Viewing health alert objects” on page 447
v “Starting, stopping and refreshing the DB2 UDB for z/OS health monitor” on
page 442
v “Viewing, submitting, and saving recommended actions” on page 443
DB2 creates a number of views on these alert object repository tables. The views
and alert object repository tables are created by the DSNTIJCC installation job.
Table 23 lists the tables on which each view is defined and the view descriptions.
All view names and table names have the qualifier DSNACC.
Table 23. Views on health alert objects
View Name On Table View Description
HM_ALERT_TS_RS HM_MAINT_TS Displays all table spaces in restricted state
HM_ALERT_TS_EX HM_MAINT_TS Displays all table spaces whose extents have
exceeded a user-specified limit
HM_ALERT_TS_RR HM_MAINT_TS Displays all table spaces that require
REORG
HM_ALERT_TS_CR HM_MAINT_TS Displays all table spaces that require COPY
HM_ALERT_TS_RT HM_MAINT_TS Displays all table spaces that require
RUNSTATS
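Because these are ordinary DB2 views with the DSNACC qualifier, they can be
queried directly with SQL. For example, a sketch listing all table spaces that
currently require REORG:
SELECT * FROM DSNACC.HM_ALERT_TS_RR;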
Related concepts:
v “DB2 UDB for z/OS health monitor overview” on page 441
Related tasks:
v “Starting, stopping and refreshing the DB2 UDB for z/OS health monitor” on
page 442
v “Viewing health alert summaries” on page 446
v “Viewing, submitting, and saving recommended actions” on page 443
You can also specify that you do not want to receive notification using the Health
Center status beacon.
On the Health Center Status Beacon page, the check boxes are enabled by default.
Do the following:
v To enable notification through a pop-up message only, select the Notify through
pop-up message check box and deselect the Notify through status line check
box. When you select this method, a DB2 message window indicates that there
are outstanding alerts.
v To enable notification using a status line graphical health beacon only, select the
Notify through status line check box and deselect the Notify through pop-up
message check box. When you select this method, a text message indicating that
there are outstanding alerts and a graphical health beacon are displayed on the
status line in each center.
v To disable notification, deselect the Notify through pop-up message and Notify
through status line check boxes.
Related reference:
v “Reading the contents of the health indicator settings fields” in Online DB2
Information Center
On the Scheduler Settings page, select Server Scheduling if you want scheduling
to be handled by the scheduler that is local to the database server, if the scheduler
is enabled on that system. Select Centralized Scheduling if you want the storage
and scheduling of tasks to be handled by a centralized system, in which case you
need to select the centralized system from the Centralized Scheduler list.
If you select Centralized Scheduling, select the centralized system from the
Centralized Scheduler drop-down list. To enable another scheduler, select a system
and click Create New to open a window in which you can create a database for
the DB2 Tools Catalog on a cataloged system. If the system you want to use is not
cataloged, you must catalog it first.
Related concepts:
v “Scheduler” on page 420
v “Task Center overview” on page 416
Related tasks:
v “Enabling scheduling settings in the Task Center” on page 419
v “Tools catalog database and DB2 administration server (DAS) scheduler setup
and configuration” on page 96
On the Command Editor page, set the Execution and history fields:
v Select Automatically commit SQL statements to have any changes made by
SQL statement execution take effect immediately.
v Select Stop execution if errors occur to have processing stop when there are
errors.
v Select Limit the number of elements stored in command history to control the
amount of command and statement execution history that appears in the
Command Editor.
Related concepts:
v “Command Editor overview” in Online DB2 Information Center
Related tasks:
v “Executing commands and SQL statements using the Command Editor” in
Online DB2 Information Center
On the IMS page, check Enable IMS syntax support to assist you when entering
type-2 IMS commands in the Command Editor. When syntax support is enabled,
lists of keywords are automatically displayed as you enter a command.
Check Launch available wizards by default to have the command wizard open
initially from the Control Center. If you uncheck this option, command windows
are opened.
Select how many commands you want to keep in your IMS command results
history.
Related tasks:
v “Adding DB2 systems and IMSplexes, instances, and databases to the object
tree” on page 390
Visual Explain
This section describes how to use Visual Explain to tune your SQL and XQuery
statements.
The following illustration shows the interaction between the DB2 optimizer and
Visual Explain invoked from the Control Center. (Broken lines indicate actions that
are required for Visual Explain.)
To learn how to use Visual Explain, you can work through the scenarios in the
Visual Explain Tutorial.
Troubleshooting Tips:
v Retrieving the access plan when using LONGDATACOMPAT
v Visual Explain support for earlier and later releases
Related concepts:
v “Access plan” on page 452
v “Access plan graph” on page 453
Related tasks:
v “Dynamically explaining an SQL or an XQuery statement” on page 464
v “Viewing a graphical representation of an access plan” on page 473
v “Viewing explainable statements for a package” on page 474
v “Viewing the history of previously explained query statements” on page 476
Related reference:
v “Explain tables” on page 466
v “Viewing SQL or XQuery statement details and statistics” on page 469
Access plan
Certain data is necessary to resolve an explainable SQL or XQuery statement. An
access plan specifies an order of operations for accessing this data. An access plan
lets you view statistics for selected tables, indexes, or columns; properties for
operators; global information such as table space and function statistics; and
configuration parameters relevant to optimization. With Visual Explain, you can
view the access plan for an SQL or XQuery statement in graphical form.
Cost information associated with an access plan is the optimizer’s best estimate of
the resource usage for a query. The actual elapsed time for a query might vary
depending on factors outside the scope of DB2 (for example, the number of other
applications running at the same time). Actual elapsed time can be measured while
running the query, by using performance monitoring.
Related concepts:
v “Access plan graph” on page 453
v “Visual Explain overview” on page 451
Related tasks:
v “Dynamically explaining an SQL or an XQuery statement” on page 464
v “Viewing a graphical representation of an access plan” on page 473
v “Viewing explainable statements for a package” on page 474
v “Viewing the history of previously explained query statements” on page 476
Clustering
Over time, updates may cause rows on data pages to change location, lowering the
degree of clustering that exists between an index and the data pages. Reorganizing
a table with respect to a chosen index reclusters the data. A clustered index is most
useful for columns that have range predicates because it allows better sequential
access of data in the base table. This results in fewer page fetches, since like values
are on the same data page.
In general, only one of the indexes in a table can have a high degree of clustering.
Related reference:
v “Guidelines for creating indexes” on page 467
Container
A container is a physical storage location of the data. It is associated with a table
space, and can be a file, a directory, or a device.
Related concepts:
v “Table spaces” on page 148
Cost
Cost, in the context of Visual Explain, is the estimated total resource usage
necessary to execute the access plan for a statement (or the elements of a
statement). Cost is derived from a combination of CPU cost (in number of
instructions) and I/O (in numbers of seeks and page transfers).
The unit of cost is the timeron. A timeron does not directly equate to any actual
elapsed time, but gives a rough relative estimate of the resources (cost) required by
the database manager to execute a plan, which makes it useful for comparing two
plans for the same query.
The cost shown in each operator node of an access plan graph is the cumulative
cost, from the start of access plan execution up to and including the execution of
that particular operator. It does not reflect factors such as the workload on the
system or the cost of returning rows of data to the user.
Related concepts:
v “Timerons” in Administration Guide: Planning
Dynamic SQL or XQuery
When DB2 runs a dynamic SQL or XQuery statement, it creates an access plan that
is based on current catalog statistics and configuration parameters. This access plan
might change from one execution of the statement’s application program to the
next.
Related concepts:
v “Static SQL or XQuery” on page 463
Related tasks:
v “Using explain snapshots” in DB2 Visual Explain Tutorial
Related reference:
v “BIND command” in Command Reference
v “PRECOMPILE command” in Command Reference
Explainable statement
An explainable statement is an SQL or XQuery statement for which an explain
operation can be performed.
Related concepts:
v “Explained statement” on page 456
Explained statement
An explained statement is an SQL or XQuery statement for which an explain
operation has been performed. Explained statements are shown in the Explained
Statements History window.
Operand
An operand is an entity on which an operation is performed. For example, a table
or an index is an operand of various operators such as TBSCAN and IXSCAN.
Related concepts:
v “Operator” on page 457
Operator
An operator is either an action that must be performed on data, or the output from
a table or an index, when the access plan for an SQL or XQuery statement is
executed.
Related concepts:
v “Operand” on page 457
Optimizer
The optimizer is the component of the SQL compiler that chooses an access plan for
a data manipulation language (DML) SQL statement. It does this by modeling the
execution cost of many alternative access plans, and choosing the one with the
minimal estimated cost.
Related concepts:
v “Query optimization class” on page 460
Related reference:
v “BIND command” in Command Reference
v “PRECOMPILE command” in Command Reference
Predicate
A predicate is an element of a search condition that expresses or implies a
comparison operation. Predicates are included in clauses beginning with WHERE
or HAVING.
The following are predicates: NAME = ’SMITH’; DEPT = 895; and YEARS > 5.
Predicates fall into one of the following categories, ordered from most efficient to
least efficient:
1. Starting and stopping conditions bracket (narrow down) an index scan. (These
conditions are also called range-delimiting predicates.)
2. Index-page (also known as index sargable) predicates can be evaluated from an
index because the columns involved in the predicate are part of the index key.
3. Data-page (also known as data sargable) predicates cannot be evaluated from
an index, but can be evaluated while rows remain in the buffer.
4. Residual predicates typically require I/O beyond the simple accessing of a base
table, and must be applied after data is copied out of the buffer page. They
include predicates that contain subqueries, or those that read LONG
VARCHAR or LOB data stored in files separate from the table.
When designing predicates, you should aim for the highest selectivity possible so
that the fewest rows are returned.
The following types of predicates are the most effective and the most commonly
used:
v A simple equality join predicate is required for a merge join. It is of the form
table1.column = table2.column, and allows columns in two different tables to be
equated so that the tables can be joined.
v A local predicate is applied to one table only.
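As an illustration, the following query (assumed here to run against the STAFF
and ORG tables of the SAMPLE database) contains both kinds of predicates: the
first is a simple equality join predicate, the second a local predicate applied to
STAFF only:
SELECT s.name, o.deptname
FROM staff s, org o
WHERE s.dept = o.deptnumb
AND s.years > 5;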
Related concepts:
v “Selectivity of predicates” on page 461
Other query optimization classes, to be used only under special circumstances, are:
0 Minimal optimization. Use only when little or no optimization is required
(that is, for very simple queries on well-indexed tables).
9 Maximum optimization. Uses substantial memory and processing
resources. Use only if class 5 is insufficient (that is, for very complex and
long-running queries that do not perform well at class 5).
In general, use a higher optimization class for static queries and for queries that
you anticipate will take a long time to execute, and a lower optimization class for
simple queries that are submitted dynamically or that are run only a few times.
To set the query optimization for dynamic SQL or XQuery statements, enter the
following command in the command line processor:
SET CURRENT QUERY OPTIMIZATION = n;
To set the query optimization for static SQL or XQuery statements, use the
QUERYOPT option on the BIND or PREP commands.
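For example, a sketch that binds a package at optimization class 7 from the
command line processor (myapp.bnd is an assumed bind file name):
db2 bind myapp.bnd QUERYOPT 7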
Related concepts:
v “Optimizer” on page 458
Related reference:
v “BIND command” in Command Reference
v “PRECOMPILE command” in Command Reference
Cursor blocking
Cursor blocking is a technique that reduces overhead by having the database
manager retrieve a block of rows in a single operation. These rows are stored in a
cache while they are processed. The cache is allocated when an application issues
an OPEN CURSOR request, and is deallocated when the cursor is closed.
Use the BLOCKING option on the PREP or BIND commands along with the
following parameters to specify the type of cursor blocking:
UNAMBIG
Only unambiguous cursors are blocked (the default).
ALL Both ambiguous and unambiguous cursors are blocked.
NO Cursors are not blocked.
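For example, a sketch that precompiles an embedded SQL application with all
cursors blocked (myapp.sqc is an assumed source file name):
db2 prep myapp.sqc BLOCKING ALL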
Related tasks:
v “Specifying row blocking to reduce overhead” in Performance Guide
Related reference:
v “BIND command” in Command Reference
v “PRECOMPILE command” in Command Reference
Selectivity of predicates
Selectivity refers to the probability that any row will satisfy a predicate (that is, be
true).
For example, a selectivity of 0.01 (1%) for a predicate operating on a table with
1,000,000 rows means that the predicate returns an estimated 10,000 rows (1% of
1,000,000), and discards an estimated 990,000 rows.
A highly selective predicate (one with a selectivity of 0.10 or less) is desirable. Such
predicates return fewer rows for future operators to work on, thereby requiring
less CPU and I/O to satisfy the query.
Example
Suppose that you have a table of 1,000,000 rows, and that the original query
contains an ’ORDER BY’ clause requiring an additional sorting step. With a
predicate that has a selectivity of 0.01, the sort would have to be done on an
estimated 10,000 rows. However, with a less selective predicate of 0.50, the sort
would have to be done on an estimated 500,000 rows, thus requiring more CPU
and I/O time.
Related concepts:
v “Predicate” on page 459
Sequences
A sequence is a database object that allows the automatic generation of values.
Sequences are ideally suited to the task of generating unique key values.
Applications can use sequences to avoid possible concurrency and performance
problems resulting from the generation of a unique counter outside the database.
The PREVVAL expression returns the most recently generated value for the
specified sequence for a previous statement within the current application process.
The NEXTVAL expression returns the next value for the specified sequence. A new
sequence number is generated when a NEXTVAL expression specifies the name of
the sequence. However, if there are multiple instances of a NEXTVAL expression
specifying the same sequence name within a query, the counter for the sequence is
incremented only once for each row of the result, and all instances of NEXTVAL
return the same value for a row of the result.
The same sequence number can be used as a unique key value in two separate
tables by referencing the sequence number with a NEXTVAL expression for the
first row, and a PREVVAL expression for any additional rows.
For example:
INSERT INTO order (orderno, custno)
VALUES (NEXTVAL FOR order_seq, 123456);
INSERT INTO line_item (orderno, partno, quantity)
VALUES (PREVVAL FOR order_seq, 987654, 1);
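The order_seq sequence referenced above could be created with a statement along
the following lines (the attributes shown are illustrative, not required):
CREATE SEQUENCE order_seq
START WITH 1
INCREMENT BY 1
NO CYCLE
CACHE 20;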
Related tasks:
v “Creating a sequence” on page 234
Star join
A set of joins are considered to be a star join when a fact table (large central table)
is joined to two or more dimension tables (smaller tables containing descriptions of
the column values in the fact table).
A Semijoin is a special form of join in which the result of the join is only the Row
Identifier (RID) of the inner table, instead of the joining of the inner and outer
table columns.
Star joins use Semijoins to supply Row Identifiers to an Index ANDing operator.
The Index ANDing operator accumulates the filtering effect of the various joins.
The output from the Index ANDing operator is fed into an Index ORing operator,
which orders the Row Identifiers, and eliminates any duplicate rows that may have
resulted.
Performance suggestions:
v Create indexes on the fact table for each of the dimension table joins.
v Ensure the sort heap threshold is high enough to allow allocating the Index
ANDing operator’s bit filter. For star joins, this could require as much as 12MB,
or 3000 4K pages. For intra-partition parallelism, the bit filter is allocated from
the same shared memory segment as the shared sort heap, and it is bounded by
the sortheap database configuration parameter and the sheapthres_shr database
configuration parameter.
v Apply filtering predicates against the dimension tables. If statistics are not
current, update them using the runstats command.
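As a sketch of the first suggestion, assuming a hypothetical fact table SALES that
joins to its dimension tables on the custkey and prodkey columns, one index per
dimension join column:
CREATE INDEX sales_custkey_ix ON sales (custkey);
CREATE INDEX sales_prodkey_ix ON sales (prodkey);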
Related reference:
v “IXAND operator” in Performance Guide
v “RUNSTATS command” in Command Reference
v “Using RUNSTATS” on page 468
Static SQL or XQuery
When DB2 compiles static SQL or XQuery statements, it creates an access plan for
each one that is
based on the catalog statistics and configuration parameters at the time that the
statements were precompiled and bound.
These access plans are always used when the application is run; they do not
change until the package is bound again.
Related tasks:
v “Executing XQuery expressions in embedded SQL applications” in Developing
Embedded SQL Applications
Visual Explain
Note: As of Version 6, Visual Explain can no longer be invoked from the command
line. It can still, however, be invoked from various database objects in the
Control Center. For this version, the documentation continues to use the
name Visual Explain.
Visual Explain lets you view the access plan for explained SQL or XQuery
statements as a graph. You can use the information available from the graph to
tune your queries for better performance.
Related concepts:
v “Visual Explain overview” on page 451
An explained statement record is added to the Explained Statements History for all
successful operations.
Prerequisites:
To dynamically explain query statements, you will need at least the INSERT
privilege on the explain tables.
Procedure:
Note:
– If you select Explain Query from the Control Center, the Query text
field will be empty.
Related concepts:
v “Visual Explain overview” on page 451
Related tasks:
v “Viewing explainable statements for a package” on page 474
v “Viewing the history of previously explained query statements” on page 476
v “Viewing a graphical representation of an access plan” on page 473
Related reference:
v “Viewing SQL or XQuery statement details and statistics” on page 469
Procedure:
Execute the statement by clicking the icon. The results are displayed on the
Results page. To view the generated access plan, click the Access Plan tab.
Related concepts:
v “Command Editor overview” in Online DB2 Information Center
v “Explainable statement” on page 456
v “Access plan” on page 452
v “Visual Explain overview” on page 451
Related tasks:
v “Executing commands and SQL statements using the Command Editor” in
Online DB2 Information Center
Explain tables
To create explain snapshots, you must ensure that the following explain tables exist
for your user ID:
v EXPLAIN_INSTANCE
v EXPLAIN_STATEMENT
To check if they exist, use the DB2 list tables command. If you would like to use
Visual Explain, and these tables do not exist, you must create them using the
following instructions:
1. If DB2 has not already been started, issue the db2start command.
2. From the DB2 CLP prompt, connect to the database that you want to use. To
connect to the SAMPLE database, issue the connect to sample command.
3. Create the explain tables, using the sample command file that is provided in
the EXPLAIN.DDL file. This file is located in the sqllib\misc directory. To run the
command file, go to this directory and issue the db2 -tf EXPLAIN.DDL
command. This command file creates explain tables that are prefixed with the
connected user ID. This user ID must have CREATETAB privilege on the
database, or SYSADM or DBADM authority.
Note: Before you run db2 -tvf EXPLAIN.DDL, ensure that explain tables for the
schema name do not exist. If you have migrated from an earlier version, you
need to run db2exmig to migrate the explain tables.
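Taken together, the steps above amount to a short command line processor
session; a sketch for the SAMPLE database on Windows (the directory is
sqllib/misc on UNIX):
db2start
db2 connect to sample
cd sqllib\misc
db2 -tf EXPLAIN.DDL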
Related tasks:
v “Viewing the history of previously explained query statements” on page 476
v “Dynamically explaining an SQL or an XQuery statement” on page 464
v “Viewing explainable statements for a package” on page 474
Related concepts:
v “Space requirements for indexes” in Administration Guide: Planning
v “Visual Explain overview” on page 451
Related tasks:
v “Estimating space requirements for tables and indexes” on page 272
Related concepts:
v “Access plan” on page 452
v “Visual Explain overview” on page 451
Retrieving the access plan when using LONGDATACOMPAT
Possible cause:
If the value for LONGDATACOMPAT is set to 1 in the db2cli.ini file, the Visual
Explain access plan can be generated but cannot be retrieved.
Action:
As a workaround, a database alias can be created for that database with
LONGDATACOMPAT set to 0. For example:
DB2 UPDATE CLI CFG FOR SECTION db-alias-name USING LONGDATACOMPAT 0
To check the CLI configuration values, the following command can be used:
GET CLI CONFIGURATION [AT GLOBAL LEVEL] [FOR SECTION section-name]
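Putting the workaround together: a sketch assuming a database named SAMPLE
and a hypothetical alias SAMPLE2:
db2 catalog database sample as sample2
db2 update cli cfg for section sample2 using LONGDATACOMPAT 0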
Related concepts:
v “Access plan” on page 452
Using RUNSTATS
The optimizer uses the catalog tables from a database to obtain information about
the database, the amount of data in it, and other characteristics, and uses this
information to choose the best way to access the data. If current statistics are not
available, the optimizer might choose an inefficient access plan based on inaccurate
default statistics.
It is highly recommended that you use the RUNSTATS command to collect current
statistics on tables and indexes, especially if significant update activity has
occurred or new indexes have been created since the last time the RUNSTATS
command was executed. This provides the optimizer with the most accurate
information with which to determine the best access plan.
Be sure to use RUNSTATS after making your table updates; otherwise, the table
might appear to the optimizer to be empty. This problem is evident if cardinality
on the Operator Details window equals zero. In this case, complete your table
updates, rerun the RUNSTATS command and recreate the explain snapshots for
affected tables.
Note:
v Use RUNSTATS on all tables and indexes that might be accessed by a query.
v The quantile and frequent value statistics determine when data is unevenly
distributed. To update these values, use RUNSTATS on a table with the WITH
DISTRIBUTION clause.
v In addition to statistics, other factors (such as the ordering of qualifying rows,
table size, and buffer pool size) might influence how an access plan is selected.
The RUNSTATS command (which can be entered from the DB2 CLP prompt) can
provide different levels of statistics as shown in the following syntax:
Basic Statistics
Table:
RUNSTATS ON TABLE tablename
Index:
RUNSTATS ON TABLE tablename FOR INDEXES ALL
Both tables and indexes:
RUNSTATS ON TABLE tablename AND INDEXES ALL
Enhanced Statistics
Table:
RUNSTATS ON TABLE tablename WITH DISTRIBUTION
Index:
RUNSTATS ON TABLE tablename FOR DETAILED INDEXES ALL
Both tables and indexes:
RUNSTATS ON TABLE tablename WITH DISTRIBUTION AND
DETAILED INDEXES ALL
Note: In each of the above commands, the tablename must be fully qualified with
the schema name.
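For example, a sketch that collects enhanced statistics for a hypothetical table
EMPLOYEE in schema DB2ADMIN:
RUNSTATS ON TABLE db2admin.employee WITH DISTRIBUTION AND DETAILED INDEXES ALL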
Related concepts:
v “Visual Explain overview” on page 451
Related reference:
v “RUNSTATS command” in Command Reference
Related concepts:
v “Visual Explain overview” on page 451
Related tasks:
v “Viewing a graphical representation of an access plan” on page 473
v “Viewing explainable statements for a package” on page 474
v “Viewing the history of previously explained query statements” on page 476
Tasks:
v Use the Statement menu to print the graph, to dynamically explain an SQL or
XQuery statement, to view the text or optimized text, or to view optimization
parameters or statistics.
v Use the Node menu to view details or statistics on the nodes, or to get
additional help on each of the operators.
v Use the View menu to change the graph settings or to see an overview of the
graph. This is particularly useful for large graphs.
From this window, you can view details about the following objects:
v Table spaces and table space statistics
v Functions and function statistics
v Operators
v Partitioned databases
v Operands
– Column distribution statistics
– Index and index statistics
– Page fetch pairs statistics
– Column groups
– Referenced columns, referenced column groups, and referenced column
statistics
– Table function statistics and table statistics
To open the Access Plan Graph window, use one of the following methods:
1. Open either the Explainable Statements, or the Explained Statements History
window. Select Statement–>Show Access Plan. The Access Plan Graph
window opens.
2. Invoke Explain Query from either the Explainable Statements or the Explained
Statements History window. The Explain Query statement window opens as a
result of the dynamic explain.
Troubleshooting Tips:
v Retrieving the access plan when using LONGDATACOMPAT
v Visual Explain support for earlier and later releases
Related concepts:
v “Access plan” on page 452
v “Access plan graph node” on page 454
v “Operator” on page 457
v “Operand” on page 457
v “Cost” on page 455
v “Visual Explain overview” on page 451
Related tasks:
v “Viewing explainable statements for a package” on page 474
v “Viewing the history of previously explained query statements” on page 476
Related reference:
v “Retrieving the access plan when using LONGDATACOMPAT” on page 468
v “Visual Explain support for earlier and later releases” on page 478
v “Viewing SQL or XQuery statement details and statistics” on page 469
If an explain snapshot has been taken for a statement, you can use this list to view
additional information about that statement (such as its total cost and a graphical
view of its access plan).
Tasks:
v Use the Statement menu to view the history of previously explained SQL or
XQuery statements, to view a graphical representation of the access plan, to
dynamically explain a query statement, and to view text for a query statement.
The columns in the window provide the following information about SQL or
XQuery statements:
Statement number
The line number of the SQL or XQuery statement in the source module of
the application program. For static queries, this number corresponds to the
STMTNO column in the SYSCAT.STATEMENTS table.
Section number
The number of the section within the package that is associated with the
SQL or XQuery statement.
Explain snapshot
States whether an explain snapshot has been taken for the SQL or XQuery
statement. (If it has not been taken, you cannot view an access plan graph
for the statement.)
Total cost
The estimated total cost (in timerons) of returning the query results for the
selected SQL or XQuery statement. (Available only if the package
containing the statement has been explained previously.)
Query text
The first 100 characters of the query statement. (Use the scroll bar at the
bottom of the window to scroll through it.) To view the complete SQL or
XQuery statement, select Statement–>Show Query Text.
Troubleshooting Tips:
v Retrieving the access plan when using LONGDATACOMPAT
v Visual Explain support for earlier and later releases
Related concepts:
v “Package” on page 459
v “Explain snapshot” on page 456
v “Cost” on page 455
v “Access plan” on page 452
v “Visual Explain overview” on page 451
Related reference:
v “Retrieving the access plan when using LONGDATACOMPAT” on page 468
v “Visual Explain support for earlier and later releases” on page 478
v “Viewing SQL or XQuery statement details and statistics” on page 469
Tasks:
v Use the Statement menu to view a graphical representation of an access plan, to
dynamically explain a query statement, to view text for a query statement, or to
change or remove a query statement.
v Use the View menu, or the icons on the secondary toolbar to sort, filter, or
customize the explainable statements. You can also save the contents of this
window using the options in this menu.
The columns in the window provide the following information about the query
statements that have been explained:
Package name
The name of the package that either:
v Contains the SQL or XQuery statement (in the case of a static query)
v Issued the SQL or XQuery statement (in the case of a dynamic query).
Package creator
The user ID of the user who created the package.
Package version
The version number of the package.
Explain snapshot
States whether an explain snapshot has been taken for the SQL or XQuery
statement. (If it has not, you cannot view an access plan graph for the
statement.)
Latest bind
If the statement is contained in a package, this field indicates whether or
not the statement is associated with the latest bound package.
Dynamic explain
States whether the explained query statement was dynamic. (If it was not,
it was a Static SQL or XQuery statement in a package.)
Explain date
The date when the statement had an explain operation performed on it.
Explain time
The time when the statement had an explain operation performed on it.
Total cost
The estimated total cost (in timerons) of the statement.
Statement number
The line number of the SQL or XQuery statement in the source module of
the application program.
Section number
The number of the section within the package that is associated with the
SQL or XQuery statement.
Query number
The query number that is associated with the statement.
Query tag
The query tag that is associated with the statement.
Query text
The first 100 characters of the original SQL or XQuery statement. (Use the
scroll bar at the bottom of the window to scroll through it.) To view the
complete SQL or XQuery statement, select Statement–>Show Query Text.
Troubleshooting Tips:
v Retrieving the access plan when using LONGDATACOMPAT
v Visual Explain support for earlier and later releases
Related concepts:
v “Explained statement” on page 456
v “Package” on page 459
v “Explain snapshot” on page 456
v “Dynamic SQL or XQuery” on page 455
v “Static SQL or XQuery” on page 463
v “Cost” on page 455
v “Visual Explain overview” on page 451
Related tasks:
v “Viewing a graphical representation of an access plan” on page 473
v “Viewing explainable statements for a package” on page 474
Related reference:
v “Retrieving the access plan when using LONGDATACOMPAT” on page 468
v “Visual Explain support for earlier and later releases” on page 478
v “Viewing SQL or XQuery statement details and statistics” on page 469
Related concepts:
v “Visual Explain” on page 463
Planning for Security: Start by defining your objectives for a database access
control plan, and specifying who shall have access to what and under what
circumstances. Your plan should also describe how to meet these objectives by
using database functions, functions of other programs, and administrative
procedures.
To complete the installation of the DB2 database manager, a user ID, a group
name, and a password are required. The GUI-based DB2 database manager install
program creates default values for different user IDs and the group. Different
defaults are created, depending on whether you are installing on UNIX or
Windows platforms:
v On UNIX and Linux platforms, if you choose to create a DB2 instance in the
instance setup window, the DB2 database install program creates, by default,
different users for the DAS (dasusr), the instance owner (db2inst), and the
fenced user (db2fenc). Optionally, you can specify different user names.
The DB2 database install program appends a number from 1-99 to the default
user name, until a user ID that does not already exist can be created. For
example, if the users db2inst1 and db2inst2 already exist, the DB2 database
install program creates the user db2inst3. If a number greater than 10 is used,
the character portion of the name is truncated; for example, if the user db2inst9
already exists, the DB2 database install program creates the user db2ins10.
To minimize the risk that a user other than the administrator will learn the
defaults and use them in an improper fashion within databases and instances,
change the defaults during the install to a new or existing user ID of your choice.
Note: Response file installations do not use default values for user IDs or group
names. These values must be specified in the response file.
When working with DB2 Data Partitioning Feature (DPF) on UNIX operating
system environments, the DB2 database manager by default uses the rsh utility
(remsh on HP-UX) to run some commands on remote nodes. The rsh utility
transmits passwords in clear text over the network, which can be a security
exposure if the DB2 server is not on a secure network. You can use the
DB2RSHCMD registry variable to set the remote shell program to a more secure
alternative that avoids this exposure. One example of a more secure alternative is
ssh. See the DB2RSHCMD registry variable documentation for restrictions on
remote shell configurations.
After installing the DB2 database manager, also review, and change (if required),
the default privileges that have been granted to users. By default, the installation
process grants system administration (SYSADM) privileges to the following users
on each operating system:
v Windows environments: a valid DB2 database user name that belongs to the
Administrators group.
v UNIX platforms: a valid DB2 database user name that belongs to the primary
group of the instance owner.
SYSADM privileges are the most powerful set of privileges available within the
DB2 database manager. As a result, you might not want all of these users to have
SYSADM privileges by default. The DB2 database manager provides the
administrator with the ability to grant and revoke privileges to groups and
individual user IDs.
Any group defined as the system administration group (by updating sysadm_group)
must exist. The name of this group should allow for easy identification as the
group created for instance owners. User IDs and groups that belong to this group
have system administrator authority for their respective instances.
The administrator should consider creating an instance owner user ID that is easily
recognized as being associated with a particular instance. This user ID should have
as one of its groups the name of the SYSADM group created above. Another
recommendation is to use this instance-owner user ID only as a member of the
instance owner group and not to use it in any other group. This should control the
proliferation of user IDs and groups that can modify the instance, or any object
within the instance.
Related concepts:
v “General naming rules” on page 663
v “User, user ID and group naming rules” on page 666
v “Authentication” in Administration Guide: Planning
v “Authorization” in Administration Guide: Planning
v “Naming rules in a Unicode environment” on page 669
v “Naming rules in an NLS environment” on page 668
v “Location of the instance directory” on page 489
v “UNIX platform security considerations for users” on page 489
v “Windows platform security considerations for users” on page 485
Related reference:
v “Communications variables” in Performance Guide
When you log on, the system verifies your password by comparing it with
information stored in a security database. If the password is authenticated, the
system produces an access token. Every process run on your behalf uses a copy of
this access token.
An access token can also be acquired based on cached credentials. Once you have
been authenticated to the system, your credentials are cached by the operating
system.
The access token includes information about all of the groups you belong to: local
groups and various domain groups (global groups, domain local groups, and
universal groups).
Note: Group lookup using client authentication is not supported using a remote
connection even though access token support is enabled.
To enable access token support, you must use the db2set command to update the
DB2_GRP_LOOKUP registry variable. Your choices when updating this registry
variable include:
v TOKEN
This choice enables access token support to look up all groups that the user
belongs to at the location where the user account is defined. This location is
typically either at the domain or local to the DB2 database server.
v TOKENLOCAL
This choice enables access token support to look up all local groups that the user
belongs to on the DB2 database server.
v TOKENDOMAIN
This choice enables access token support to look up all domain groups that the
user belongs to on the domain.
When enabling access token support, there are several limitations that affect your
account management infrastructure. When this support is enabled, the DB2
database system collects group information about the user who is connecting to the
database. Subsequent operations after a successful CONNECT or ATTACH request
that have dependencies on other authorization IDs will still need to use
conventional group enumeration. The access token advantages of nested global
groups, domain local groups, and cached credentials will not be available. For
example, if, after a connection, the SET SESSION_USER is used to run under
another authorization ID, only the conventional group enumeration is used to
check what rights are given to the new authorization ID for the session. You will
still need to grant and revoke explicit privileges to individual authorization IDs
known to the DB2 database system, as opposed to the granting and revoking of
privileges to groups to which the authorization IDs belong.
You should consider using the DB2_GRP_LOOKUP registry variable and specify
the group lookup location to indicate where the DB2 database system should look
up groups using the conventional group enumeration methodology. For example,
db2set DB2_GRP_LOOKUP=LOCAL,TOKENLOCAL
This enables the access token support for enumerating local groups. Group lookup
for an authorization ID different from the connected user is performed at the DB2
database server.
db2set DB2_GRP_LOOKUP=,TOKEN
This enables the access token support for enumerating domain groups. Group
lookup for an authorization ID different from the connected user is performed
where the user ID is defined.
Access token support can be enabled with all authentications types except CLIENT
authentication.
Related concepts:
v “Security issues when installing the DB2 database manager” on page 481
To avoid adding a domain user to the Administrators group at the PDC, you
should create a global group and add the users (both domain and local) to which
you want to grant SYSADM authority. To do this, enter the following commands:
DB2STOP
DB2 UPDATE DBM CFG USING SYSADM_GROUP global_group
DB2START
Related concepts:
v “UNIX platform security considerations for users” on page 489
Group information for the LSA is gathered at the first group lookup request after
the DB2 database instance is started and will not be refreshed until the instance is
restarted.
Note: Applications running under the context of the local system account (LSA)
are supported on all Windows platforms, except Windows ME.
Related concepts:
v “Security issues when installing the DB2 database manager” on page 481
The DB2ADMNS and DB2USERS groups provide members with the following
abilities:
v DB2ADMNS
Full control over all DB2 objects (see the list of protected objects, below)
v DB2USERS
Read and Execute access for all DB2 objects located in the installation and
instance directories, but no access to objects under the database system directory
and limited access to IPC resources
Note: The meaning of Execute access depends on the object; for example, for a
.dll or .exe file, having Execute access means you have authority to
execute the file; for a directory, it means you have authority to
traverse the directory.
If the DB2 database system was installed without extended security enabled, you
can enable it by executing the command db2extsec (called db2secv82 in earlier
releases). To execute the db2extsec command you must be a member of the local
Administrators group so that you have the authority to modify the ACL of the
protected objects.
You can run the db2extsec command multiple times, if necessary; however, if this
is done, you cannot disable extended security unless you issue the db2extsec -r
command immediately after each execution of db2extsec.
CAUTION:
It is not recommended that you remove extended security once it has been enabled.
You can remove extended security by running the command db2extsec -r,
however, this will only succeed if no other database operations (such as creating a
database, creating a new instance, adding table spaces, and so on) have been
performed after enabling extended security. The safest way to remove the extended
security option is to uninstall the DB2 database system, delete all the relevant DB2
directories (including the database directories) and then reinstall the DB2 database
system without extended security enabled.
Protected objects:
The static objects that can be protected using the DB2ADMNS and DB2USERS
groups are:
v File system
– File
– Directory
v Services
v Registry keys
The privileges assigned to the DB2ADMNS and DB2USERS groups are listed in
Table 27. Each privilege below is granted to DB2ADMNS (Y) and not granted to
DB2USERS (N).
Table 27. Privileges for DB2ADMNS and DB2USERS groups
v Create a token object (SeCreateTokenPrivilege): Token manipulation (required
for certain token manipulation operations and used in authentication and
authorization)
v Replace a process level token (SeAssignPrimaryTokenPrivilege): Create process
as another user
v Increase quotas (SeIncreaseQuotaPrivilege): Create process as another user
v Act as part of the operating system (SeTcbPrivilege): LogonUser (required
prior to Windows XP in order to execute the LogonUser API for authentication
purposes)
v Generate security audits (SeSecurityPrivilege): Manipulate audit and security
log
v Take ownership of files or other objects (SeTakeOwnershipPrivilege): Modify
object ACLs
v Increase scheduling priority (SeIncreaseBasePriorityPrivilege): Modify the
process working set
v Backup files and directories (SeBackupPrivilege): Profile/Registry
manipulation (required to perform certain user profile and registry
manipulation routines: LoadUserProfile, RegSaveKey(Ex), RegRestoreKey,
RegReplaceKey, RegLoadKey(Ex))
v Restore files and directories (SeRestorePrivilege): Profile/Registry
manipulation (required to perform certain user profile and registry
manipulation routines: LoadUserProfile, RegSaveKey(Ex), RegRestoreKey,
RegReplaceKey, RegLoadKey(Ex))
v Debug programs (SeDebugPrivilege): Token manipulation (required for certain
token manipulation operations and used in authentication and authorization)
v Manage auditing and security log (SeAuditPrivilege): Generate auditing log
entries
v Log on as a service (SeServiceLogonRight): Run DB2 as a service
Related tasks:
v “Adding your user ID to the DB2ADMNS and DB2USERS user groups
(Windows)” in Quick Beginnings for DB2 Servers
Related reference:
v “Required user accounts for installation of DB2 server products (Windows)” in
Quick Beginnings for DB2 Servers
v “db2extsec - Set permissions for DB2 objects command” in Command Reference
For security reasons, we recommend you do not use the instance name as the
Fenced ID. However, if you are not planning to use fenced UDFs or stored
procedures, you can set the Fenced ID to the instance name instead of creating
another user ID.
Related concepts:
v “Windows platform security considerations for users” on page 485
Related concepts:
v “Instance creation” on page 34
Related tasks:
v “Creating additional instances” on page 38
Security plug-ins
Authentication in DB2® Universal Database (DB2 UDB) is done through security
plug-ins. For more information, see Security plug-ins in the Administrative API
Reference.
If you intend to access data sources from a federated database, you must consider
data source authentication processing and definitions for federated authentication
types.
Note: You can check the following web site for certification information on the
cryptographic routines used by the DB2 database management system to
perform encryption of the userid and password when using
SERVER_ENCRYPT authentication, and of the userid, password and user
data when using DATA_ENCRYPT authentication:
http://www.ibm.com/security/standards/st_evaluations.shtml.
Note: It is possible to trust all clients (trust_allclnts is YES) yet have some
of those clients be ones that do not have a native safe security
system for authentication.
You may also want to complete authentication at the server even for
trusted clients. To indicate where to validate trusted clients, you use the
trust_clntauth configuration parameter. The default for this parameter is
CLIENT.
KERBEROS
Used when both the DB2 client and server are on operating systems that
support the Kerberos security protocol. The Kerberos security protocol
performs authentication as a third party authentication service by using
conventional cryptography to create a shared secret key. This key becomes
a user’s credential and is used to verify the identity of users during all
occasions when local or network services are requested. The key eliminates
the need to pass the user name and password across the network as clear
text. Using the Kerberos security protocol enables the use of a single
sign-on to a remote DB2 database server. The KERBEROS authentication
type is supported on clients and servers running the Windows and AIX
operating systems and the Solaris operating environment.
Kerberos authentication works as follows:
1. A user logging on to the client machine using a domain account
authenticates to the Kerberos key distribution center (KDC) at the
domain controller. The key distribution center issues a ticket-granting
ticket (TGT) to the client.
2. During the first phase of the connection the server sends the target
principal name, which is the service account name for the DB2 database
server service, to the client. Using the server’s target principal name
and the ticket-granting ticket, the client requests a service ticket from
the ticket-granting service (TGS) which also resides at the domain
controller. If both the client’s ticket-granting ticket and the server’s
target principal name are valid, the TGS issues a service ticket to the
client. The principal name recorded in the database directory may now
be specified as name/instance@REALM. (This is in addition to the
current DOMAIN\userID and userID@xxx.xxx.xxx.com formats
accepted on Windows with DB2 UDB Version 7.1 and later.)
3. The client sends this service ticket to the server over the
communication channel (for example, TCP/IP).
4. The server validates the client’s service ticket. If the service ticket is
valid, the authentication is completed.
It is possible to catalog the databases on the client machine and explicitly
specify the Kerberos authentication type with the server’s target principal
name. In this way, the first phase of the connection can be bypassed.
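For example, assuming a server instance dbinst running on host
dbhost.example.com in the Kerberos realm EXAMPLE.COM (all of these names
are hypothetical), the client could be cataloged as follows from the CLP:
   db2 catalog tcpip node dbnode remote dbhost.example.com server 50000
   db2 catalog database sample at node dbnode authentication kerberos
       target principal dbinst/dbhost.example.com@EXAMPLE.COM
With the database cataloged this way, the client does not need to obtain the
target principal name from the server during the first phase of the connection.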
Note: The Kerberos authentication types are supported only on clients and
servers running the Windows and AIX operating systems and the
Solaris operating environment. Also, both client and server machines
must either belong to the same Windows domain or belong to
trusted domains. This authentication type should be used when the
server supports Kerberos and some, but not all, of the client
machines support Kerberos authentication.
DATA_ENCRYPT
The server accepts encrypted SERVER authentication schemes and the
encryption of user data. The authentication works exactly the same way as
that shown with SERVER_ENCRYPT. See that authentication type for more
information.
The following user data are encrypted when using this authentication type:
v SQL and XQuery statements.
v SQL program variable data.
v Output data from the server’s processing of an SQL or XQuery statement,
including a description of the data.
v Some or all of the answer set data resulting from a query.
v Large object (LOB) data streaming.
v SQLDA descriptors.
DATA_ENCRYPT_CMP
The server accepts encrypted SERVER authentication schemes and the
encryption of user data. In addition, this authentication type allows
compatibility with down-level products that do not support the
DATA_ENCRYPT authentication type. These products are permitted to
connect with the SERVER_ENCRYPT authentication type and without
encrypting user data. Products supporting the new authentication type
must use it. This authentication type is only valid in the server’s database
manager configuration file and is not valid when used on the CATALOG
DATABASE command.
GSSPLUGIN
Specifies that the server uses a GSS-API plug-in to perform authentication.
If the client authentication is not specified, the server returns a list of
server-supported plug-ins, including any Kerberos plug-in that is listed in
the srvcon_gssplugin_list database manager configuration parameter, to the
client. The client selects the first plug-in found in the client plug-in
directory from the list. If the client does not support any plug-in in the list,
the client is authenticated using the Kerberos authentication scheme (if it is
returned).
Related concepts:
v “Authentication considerations for remote clients” on page 495
v “DB2 and Windows security introduction” on page 675
v “Partitioned database authentication considerations” on page 496
Related reference:
v “authentication - Authentication type configuration parameter” in Performance
Guide
v “trust_allclnts - Trust all clients configuration parameter” in Performance Guide
v “trust_clntauth - Trusted clients authentication configuration parameter” in
Performance Guide
The authentication type is not required. If it is not specified, the client defaults to
SERVER_ENCRYPT. However, if the server does not support
SERVER_ENCRYPT, the client attempts to retry using a value supported by the
server. If the server supports multiple authentication types, the client does not
choose among them; instead, an error is returned to ensure that the correct
authentication type is used. In this case, the client must catalog the database using
a supported authentication type. If an authentication type is specified,
authentication can begin immediately, provided that the specified value matches
the value at the server. If a mismatch is detected, the DB2 database system
attempts to recover. Recovery may result in more flows to reconcile the difference,
or in an error if the DB2 database system cannot recover. In the case of a
mismatch, the value at the server is assumed to be correct.
When these are all true, the client cannot connect to the server. To allow the
connection, you must either upgrade your client to Version 8, or have your
gateway level at Version 8 FixPak 6 or earlier.
Related concepts:
v “Authentication methods for your server” on page 490
Note: For 64-bit Windows, the plugin library is called IBMkrb564.dll. Furthermore,
the actual plugin source code for the UNIX and Linux plugin, IBMkrb5.C, is
available in the sqllib/samples/security/plugins directory.
Kerberos set-up:
DB2 database system and its support of Kerberos relies upon the Kerberos layer
being installed and configured properly on all machines involved prior to the
involvement of DB2 database. This includes, but is not necessarily limited to, the
following requirements:
1. The client and server machines and principals must belong to the same realm,
or to trusted realms (trusted domains in Windows terminology)
2. Creation of appropriate principals
3. Creation of server keytab files, where appropriate
4. All machines involved must have their system clocks synchronized (Kerberos
typically permits a 5-minute time skew; otherwise, a preauthentication error
may occur when obtaining credentials)
The sole concern of the DB2 database system will be whether the Kerberos security
context is successfully created based on the credentials provided by the connecting
application (that is, authentication). Other Kerberos features, such as the signing or
encryption of messages, will not be used. Furthermore, whenever available, mutual
authentication will be supported.
The principal may be in either a 2-part or multi-part format (that is,
name@REALM or name/instance@REALM). As the “name” part will be used in the
authorization ID (AUTHID) mapping, the name must adhere to the DB2 database
naming rules. This means that the name may be up to 30 characters long and it
must adhere to the existing restrictions on the choice of characters used. (AUTHID
mapping is discussed in a later topic.)
Unlike operating system user IDs whose scope of existence is normally restricted
to a single machine (NIS being a notable exception), Kerberos principals have the
ability to be authenticated in realms other than their own. The potential problem of
duplicated principal names is avoided by using the realm name to fully qualify the
principal. In Kerberos, a fully qualified principal takes the form
name/instance@REALM where the instance field may actually be multiple instances
separated by a “/”, that is, name/instance1/instance2@REALM, or it may be omitted
altogether. The obvious restriction is that the realm name must be unique within
all the realms defined within a network. The problem for DB2 database is that in
order to provide a simple mapping from the principal to the AUTHID, a
one-to-one mapping between the principal name, that is, the “name” in the fully
qualified principal, and the AUTHID is desirable. A simple mapping is needed as
the AUTHID is used as the default schema in DB2 database and should be easily
and logically derived. As a result, the database administrator needs to be aware of
the following potential problems:
v Principals from different realms but with the same name will be mapped to the
same AUTHID.
v Principals with the same name but different instances will be mapped to the
same AUTHID.
On UNIX or Linux, the server principal name for the DB2 database instance is
assumed to be <instance name>/<fully qualified hostname>@REALM. This principal
must be able to accept Kerberos security contexts and it must exist before starting
the DB2 database instance since the server name is reported to DB2 database by
the plugin at initialization time.
On Windows, the server principal is taken to be the domain account under which
the DB2 database service started. An exception to this is the instance may be
started by the local SYSTEM account, in which case, the server principal name is
reported as host/<hostname>; this is only valid if both the client and server belong
to Windows domains.
Windows does not support names with more than two parts. This poses a problem when
a Windows client attempts to connect to a UNIX server. As a result, a Kerberos
principal to Windows account mapping may need to be set up in the Windows
domain if interoperability with UNIX Kerberos is required. (Please refer to the
appropriate Microsoft documentation for relevant instructions.)
You can override the Kerberos server principal name used by the DB2 server on
UNIX and Linux operating systems: set the DB2_KRB5_PRINCIPAL environment
variable to the desired fully qualified server principal name before starting the
instance.
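As a sketch, using the same hypothetical instance, host, and realm names as
above:
   export DB2_KRB5_PRINCIPAL=dbinst/dbhost.example.com@EXAMPLE.COM
   db2start
The override takes effect only when the instance is started, because the server
principal name is reported to the DB2 database system at initialization time.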
Every Kerberos service on UNIX or Linux that wishes to accept security context
requests must place its credentials in a keytab file. This applies to the principals
used by DB2 database as server principals. Only the default keytab file is searched
for the server’s key. For instructions on adding a key to the keytab file, please refer
to the documentation provided with the Kerberos product.
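For example, with an MIT Kerberos KDC (the principal name is the same
hypothetical one used above; other Kerberos products use different commands),
the server’s key could be created and added to the default keytab file as follows:
   kadmin: addprinc -randkey dbinst/dbhost.example.com@EXAMPLE.COM
   kadmin: ktadd dbinst/dbhost.example.com@EXAMPLE.COM
Because only the default keytab file is searched, the key should not be placed in
an alternative keytab file.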
Kerberos is an authentication protocol that does not possess the concept of groups.
As a result, DB2 database relies upon the local operating system to obtain a group
list for the Kerberos principal. For UNIX or Linux, this requires that an equivalent
system account should exist for each principal. For example, for the principal
name@REALM, DB2 database collects group information by querying the local
operating system for all group names to which the operating system user name
belongs. If an operating system user does not exist, then the AUTHID will only
belong to the PUBLIC group. Windows, on the other hand, automatically associates
a domain account to a Kerberos principal and the additional step to create a
separate operating system account is not required.
However, if the authentication information is not provided, then the server sends
the client the name of the server principal.
There are several things to consider when creating a Kerberos plugin:
v Write a Kerberos plugin as a GSS-API plugin with the notable exception that the
plugintype in the function pointer array returned to DB2 database in the
initialization function must be set to DB2SEC_PLUGIN_TYPE_KERBEROS.
v Under certain conditions, the server principal name may be reported to the
client by the server. As such, the principal name should not be specified in the
GSS_C_NT_HOSTBASED_SERVICE format (service@host), since DRDA
stipulates that the principal name be in the GSS_C_NT_USER_NAME format
(server/host@REALM).
v In a typical situation, the default keytab file may be specified by the
KRB5_KTNAME environment variable. However, as the server plugin will run
within a DB2 database engine process, this environment variable may not be
accessible.
Linux prerequisites:
The provided DB2 Kerberos security plug-in is supported with Red Hat Enterprise
Linux Advanced Server 3 with the IBM Network Authentication Service (NAS) 1.4
client.
For connections to zSeries and iSeries, the database must be cataloged with the
AUTHENTICATION KERBEROS parameter, and the TARGET PRINCIPAL
parameter must be explicitly specified.
Windows issues:
When you are using Kerberos on Windows platforms, you need to be aware of the
following issues:
v Due to the manner in which Windows detects and reports some errors, the
following conditions result in an unexpected client security plug-in error
(SQL30082N, rc=36):
– Expired account
– Invalid password
– Expired password
– Password change forced by administrator
– Disabled account
Furthermore, in all cases, the DB2 administration log or db2diag.log will
indicate "Logon failed" or "Logon denied".
v If a domain account name is also defined locally, connections explicitly
specifying the domain name and password will fail with the following error: The
Local Security Authority cannot be contacted.
Related concepts:
v “Authentication methods for your server” on page 490
The database manager requires that each user be specifically authorized, either
implicitly or explicitly, to use each database function needed to perform a specific
task. Explicit authorities or privileges are granted to the user (GRANTEETYPE of U
in the database catalogs). Implicit authorities or privileges are granted to a group to
which the user belongs (GRANTEETYPE of G in the database catalogs).
Administrative authority:
The person or persons holding administrative authority are charged with the task
of controlling the database manager and are responsible for the safety and integrity
of the data. Those with administrative authority levels of SYSADM and DBADM
implicitly have all privileges on all objects (except objects that pertain to database
security) and control who will have access to the database manager and the
extent of this access.
Figure 5 illustrates the relationship between authorities and their span of control
(database, database manager).
[Figure 5: Authorities and their span of control. At the instance (database
manager) level, SYSADM has the broadest span of control, followed by
SYSCTRL, SYSMAINT, and SYSMON. Database authorities apply within
individual databases, shown here for the CUSTOMER and EMPLOYEE
databases.]
Privileges:
Privileges are those activities that a user is allowed to perform. Authorized users
can create objects, have access to objects they own, and can pass on privileges on
their own objects to other users by using the GRANT statement.
Note: The CONTROL privilege only applies to tables, views, nicknames, indexes,
and packages.
If a different user requires the CONTROL privilege to that object, a user with
SYSADM or DBADM authority could grant the CONTROL privilege to that object.
The CONTROL privilege cannot be revoked from the object owner; however, the
object owner can be changed by using the TRANSFER OWNERSHIP statement.
Individual privileges and database authorities allow a specific function, but do not
include the right to grant the same privileges or authorities to other users. The
right to grant table, view, schema, package, routine, and sequence privileges to
others can be extended to other users through the WITH GRANT OPTION on the
GRANT statement. However, the WITH GRANT OPTION does not allow the
person granting the privilege to revoke the privilege once granted. You must have
SYSADM authority, DBADM authority, or the CONTROL privilege to revoke the
privilege.
Privileges on objects in a package or routine: When a user has the privilege to execute
a package or routine, they do not necessarily require specific privileges on the
objects used in the package or routine. If the package or routine contains static
SQL or XQuery statements, the privileges of the owner of the package are used for
those statements. If the package or routine contains dynamic SQL or XQuery
statements, the authorization ID used for privilege checking depends on the setting
of the DYNAMICRULES bind option of the package issuing the dynamic query
statements, and whether those statements are issued when the package is being
used in the context of a routine.
Revoking a privilege from an authorization name does not revoke that same
privilege from any other authorization names that were granted the privilege by
that authorization name. For example, assume that CLAIRE grants SELECT WITH
GRANT OPTION to RICK, then RICK grants SELECT to BOBBY and CHRIS. If
CLAIRE revokes the SELECT privilege from RICK, BOBBY and CHRIS still retain
the SELECT privilege.
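The following sketch illustrates this chain, using a hypothetical ORDERS table
(the statements are issued by CLAIRE and RICK, as noted in the comments):
   GRANT SELECT ON TABLE ORDERS TO USER RICK WITH GRANT OPTION  -- issued by CLAIRE
   GRANT SELECT ON TABLE ORDERS TO USER BOBBY                   -- issued by RICK
   GRANT SELECT ON TABLE ORDERS TO USER CHRIS                   -- issued by RICK
   REVOKE SELECT ON TABLE ORDERS FROM USER RICK                 -- issued by CLAIRE
After the REVOKE, BOBBY and CHRIS still hold the SELECT privilege on
ORDERS.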
Object ownership:
Note: One exception exists. If the AUTHORIZATION option is specified for the
CREATE SCHEMA statement, any other object that is created as part of the
CREATE SCHEMA operation is owned by the authorization ID specified by
the AUTHORIZATION option. Any objects that are created in the schema
after the initial CREATE SCHEMA operation, however, are owned by the
authorization ID associated with the specific CREATE statement.
Privileges are assigned to the object owner based on the type of object being
created:
v The CONTROL privilege is implicitly granted on newly created tables, indexes,
and packages. This privilege allows the object creator to access the database
object, and to grant and revoke privileges to or from other users on that object.
If a different user requires the CONTROL privilege to that object, a user with
SYSADM or DBADM authority must grant the CONTROL privilege to that
object. The CONTROL privilege cannot be revoked by the object owner.
v The CONTROL privilege is implicitly granted on newly created views if the
object owner has the CONTROL privilege on all the tables, views, and
nicknames referenced by the view definition.
v Other objects like triggers, routines, sequences, table spaces, and buffer pools do
not have a CONTROL privilege associated with them. The object owner does,
however, automatically receive each of the privileges associated with the object
(and can provide these privileges to other users, where supported, by using the
WITH GRANT option of the GRANT statement). In addition, the object owner
can alter, add a comment on, or drop the object. These authorizations are
implicit for the object owner and cannot be revoked.
Certain privileges on the object, such as altering a table, can be granted by the
owner, and can be revoked from the owner by a user who has SYSADM or
DBADM authority. Certain privileges on the object, such as commenting on a table,
cannot be granted by the owner and cannot be revoked from the owner. Use the
TRANSFER OWNERSHIP statement to move these privileges to another user.
When an object is created, the authorization ID of the statement is the owner of the
object. However, when a package is created and the OWNER bind option is
specified, the owner of objects created by the static SQL statements in the package
is the value of the OWNER bind option. In addition, if the AUTHORIZATION
clause is specified on a CREATE SCHEMA statement, the authorization name
specified after the AUTHORIZATION keyword is the owner of the schema.
A security administrator (SECADM) or the object owner can use the TRANSFER
OWNERSHIP statement to change the ownership of a database object. An
administrator can therefore create an object on behalf of an authorization ID, by
creating the object using the authorization ID as the qualifier, and then using the
TRANSFER OWNERSHIP statement to transfer the ownership to that
authorization ID.
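A minimal sketch, using hypothetical object and user names:
   CREATE TABLE WALID.SALES (ID INTEGER)
   TRANSFER OWNERSHIP OF TABLE WALID.SALES TO USER WALID PRESERVE PRIVILEGES
The PRESERVE PRIVILEGES clause indicates that the previous owner retains the
privileges that were explicitly granted on the object.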
Related concepts:
v “Controlling access to database objects” on page 519
v “Database administration authority (DBADM)” on page 509
v “Database authorities” on page 511
v “Index privileges” on page 518
v “Indirect privileges through a package” on page 523
v “LOAD authority” on page 511
v “Package privileges” on page 517
v “Routine privileges” on page 518
v “Schema privileges” on page 514
v “Security administration authority (SECADM)” on page 508
v “Sequence privileges” on page 518
v “System administration authority (SYSADM)” on page 506
v “System control authority (SYSCTRL)” on page 507
v “System maintenance authority (SYSMAINT)” on page 508
v “System monitor authority (SYSMON)” on page 510
v “Table and view privileges” on page 515
v “Table space privileges” on page 515
Related reference:
v “GRANT (Database Authorities) statement” in SQL Reference, Volume 2
Only a user with SYSADM authority can perform the following functions:
v Migrate a database
v Change the database manager configuration file (including specifying the groups
having SYSCTRL, SYSMAINT, or SYSMON authority)
v Grant and revoke DBADM authority
v Grant and revoke SECADM authority
Note: When a user with SYSADM authority creates a database, that user is
automatically granted explicit DBADM authority on the database. If the
database creator is removed from the SYSADM group and you want to
prevent that user from accessing that database as a DBADM, you must
explicitly revoke the user’s DBADM authority.
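For example, the group that holds SYSCTRL authority is specified through the
database manager configuration (the group name here is hypothetical), and the
change takes effect when the instance is restarted:
   db2 update dbm cfg using SYSCTRL_GROUP dbctrlgrp
   db2stop
   db2start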
Related concepts:
v “Data encryption” on page 527
v “Security administration authority (SECADM)” on page 508
v “System control authority (SYSCTRL)” on page 507
v “System maintenance authority (SYSMAINT)” on page 508
v “System monitor authority (SYSMON)” on page 510
In addition, a user with SYSCTRL authority can perform the functions of users
with system maintenance authority (SYSMAINT) and system monitor authority
(SYSMON).
Users with SYSCTRL authority also have the implicit privilege to connect to a
database.
Note: When users with SYSCTRL authority create databases, they are
automatically granted explicit DBADM authority on the database. If the
database creator is removed from the SYSCTRL group, and if you want to
also prevent them from accessing that database as a DBADM, you must
explicitly revoke this DBADM authority.
Only a user with SYSMAINT or higher system authority can do the following:
v Update database configuration files
v Back up a database or table space
v Restore to an existing database
v Perform roll forward recovery
v Start or stop an instance
v Restore a table space
v Run trace
v Take database system monitor snapshots of a database manager instance or its
databases.
Users with SYSMAINT authority also have the implicit privilege to connect to a
database, and can perform the functions of users with system monitor authority
(SYSMON).
Related concepts:
v “Database administration authority (DBADM)” on page 509
v “System monitor authority (SYSMON)” on page 510
Related concepts:
v “Database authorities” on page 511
v “System administration authority (SYSADM)” on page 506
v “System control authority (SYSCTRL)” on page 507
v “System maintenance authority (SYSMAINT)” on page 508
When DBADM authority is granted, the following database authorities are also
explicitly granted for the same database (and are not automatically revoked if the
DBADM authority is later revoked):
v BINDADD
v CONNECT
v CREATETAB
v CREATE_EXTERNAL_ROUTINE
v CREATE_NOT_FENCED_ROUTINE
v IMPLICIT_SCHEMA
v QUIESCE_CONNECT
v LOAD
Only a user with SYSADM authority can grant or revoke DBADM authority. Users
with DBADM authority can grant privileges on the database to others and can
revoke any privilege from any user regardless of who granted it.
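For example, a user with SYSADM authority could grant and later revoke
DBADM authority as follows (the user name is hypothetical):
   GRANT DBADM ON DATABASE TO USER WALID
   REVOKE DBADM ON DATABASE FROM USER WALID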
Holding the DBADM, or higher, authority for a database allows a user to perform
these actions on that database:
v Read log files
v Create, activate, and drop event monitors.
While DBADM authority does provide some of the same abilities as other
authorities, it does not provide any of the abilities of the SECADM authority. The
abilities provided by the SECADM authority are not provided by any other
authority.
Related concepts:
v “Database authorities” on page 511
v “Implicit schema authority (IMPLICIT_SCHEMA) considerations” on page 513
v “LOAD authority” on page 511
v “Security administration authority (SECADM)” on page 508
v “System administration authority (SYSADM)” on page 506
v “System control authority (SYSCTRL)” on page 507
v “System maintenance authority (SYSMAINT)” on page 508
SYSMON authority enables the user to use the following SQL table functions:
v All snapshot table functions without previously running
SYSPROC.SNAP_WRITE_FILE
SYSPROC.SNAP_WRITE_FILE takes a snapshot and saves its content into a file. If
any snapshot table functions are called with null input parameters, the file
content is returned instead of a real-time system snapshot.
Users with the SYSADM, SYSCTRL, or SYSMAINT authority level also possess
SYSMON authority.
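For example, a user holding SYSMON (or higher) authority can take monitor
snapshots from the CLP:
   db2 get snapshot for database manager
   db2 get snapshot for all databases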
LOAD authority
Users having LOAD authority at the database level, as well as INSERT privilege on
a table, can use the LOAD command to load data into a table.
Users having LOAD authority at the database level, as well as INSERT privilege on
a table, can LOAD RESTART or LOAD TERMINATE if the previous load
operation is a load to insert data.
Users having LOAD authority at the database level, as well as the INSERT and
DELETE privileges on a table, can use the LOAD REPLACE command.
If the previous load operation was a load replace, the DELETE privilege must also
have been granted to that user before the user can LOAD RESTART or LOAD
TERMINATE.
If the exception tables are used as part of a load operation, the user must have
INSERT privilege on the exception tables.
A user with this authority can run the QUIESCE TABLESPACES FOR TABLE,
RUNSTATS, and LIST TABLESPACES commands.
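As a sketch, using a hypothetical user and the STAFF table, after the following
grants the user can load data into the table from a delimited file:
   GRANT LOAD ON DATABASE TO USER MARIA
   GRANT INSERT ON TABLE STAFF TO USER MARIA
MARIA can then run, for example:
   db2 load from staff.del of del insert into staff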
Related concepts:
v “Table and view privileges” on page 515
v “Privileges, authorities, and authorizations required to use Load” in Data
Movement Utilities Guide and Reference
Related reference:
v “LIST TABLESPACES command” in Command Reference
v “LOAD command” in Command Reference
v “QUIESCE TABLESPACES FOR TABLE command” in Command Reference
v “RUNSTATS command” in Command Reference
Database authorities
Each database authority allows the authorization ID holding it to perform some
particular type of action on the database as a whole. Database authorities are
different from privileges, which allow a certain action to be taken on a particular
database object, such as a table or an index. There are ten different database
authorities.
SECADM
Gives the holder the ability to configure many things related to security of
the database, and also to transfer ownership of database objects. For
instance, all objects that are part of the label-based access control (LBAC)
feature can be created, dropped, granted, or revoked by a user that holds
SECADM authority. SECADM specific abilities cannot be exercised by any
other authority, not even SYSADM.
Attention: The database manager does not protect its storage or control
blocks from UDFs or procedures that are “not fenced”. A user
with this authority must, therefore, test such UDFs extremely
well before registering them as “not fenced”.
IMPLICIT_SCHEMA
Allows any user to create a schema implicitly by creating an object using a
CREATE statement with a schema name that does not already exist.
SYSIBM becomes the owner of the implicitly created schema and PUBLIC
is given the privilege to create objects in this schema.
LOAD
Allows the holder to load data into a table.
QUIESCE_CONNECT
Allows the holder to access the database while it is quiesced.
Only authorization IDs with the SYSADM authority can grant the SECADM and
DBADM authorities. All other authorities can be granted by authorization IDs that
hold SYSADM or DBADM authorities.
Authorization ID privileges
Authorization ID privileges involve actions on authorization IDs. There is currently
only one such privilege: the SETSESSIONUSER privilege.
Note: When you migrate a DB2 UDB database to DB2 Version 9.1, authorization
IDs with explicit DBADM authority on that database will automatically be
granted SETSESSIONUSER privilege on PUBLIC. This prevents breaking
applications that rely on authorization IDs with DBADM authority being
able to set the session authorization ID to any authorization ID. This does
not happen when the authorization ID has SYSADM authority but has not
been explicitly granted DBADM.
Related concepts:
v “Authorization, privileges, and object ownership” on page 501
Related reference:
v “SET SESSION AUTHORIZATION statement” in SQL Reference, Volume 2
If control of who can implicitly create schema objects is required for the database,
IMPLICIT_SCHEMA database authority should be revoked from PUBLIC (see the
example after this list). Once this is done, there are only three ways that a schema
object is created:
v Any user can create a schema using their own authorization name on a CREATE
SCHEMA statement.
v Any user with DBADM authority can explicitly create any schema which does
not already exist, and can optionally specify another user as the owner of the
schema.
v Any user with DBADM authority has IMPLICIT_SCHEMA database authority
(independent of PUBLIC) so that they can implicitly create a schema with any
name at the time they are creating other database objects. SYSIBM becomes the
owner of the implicitly created schema and PUBLIC has the privilege to create
objects in the schema.
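A minimal sketch of revoking the authority from PUBLIC:
   REVOKE IMPLICIT_SCHEMA ON DATABASE FROM PUBLIC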
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
[Figure: privileges associated with each type of database object. Sequences:
USAGE, ALTER. Table spaces: USE. Indexes: CONTROL. Nicknames: CONTROL,
ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, UPDATE. Views:
CONTROL, DELETE, INSERT, SELECT, UPDATE. Packages: CONTROL, BIND,
EXECUTE. Schema owners: ALTERIN, CREATEIN, DROPIN. Tables: CONTROL,
ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, UPDATE.
Procedures, functions, and methods: EXECUTE. Server: PASSTHRU.]
The owner of the schema has all of these privileges and the ability to grant them to
others. The objects that are manipulated within the schema object include: tables,
views, indexes, packages, data types, functions, triggers, procedures, and aliases.
Related tasks:
v “Granting privileges” on page 519
Related reference:
v “ALTER SEQUENCE statement” in SQL Reference, Volume 2
The owner of the table space, typically the creator who has SYSADM or SYSCTRL
authority, has the USE privilege and the ability to grant this privilege to others. By
default, at database creation time the USE privilege for table space USERSPACE1 is
granted to PUBLIC, though this privilege can be revoked.
The USE privilege cannot be used with SYSCATSPACE or any system temporary
table spaces.
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Related reference:
v “CREATE TABLE statement” in SQL Reference, Volume 2
The privilege to grant these privileges to others may also be granted using the
WITH GRANT OPTION on the GRANT statement.
Note: When a user or group is granted CONTROL privilege on a table, all other
privileges on that table are automatically granted WITH GRANT OPTION.
If you subsequently revoke the CONTROL privilege on the table from a
user, that user will still retain the other privileges that were automatically
granted. To revoke all the privileges that are granted with the CONTROL
privilege, you must either explicitly revoke each individual privilege or
specify the ALL keyword on the REVOKE statement, for example:
REVOKE ALL
ON EMPLOYEE FROM USER HERON
When working with typed tables, there are implications regarding table and view
privileges.
will return the object identifier and Employee_t attributes for both employees and
managers. Similarly, the update operation:
UPDATE Employee SET Salary = Salary + 1000
A user with SELECT privilege on Employee will be able to perform this SELECT
operation even if they do not have an explicit SELECT privilege on Manager.
However, such a user will not be permitted to perform a SELECT operation
directly on the Manager subtable, and will therefore not be able to access any of
the non-inherited columns of the Manager table.
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE VIEW statement” in SQL Reference, Volume 2
v “SELECT statement” in SQL Reference, Volume 2
Package privileges
A package is a database object that contains the information needed by the
database manager to access data in the most efficient way for a particular
application program. Package privileges enable a user to create and manipulate
packages. The user must have CONNECT authority on the database to use any of
the following privileges:
v CONTROL provides the user with the ability to rebind, drop, or execute a
package as well as the ability to extend those privileges to others. The creator of
a package automatically receives this privilege. A user with CONTROL privilege
is granted the BIND and EXECUTE privileges, and can also grant these
privileges to other users by using the GRANT statement. (If a privilege is
granted using WITH GRANT OPTION, a user who receives the BIND or
EXECUTE privilege can, in turn, grant this privilege to other users.) To grant
CONTROL privilege, the user must have SYSADM or DBADM authority.
v BIND privilege on a package allows the user to rebind or bind that package and
to add new package versions of the same package name and creator.
v EXECUTE allows the user to execute or run a package.
Note: All package privileges apply to all VERSIONs that share the same package
name and creator.
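For example, for a hypothetical package DEPT.PAYROLL:
   GRANT EXECUTE ON PACKAGE DEPT.PAYROLL TO USER HERON
   GRANT BIND ON PACKAGE DEPT.PAYROLL TO USER HERON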
Related concepts:
v “Database authorities” on page 511
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
The table-level INDEX privilege allows a user to create an index on that table.
Related concepts:
v “Table and view privileges” on page 515
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Sequence privileges
The creator of a sequence automatically receives the USAGE and ALTER privileges
on the sequence. The USAGE privilege is needed to use NEXT VALUE and
PREVIOUS VALUE expressions for the sequence. To allow other users to use the
NEXT VALUE and PREVIOUS VALUE expressions, sequence privileges must be
granted to PUBLIC. This allows all users to use the expressions with the specified
sequence.
ALTER privilege on the sequence allows the user to perform tasks such as
restarting the sequence or changing the increment for future sequence values. The
creator of the sequence can grant the ALTER privilege to other users, and if WITH
GRANT OPTION is used, these users can, in turn, grant these privileges to other
users.
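For example, for a hypothetical sequence ORDER_SEQ:
   GRANT USAGE ON SEQUENCE ORDER_SEQ TO PUBLIC
   GRANT ALTER ON SEQUENCE ORDER_SEQ TO USER HERON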
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Related reference:
v “ALTER SEQUENCE statement” in SQL Reference, Volume 2
Routine privileges
Execute privileges involve actions on all types of routines, such as functions,
procedures, and methods, within a database. A user who holds the EXECUTE
privilege can invoke that routine, create a function that is sourced from that
routine (applies to functions only), and reference the routine in any DDL statement
such as CREATE VIEW or CREATE TRIGGER.
The user who defines the externally stored procedure, function, or method receives
EXECUTE WITH GRANT privilege. If the EXECUTE privilege is granted to
another user via WITH GRANT OPTION, that user can, in turn, grant the
EXECUTE privilege to another user.
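For example, for a hypothetical procedure PAYROLL.CALC_BONUS:
   GRANT EXECUTE ON PROCEDURE PAYROLL.CALC_BONUS TO USER HERON WITH GRANT OPTION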
Related concepts:
v “Using the system catalog for security issues” on page 609
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Granting privileges
Restrictions:
To grant privileges on most database objects, the user must have SYSADM
authority, DBADM authority, or CONTROL privilege on that object; or, the user
must hold the privilege WITH GRANT OPTION. Privileges can be granted only on
existing objects. To grant CONTROL privilege to someone else, the user must have
SYSADM or DBADM authority. To grant DBADM authority, the user must have
SYSADM authority.
Procedure:
On operating systems where users and groups exist with the same name, you
should specify whether you are granting the privilege to the user or group. Both
the GRANT and REVOKE statements support the keywords USER and GROUP. If
these optional keywords are not used, the database manager checks the operating
system security facility to determine whether the authorization name identifies a
user or a group. If the authorization name could be both a user and a group, an
error is returned.
The following example grants SELECT privileges on the EMPLOYEE table to the
user HERON:
GRANT SELECT
ON EMPLOYEE TO USER HERON
The following example grants SELECT privileges on the EMPLOYEE table to the
group HERON:
GRANT SELECT
ON EMPLOYEE TO GROUP HERON
In the Control Center, you can use the Schema Privileges notebook, the Table Space
Privileges notebook, and the View Privileges notebook to grant and revoke
privileges for these database objects. To open one of these notebooks, follow these
steps:
1. In the Control Center, expand the object tree until you find the folder
containing the objects you want to work with, for example, the Views folder.
2. Click the folder.
Any existing database objects in this folder are displayed in the contents pane.
3. Right-click the object of interest in the contents pane and select Privileges in
the pop-up menu.
The appropriate Privileges notebook opens.
Related concepts:
v “Controlling access to database objects” on page 519
Related tasks:
v “Revoking privileges” on page 521
Related reference:
v “GRANT (Database Authorities) statement” in SQL Reference, Volume 2
v “GRANT (Index Privileges) statement” in SQL Reference, Volume 2
v “GRANT (Package Privileges) statement” in SQL Reference, Volume 2
v “GRANT (Routine Privileges) statement” in SQL Reference, Volume 2
v “GRANT (Schema Privileges) statement” in SQL Reference, Volume 2
v “GRANT (Sequence Privileges) statement” in SQL Reference, Volume 2
v “GRANT (Server Privileges) statement” in SQL Reference, Volume 2
v “GRANT (Table Space Privileges) statement” in SQL Reference, Volume 2
v “GRANT (Table, View, or Nickname Privileges) statement” in SQL Reference,
Volume 2
Restrictions:
If an explicitly granted table (or view) privilege is revoked from a user with
DBADM authority, privileges will not be revoked from other views defined on that
table. This is because the view privileges are available through the DBADM
authority and are not dependent on explicit privileges on the underlying tables.
Procedure:
If a privilege has been granted to both a user and a group with the same name,
you must specify the GROUP or USER keyword when revoking the privilege. The
following example revokes the SELECT privilege on the EMPLOYEE table from the
user HERON:
REVOKE SELECT
ON EMPLOYEE FROM USER HERON
The following example revokes the SELECT privilege on the EMPLOYEE table
from the group HERON:
REVOKE SELECT
ON EMPLOYEE FROM GROUP HERON
Note that revoking a privilege from a group may not revoke it from all members
of that group. If an individual name has been directly granted a privilege, it will
keep it until that privilege is directly revoked.
If a table privilege is revoked from a user, privileges are also revoked on any view
created by that user which depends on the revoked table privilege. However, only
the privileges implicitly granted by the system are revoked. If a privilege on the
view was granted directly by another user, the privilege is still held.
You may have a situation where you want to GRANT a privilege to a group and
then REVOKE the privilege from just one member of the group. There are only a
couple of ways to do that without receiving the error message SQL0556N:
v You can remove the member from the group; or, create a new group with fewer
members and GRANT the privilege to the new group.
v You can REVOKE the privilege from the group and then GRANT it to individual
users (authorization IDs).
All packages that are dependent on revoked privileges are marked invalid, but can
be validated if rebound by a user with appropriate authority. Packages can also be
rebuilt if the privileges are subsequently granted again to the binder of the
application; running the application will trigger a successful implicit rebind. If
privileges are revoked from PUBLIC, all packages bound by users who were able
to bind only because of PUBLIC privileges are invalidated. If DBADM
authority is revoked from a user, all packages bound by that user are invalidated
including those associated with database utilities. Attempting to use a package that
has been marked invalid causes the system to attempt to rebind the package. If
this rebind attempt fails, an error occurs (SQLCODE -727). In this case, the
packages must be explicitly rebound by a user with:
v Authority to rebind the packages
v Appropriate authority for the objects used within the packages
These packages should be rebound at the time the privileges are revoked.
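For example, a hypothetical package DEPT.PAYROLL could be rebound
explicitly from the CLP:
   db2 rebind package dept.payroll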
If you define a trigger or SQL function based on one or more privileges and you
lose one or more of these privileges, the trigger or SQL function cannot be used.
Related tasks:
v “Granting privileges” on page 519
Related reference:
v “REVOKE (Database Authorities) statement” in SQL Reference, Volume 2
v “REVOKE (Index Privileges) statement” in SQL Reference, Volume 2
v “REVOKE (Package Privileges) statement” in SQL Reference, Volume 2
v “REVOKE (Routine Privileges) statement” in SQL Reference, Volume 2
v “REVOKE (Schema Privileges) statement” in SQL Reference, Volume 2
v “REVOKE (Server Privileges) statement” in SQL Reference, Volume 2
v “REVOKE (Table Space Privileges) statement” in SQL Reference, Volume 2
v “REVOKE (Table, View, or Nickname Privileges) statement” in SQL Reference,
Volume 2
When the created object is a table, nickname, index, or package, the user receives
CONTROL privilege on the object. When the object is a view, the CONTROL
privilege for the view is granted implicitly only if the user has CONTROL
privilege for all tables, views, and nicknames referenced in the view definition.
When the object explicitly created is a schema, the schema owner is given
ALTERIN, CREATEIN, and DROPIN privileges WITH GRANT OPTION. An
implicitly created schema has CREATEIN granted to PUBLIC.
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Not all operating systems that can bind a package using DB2 database products
support the OWNER option.
Related reference:
v “BIND command” in Command Reference
v “PRECOMPILE command” in Command Reference
Privileges granted to individuals binding the package and to PUBLIC are used for
authorization checking when static SQL and XQuery statements are bound.
Privileges granted through groups are not used for authorization checking when
static SQL and XQuery statements are bound. The user with a valid authID who
binds a package must either have been explicitly granted all the privileges required
to execute the static SQL or XQuery statements in the package or have been
implicitly granted the necessary privileges through PUBLIC unless VALIDATE
RUN was specified when binding the package. If VALIDATE RUN was specified at
BIND time, all authorization failures for any static SQL or XQuery statements
within this package will not cause the BIND to fail, and those SQL or XQuery
statements are revalidated at run time. PUBLIC, group, and user privileges are all
used when checking to ensure the user has the appropriate authorization (BIND or
BINDADD privilege) to bind the package.
Packages may include both static and dynamic SQL and XQuery statements. To
process a package with static queries, a user need only have EXECUTE privilege
on the package. This user can then indirectly obtain the privileges of the package
binder for any static queries in the package but only within the restrictions
imposed by the package.
Related concepts:
v “Indirect privileges through a package containing nicknames” on page 524
v “Effect of DYNAMICRULES bind option on dynamic SQL” in Developing
Embedded SQL Applications
Related reference:
v “BIND command” in Command Reference
For example, assume that a package creator’s .SQC file contains several SQL or
XQuery statements. One static statement references a local table. Another dynamic
statement references a nickname. When the package is bound, the package
creator’s authid is used to verify privileges for the local table and the nickname,
but no checking is done for the data source objects that the nickname identifies.
When another user executes the package, assuming they have the EXECUTE
privilege for that package, that user does not have to pass any additional privilege
checking for the statement referencing the table. However, for the statement
referencing the nickname, the user executing the package must pass authentication
checking and privilege checking at the data source.
When the .SQC file contains only dynamic SQL and XQuery statements and a
mixture of table and nickname references, DB2 database authorization checking for
local objects and nicknames is similar. Package users must pass privilege checking
for any local objects (tables, views) within the statements and also pass privilege
checking for nickname objects (package users must pass authentication and
privilege checking at the data source containing the objects that the nicknames
identify). In both cases, users of the package must have the EXECUTE privilege.
The ID and password of the package executor is used for all data source
authentication and privilege processing. This information can be changed by
creating a user mapping.
Note: Nicknames cannot be specified in static SQL and XQuery statements. Do not
use the DYNAMICRULES option (set to BIND) with packages containing
nicknames.
Note: Because you can create a view that contains nickname references for more
than one data source, your users can access data in multiple data sources
from one view. These views are called multi-location views. Such views are
useful when joining information in columns of sensitive tables across a
distributed environment or when individual users lack the privileges
needed at data sources for specific objects.
If you are creating views that reference nicknames, you do not need additional
authority on the data source objects (tables and views) referenced by nicknames in
the view; however, users of the view must have SELECT authority or the
equivalent authorization level for the underlying data source objects when they
access the view.
If your users do not have the proper authority at the data source for underlying
objects (tables and views), you can:
1. Create a data source view over those columns in the data source table that the
user is permitted to access
2. Grant the SELECT privilege on this view to users
3. Create a nickname to reference the view
Users can then access the columns by issuing a SELECT statement that references
the new nickname.
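A sketch of these steps, with hypothetical object and server names (the view and
the grant are created at the data source, using that data source’s SQL dialect; the
nickname is created at the federated database):
   CREATE VIEW PAYSTATS AS SELECT EMPNO, WORKDEPT FROM EMPLOYEE
   GRANT SELECT ON PAYSTATS TO USER HERON
   CREATE NICKNAME DS_PAYSTATS FOR DSSERVER.RSCHEMA.PAYSTATS
   SELECT * FROM DS_PAYSTATS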
The following scenario provides a more detailed example of how views can be
used to restrict access to information.
Many people might require access to information in the STAFF table, for different
reasons. For example:
v All users need to be able to locate other employees. This requirement can be met
by creating a view on the NAME column of the STAFF table and the
LOCATION column of the ORG table, and by joining the two tables on their
respective DEPT and DEPTNUMB columns:
CREATE VIEW EMPLOCS AS
SELECT NAME, LOCATION FROM STAFF, ORG
WHERE STAFF.DEPT=ORG.DEPTNUMB
GRANT SELECT ON TABLE EMPLOCS TO PUBLIC
Users who access the employee location view will see the following information:
NAME LOCATION
Molinare New York
Lu New York
Daniels New York
Jones New York
Hanes Boston
Rothman Boston
Ngan Boston
Kermisch Boston
Sanders Washington
Pernal Washington
James Washington
Sneider Washington
Marenghi Atlanta
Related tasks:
v “Creating a view” on page 251
v “Granting privileges” on page 519
Related concepts:
v “Introduction to the DB2 database audit facility” on page 621
Data encryption
One part of your security plan may involve encrypting your data. To do this, you
can use encryption and decryption built-in functions: ENCRYPT, DECRYPT_BIN,
DECRYPT_CHAR, and GETHINT.
The result of the ENCRYPT function is VARCHAR FOR BIT DATA (with a limit of
32,631).
The length of the result depends on whether the optional hint parameter is
specified. When the hint parameter is specified, the length of the result is the
length of the data argument plus 40, plus the number of bytes to the next 8-byte
boundary. When the hint parameter is not specified, the length of the result is the
length of the data argument plus 8, plus the number of bytes to the next 8-byte
boundary.
The password that is used to encrypt the data is determined in one of two ways:
v Password Argument. The password is a string that is explicitly passed when the
ENCRYPT function is invoked. The data is encrypted and decrypted with the
given password.
v Encryption password special register. The SET ENCRYPTION PASSWORD
statement encrypts the password value and sends the encrypted password to the
database manager to store in a special register. ENCRYPT, DECRYPT_BIN and
DECRYPT_CHAR functions invoked without a password parameter use the
value in the ENCRYPTION PASSWORD special register. The ENCRYPTION
PASSWORD special register is only stored in encrypted form.
The initial or default value for the special register is an empty string.
Valid lengths for passwords are between 6 and 127 inclusive. Valid lengths for
hints are between 0 and 32 inclusive.
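A minimal sketch (the table, column, password, and hint values are hypothetical;
the password satisfies the 6-to-127-character rule and the hint the
0-to-32-character rule):
   CREATE TABLE EMP (SSN VARCHAR(128) FOR BIT DATA)
   -- using the ENCRYPTION PASSWORD special register
   SET ENCRYPTION PASSWORD = 'Pacific6'
   INSERT INTO EMP (SSN) VALUES (ENCRYPT('289-46-8832'))
   SELECT DECRYPT_CHAR(SSN) FROM EMP
   -- using an explicit password argument and a hint
   INSERT INTO EMP (SSN) VALUES (ENCRYPT('289-46-8832', 'Pacific6', 'Ocean'))
   SELECT GETHINT(SSN) FROM EMP
   SELECT DECRYPT_CHAR(SSN, 'Pacific6') FROM EMP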
Related reference:
v “DECRYPT_BIN and DECRYPT_CHAR scalar functions” in SQL Reference,
Volume 1
v “ENCRYPT scalar function” in SQL Reference, Volume 1
v “GETHINT scalar function” in SQL Reference, Volume 1
v “SET ENCRYPTION PASSWORD statement” in SQL Reference, Volume 2
Related tasks:
v “Granting database authorities to new users” on page 529
v “Granting privileges to new groups” on page 530
v “Granting privileges to new users” on page 534
Related tasks:
v “Granting privileges to new users” on page 534
v “Granting database authorities to new groups” on page 529
v “Granting privileges to new groups” on page 530
Procedure:
1. Open the Add Group notebook: From the Control Center, expand the object
tree until you find the Databases folder. Open the Databases folder. Any
existing databases are displayed in the object tree. Click the database you want
and locate the User and Group Objects folder. Click the User and Group
Objects folder.
Related concepts:
v “Authorization ID privileges” on page 513
v “Authorization, privileges, and object ownership” on page 501
v “Controlling access to database objects” on page 519
Related tasks:
v “Revoking privileges” on page 521
v “Granting database authorities to new groups” on page 529
v “Granting privileges” on page 519
Procedure:
1. Open the Add User notebook: From the Control Center window, expand the
object tree until you find the User and Group Objects folder below the
database that you’re authorizing a user to use. Click on this folder. The DB
Users and DB Groups folders are displayed in the contents pane.
Related concepts:
v “Authorization ID privileges” on page 513
v “Authorization, privileges, and object ownership” on page 501
v “Controlling access to database objects” on page 519
Related tasks:
v “Granting database authorities to new users” on page 529
v “Granting privileges” on page 519
v “Retrieving all privileges granted to users” on page 613
v “Revoking privileges” on page 521
Label-based access control (LBAC) greatly increases the control you have over who
can access your data. LBAC lets you decide exactly who has write access and who
has read access to individual rows and individual columns.
The LBAC capability is very configurable and can be tailored to match your
particular security environment. All LBAC configuration is performed by a security
administrator, which is a user that has been granted the SECADM authority by the
system administrator.
Once created, a security label can be associated with individual columns and rows
in a table to protect the data held there. Data that is protected by a security label is
called protected data. A security administrator allows users access to protected data
by granting them security labels. When a user tries to access protected data, that
user's security label is compared to the security label protecting the data. The
protecting label will block some security labels and not block others.
A user is allowed to hold security labels for multiple security policies at once. For
any given security policy, however, a user can hold at most one label for read
access and one label for write access.
If you try to access a protected column that your LBAC credentials do not allow
you to access then the access will fail and you will get an error message.
If you try to read protected rows that your LBAC credentials do not allow you to
read then DB2 acts as if those rows do not exist. Those rows cannot be selected as
part of any SQL statement that you run, including SELECT, UPDATE, or DELETE.
Even the aggregate functions ignore rows that your LBAC credentials do not allow
you to read. The COUNT(*) function, for example, will return a count only of the
rows that you have read access to.
You can define a view on a protected table the same way you can define one on a
non-protected table. When such a view is accessed the LBAC protection on the
underlying table is enforced. The LBAC credentials used are those of the session
authorization ID. Two users accessing the same view might see different rows
depending on their LBAC credentials.
The following rules explain how LBAC rules are enforced in the presence of
referential integrity constraints:
v Rule 1: The LBAC read access rules are NOT applied for internally generated
scans of child tables. This is to avoid having orphan children.
v Rule 2: The LBAC read access rules are NOT applied for internally generated
scans of parent tables.
v Rule 3: The LBAC write rules are applied when a CASCADE operation is
performed on child tables. For example, if a user deletes a parent row but
cannot delete one of its child rows because an LBAC write rule does not allow
it, the delete is rolled back and an error is raised.
Example: If you do not have permission to read from a table, then you will not
be allowed to read data from that table, even the rows and columns
to which LBAC would otherwise allow you access.
v Your LBAC credentials only limit your access to protected data. They have no
effect on your access to unprotected data.
v LBAC credentials are not checked when you drop a table or a database, even if
the table or database contains protected data.
v LBAC credentials are not checked when you back up your data. If you can run a
backup on a table, which rows are backed up is not limited in any way by the
LBAC protection on the data. Also, data on the backup media is not protected
by LBAC. Only data in the database is protected.
v LBAC cannot be used to protect any of the following types of tables:
– A materialized query table (MQT)
– A table that a materialized query table (MQT) depends on
– A staging table
– A table that a staging table depends on
– A typed table
v LBAC protection cannot be applied to a nickname.
LBAC tutorial:
A tutorial leading you through the basics of using LBAC is available online. The
tutorial is part of the IBM developerWorks website (http://www.ibm.com/
developerworks/db2) and is called DB2 Label-Based Access Control, a practical
guide.
Related concepts:
v “Database authorities” on page 511
v “LBAC security label components overview” on page 541
v “LBAC security labels” on page 547
v “LBAC security policies” on page 540
Every protected table must have one and only one security policy associated with
it. Rows and columns in that table can only be protected with security labels that
are part of that security policy.
Security policies cannot be altered. The only way to change a security policy is to
drop it and re-create it.
You must be a security administrator to drop a security policy. You drop a security
policy using the SQL statement DROP.
You cannot drop a security policy if it is associated with (added to) any table.
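A minimal sketch, assuming a security label component named LEVEL has
already been created (all names here are hypothetical):
   CREATE SECURITY POLICY DATA_ACCESS
      COMPONENTS LEVEL
      WITH DB2LBACRULES
   DROP SECURITY POLICY DATA_ACCESS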
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
Related reference:
v “CREATE SECURITY LABEL COMPONENT statement” in SQL Reference, Volume
2
v “CREATE SECURITY POLICY statement” in SQL Reference, Volume 2
v “DROP statement” in SQL Reference, Volume 2
A component can represent any criteria that you might use to decide if a user
should have access to a given piece of data. Typical examples of such criteria
include:
v How well trusted the user is
v What department the user is in
v Whether the user is involved in a particular project
Example: If you want the department that a user is in to affect which data they
can access, you could create a component named dept and define an
element in that component for each department.
Example: A security label component that represents a level of trust might have
the four elements: Top Secret, Secret, Classified, and Unclassified.
Types of components:
There are three types of security label components: ARRAY, SET, and TREE. The
details of each type, including detailed descriptions of the relationships that
the elements can have with each other, are described in their own sections.
Security label components cannot be altered. The only way to change a security
label component is to drop it and re-create it.
You must be a security administrator to drop a security label component. You drop
a security label component with the SQL statement DROP.
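For example, to drop the illustrative component named dept:
   DROP SECURITY LABEL COMPONENT dept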
Related concepts:
v “LBAC security label component type: ARRAY” on page 543
v “LBAC security label component type: SET” on page 543
Related reference:
v “CREATE SECURITY LABEL COMPONENT statement” in SQL Reference, Volume
2
v “DROP statement” in SQL Reference, Volume 2
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security label components overview” on page 541
v “LBAC security policies” on page 540
Related reference:
v “CREATE SECURITY LABEL COMPONENT statement” in SQL Reference, Volume
2
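For example, a security label component of type ARRAY whose elements are
ordered from highest to lowest might be defined like this (the component name,
level, is only an illustration):
   CREATE SECURITY LABEL COMPONENT level
      ARRAY ['Top Secret', 'Secret', 'Employee', 'Public']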
Then the elements are treated as if they are organized in a structure like
this:
Top Secret (highest)
Secret
Employee
Public (lowest)
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security label components overview” on page 541
v “LBAC security policies” on page 540
Related reference:
v “CREATE SECURITY LABEL COMPONENT statement” in SQL Reference, Volume
2
TREE is one type of security label component that can be used in a label-based
access control (LBAC) security policy. In this type of component the elements are
treated as if they are arranged in a tree structure. When you specify an element
that is part of a component of type TREE, you must also specify which other
element it is under. The one exception is the first element, which must be specified
as the ROOT of the tree. This allows you to organize the elements in a tree
structure.
The elements are treated as if they are organized in a tree structure. In the
example tree used in this section, Corporate is the root; Publishing and Software
are under Corporate; Sales is under Software; and Business Sales and Home Sales
are under Sales.
In a component of type TREE, the elements can have these types of relationships to
each other:
Parent
Element A is a parent of element B if element B is UNDER element A.
Example: In the example tree, the parent of the Business Sales element is Sales.
Sibling
Two elements are siblings of each other if they have the same parent.
Example: In the example tree, Business Sales and Home Sales are siblings
because they are both under Sales.
Ancestor
Element A is an ancestor of element B if it is the parent of B, or if it is the
parent of the parent of B, and so on. The root element is an ancestor of all
other elements in the tree.
Example: In the example tree, the ancestors of the Home Sales element are
Sales, Software, and Corporate.
Descendent
Element A is a descendent of element B if it is the child of B, or if it is the
child of a child of B, and so on.
Example: In the example tree, Business Sales and Home Sales are descendents
of Sales, of Software, and of the root element Corporate.
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security label components overview” on page 541
v “LBAC security policies” on page 540
Related reference:
v “CREATE SECURITY LABEL COMPONENT statement” in SQL Reference, Volume
2
Every security label is part of exactly one security policy and includes one value
for each component in that security policy. A value in the context of a security label
component is a list of zero or more of the elements allowed by that component.
Values for ARRAY type components can contain zero or one element; values for
other types can have zero or more elements. A value that does not include any
elements is called an empty value.
Example: If a TREE type component has the three elements Human Resources,
Sales, and Shipping then these are some of the valid values for that
component:
v Human Resources (or any of the elements by itself)
v Human Resources, Shipping (or any other combination of the
elements as long as no element is included more than once)
v An empty value
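Security labels are created with the CREATE SECURITY LABEL statement. For
illustration, assuming a security policy named data_access_policy with
components level and dept, a label that provides one value for each component
might be created like this:
   CREATE SECURITY LABEL data_access_policy.manager
      COMPONENT level 'Secret',
      COMPONENT dept 'Sales'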
Whether a particular security label will block another is determined by the values
of each component in the labels and the LBAC rule set that is specified in the
security policy of the table. The details of how the comparison is made are given
in the section How LBAC security labels are compared.
When security labels are converted to a text string they use the format described in
the section Format for security label values.
Security labels cannot be altered. The only way to change a security label is to
drop it and re-create it.
You must be a security administrator to drop a security label. You drop a security
label with the SQL statement DROP. You cannot drop a security label that is being
used to protect data anywhere in the database or that is currently held by one or
more users.
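For example, to drop the illustrative label created above:
   DROP SECURITY LABEL data_access_policy.manager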
Related concepts:
v “How LBAC security labels are compared” on page 550
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security label components overview” on page 541
v “Protection of data using LBAC” on page 558
Related reference:
v “CREATE SECURITY LABEL statement” in SQL Reference, Volume 2
v “Format for security label values” on page 549
Example: A security label is part of a security policy that has these three
components in this order: Level, Department, and Projects. The security
label has these values:
Table 29.
Component    Values
Level        Secret
Department   Empty value
Related concepts:
v “How LBAC security labels are compared” on page 550
v “LBAC security label components overview” on page 541
v “LBAC security labels” on page 547
There are only two types of comparison that can be made: your LBAC credentials
can be compared to a single security label for read access, or they can be
compared to a single security label for write access. Updating and deleting are
treated as a read followed by a write. When an operation requires multiple
comparisons to be made, each is made separately.
Even though you might hold multiple security labels, only one is compared to the
protecting security label. The label used is the one that meets these criteria:
v It is part of the security policy that is protecting the table being accessed.
v It was granted for the type of access (read or write).
If you do not have a security label that meets these criteria then a default security
label is assumed that has empty values for all components.
Security labels are compared component by component. If a security label does not
have a value for one of the components then an empty value is assumed. As each
component is examined, the appropriate rules of the LBAC rule set are used to
decide if the elements in your value for that component should be blocked by the
elements in the value for the same component in the protecting label. If any of
your values are blocked then your LBAC credentials are blocked by the protecting
security label.
The LBAC rule set used in the comparison is designated in the security policy. To
find out what the rules are and when each one is used, see the description of that
rule set.
Example: The LBAC rule set is DB2LBACRULES and the security policy has two
components. One component is of type ARRAY and the other is of type
TREE. The user has been granted an exemption on the rule
DB2LBACREADTREE, which is the rule used for read access when
comparing values of components of type TREE. If the user attempts to
read protected data then whatever value the user has for the TREE
component, even if it is an empty value, will not block access because
that rule is not used. Whether the user can read the data depends
entirely on the values of the ARRAY component of the labels.
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
v “LBAC rule exemptions” on page 556
v “LBAC rule set: DB2LBACRULES” on page 552
v “LBAC rule sets overview” on page 551
v “LBAC security labels” on page 547
v “LBAC security policies” on page 540
Each LBAC rule set is identified by a unique name. When you create a security
policy you must specify the LBAC rule set that will be used with that policy. Any
comparison of security labels that are part of that policy will use that LBAC rule
set.
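For illustration, a policy that uses the DB2LBACRULES rule set with two
hypothetical components might be created like this:
   CREATE SECURITY POLICY data_access_policy
      COMPONENTS level, dept
      WITH DB2LBACRULES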
Each rule in a rule set is also identified by a unique name. You use the name of a
rule when you are granting an exemption on that rule.
How many rules are in a set and when each rule is used can vary from rule set to
rule set.
There is currently only one supported LBAC rule set. The name of that rule set is
DB2LBACRULES.
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
v “LBAC rule set: DB2LBACRULES” on page 552
Write-up and write-down apply only to components of type ARRAY and only to
write access. Write-up occurs when the value protecting the data that you are
writing is higher than your value; write-down occurs when the protecting value is
lower than yours. By default neither write-up nor write-down is allowed, meaning
that you can only write data that is protected by the same value that you have.
When comparing two values for the same component, which rules are used
depends on the type of the component (ARRAY, SET, or TREE) and what type of
access is being attempted (read or write). This table lists the rules, tells when each
is used, and describes how the rule determines if access is blocked.
Table 30. Summary of the DB2LBACRULES rules

Rule name          Component  Access  Access is blocked when this condition is met
                   type       type
DB2LBACREADARRAY   ARRAY      Read    The user's value is lower than the protecting value.
DB2LBACREADSET     SET        Read    There are one or more protecting values that the user does not hold.
DB2LBACREADTREE    TREE       Read    None of the user's values is equal to or an ancestor of one of the protecting values.
DB2LBACWRITEARRAY  ARRAY      Write   The user's value is higher than or lower than the protecting value. (See note 1.)
DB2LBACWRITESET    SET        Write   There are one or more protecting values that the user does not hold.
DB2LBACWRITETREE   TREE       Write   None of the user's values is equal to or an ancestor of one of the protecting values.
Notes:
1. The DB2LBACWRITEARRAY rule can be thought of as being two different
rules combined. One prevents writing to data that is higher than your level
(write-up) and the other prevents writing to data that is lower than your level
(write-down). When granting an exemption to this rule you can exempt the
user from either of these rules or from both.
All rules treat empty values the same way. An empty value blocks no other values
and is blocked by any non-empty value.
Examples:
These examples are valid for a user trying to read or trying to write
protected data. They assume that the values are for a component of
type SET.
These examples are valid for both read access and write access. They
assume that the values are for a component of type TREE that was
defined in this way:
CREATE SECURITY LABEL COMPONENT mycomp
   TREE (
      'Corporate' ROOT,
      'Publishing' UNDER 'Corporate',
      'Software' UNDER 'Corporate',
      'Development' UNDER 'Software',
      'Sales' UNDER 'Software',
      'Support' UNDER 'Software',
      'Business Sales' UNDER 'Sales',
      'Home Sales' UNDER 'Sales'
   )
Corporate
   Publishing
   Software
      Development
      Sales
         Business Sales
         Home Sales
      Support
DB2LBACREADARRAY examples:
These examples are for read access only. They assume that the values
are for a component of type ARRAY that includes these elements in this
arrangement:
Top Secret (highest)
Secret
Employee
Public (lowest)
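For instance, under this arrangement the DB2LBACREADARRAY rule blocks a
user whose value is Secret from reading data protected by Top Secret, but does
not block reading of data protected by Secret, Employee, or Public.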
DB2LBACWRITEARRAY examples:
These examples are for write access only. They assume that the values
are for a component of type ARRAY that includes these elements in this
arrangement:
Top Secret (highest)
Secret
Employee
Public (lowest)
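For instance, under this arrangement the DB2LBACWRITEARRAY rule allows a
user whose value is Secret to write only data protected by Secret; writing data
protected by Top Secret is blocked as write-up, and writing data protected by
Employee or Public is blocked as write-down, unless the user holds the
corresponding exemption.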
Related concepts:
v “How LBAC security labels are compared” on page 550
An LBAC rule exemption is part of the label-based access control (LBAC) feature.
When you hold an exemption on a particular rule of a particular security policy
that rule is not enforced when you try to access data protected by that security
policy. An exemption has no effect when comparing security labels of any security
policy other than the one for which it was granted.
You can hold multiple exemptions. If you hold an exemption to every rule used by
a security policy then you will have complete access to all data protected by that
security policy.
You must have security administrator (SECADM) authority to grant an LBAC rule
exemption. To grant an LBAC rule exemption, use the SQL statement GRANT
EXEMPTION ON RULE.
When you grant an LBAC rule exemption you provide this information:
v The rule or rules that the exemption is for
v The security policy that the exemption is for
v The user to which you are granting the exemption
Important: LBAC rule exemptions provide very powerful access. Do not grant
them without careful consideration.
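For illustration, a security administrator might grant an exemption on one rule of
a hypothetical policy to a user named walid like this:
   GRANT EXEMPTION ON RULE DB2LBACREADSET
      FOR data_access_policy TO USER walid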
Related concepts:
v “How LBAC security labels are compared” on page 550
v “LBAC rule sets overview” on page 551
Related reference:
v “GRANT (Exemption) statement” in SQL Reference, Volume 2
v “REVOKE (Exemption) statement” in SQL Reference, Volume 2
SECLABEL:
This built-in function accepts the name of a security policy and a string of
component values, and returns a security label of type DB2SECURITYLABEL for
that policy. Use it to provide a security label directly by its component values
rather than by name.
Example: Table T1 has two columns, the first has a data type of
DB2SECURITYLABEL and the second has a data type of INTEGER. T1 is
protected by security policy P1, which has three security label
components: level, departments, and groups. If UNCLASSIFIED is an
element of the component level, ALPHA and SIGMA are both elements
of the component departments, and G2 is an element of the component
groups then a security label could be inserted like this:
INSERT INTO T1 VALUES ( SECLABEL( 'P1', 'UNCLASSIFIED:(ALPHA,SIGMA):G2' ), 22 )
SECLABEL_BY_NAME:
This built-in function accepts the name of a security policy and the name of a
security label that is part of that security policy. It then returns the indicated
security label as a DB2SECURITYLABEL. You must use this function when
inserting an existing security label into a column that has a data type of
DB2SECURITYLABEL.
Example: Table T1 has two columns, the first has a data type of
DB2SECURITYLABEL and the second has a data type of INTEGER. The
security label named L1 is part of security policy P1. This SQL inserts
the security label:
INSERT INTO T1 VALUES ( SECLABEL_BY_NAME( 'P1', 'L1' ), 22 )
SECLABEL_TO_CHAR:
This built-in function returns a string representation of the values that make up a
security label.
Example: Table T1 has a column named C1 with a data type of
DB2SECURITYLABEL and is protected by security policy P1. The
security label protecting one row has these values:

Component    Elements
level        SECRET
departments  DELTA and SIGMA
groups       G3

A user that has LBAC credentials that allow reading the row executes
this SQL statement:
SELECT SECLABEL_TO_CHAR( 'P1', C1 ) AS C1 FROM T1
For that row, the value returned is:
'SECRET:(DELTA,SIGMA):G3'
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security labels” on page 547
Related reference:
v “SECLABEL_BY_NAME scalar function” in SQL Reference, Volume 1
v “SECLABEL_TO_CHAR scalar function” in SQL Reference, Volume 1
v “SECLABEL scalar function” in SQL Reference, Volume 1
v “Format for security label values” on page 549
Label-based access control (LBAC) can be used to protect rows of data, columns of
data, or both. Data in a table can only be protected by security labels that are part
of the security policy protecting the table. Data protection, including adding a
security policy, can be done when creating the table or later by altering the table.
You can add a security policy to a table and protect data in that table as part of the
same CREATE TABLE or ALTER TABLE statement.
As a general rule you are not allowed to protect data in such a way that your
current LBAC credentials do not allow you to write to that data.
You can add a security policy to a table when you create the table by using the
SECURITY POLICY clause of the CREATE TABLE statement. You can add a
security policy to an existing table by using the ADD SECURITY POLICY clause of
the ALTER TABLE statement. You do not need to have SECADM authority or have
LBAC credentials to add a security policy to a table.
Protecting rows:
You can allow protected rows in a new table by including a column with a data
type of DB2SECURITYLABEL when you create the table. The CREATE TABLE
statement must also add a security policy to the table. You do not need to have
SECADM authority or have any LBAC credentials to create such a table.
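For illustration, a table that allows protected rows might be created like this,
assuming the hypothetical policy data_access_policy already exists:
   CREATE TABLE T1
      (EMPNO    INTEGER,
       LASTNAME VARCHAR(30),
       LABEL    DB2SECURITYLABEL)
      SECURITY POLICY data_access_policy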
You can allow protected rows in an existing table by adding a column that has a
data type of DB2SECURITYLABEL. To add such a column, either the table must
already be protected by a security policy or the ALTER TABLE statement that adds
the column must also add a security policy to the table. When the column is
added, the security label you hold for write access is used to protect all existing
rows. If you do not hold a security label for write access that is part of the security
policy protecting the table then you cannot add a column that has a data type of
DB2SECURITYLABEL.
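For illustration, a single ALTER TABLE statement that adds both a security policy
and a DB2SECURITYLABEL column to an existing table might look like this:
   ALTER TABLE T2
      ADD SECURITY POLICY data_access_policy
      ADD COLUMN LABEL DB2SECURITYLABEL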
After a table has a column of type DB2SECURITYLABEL you protect each new
row of data by storing a security label in that column. The details of how this
works are described in the topics about inserting and updating LBAC protected
data. You must have LBAC credentials to insert rows into a table that has a column
of type DB2SECURITYLABEL.
Protecting columns:
You can protect a column when you create the table by using the SECURED WITH
column option of the CREATE TABLE statement. You can add protection to an
existing column by using the SECURED WITH option in an ALTER TABLE
statement.
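For illustration, assuming labels L1 and L2 exist in the protecting policy, column
protection might be specified like this:
   CREATE TABLE T3
      (EMPNO  INTEGER,
       DEPTNO INTEGER SECURED WITH L2)
      SECURITY POLICY data_access_policy

   ALTER TABLE T3
      ALTER COLUMN EMPNO
      SECURED WITH L1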
To protect a column with a particular security label you must have LBAC
credentials that allow you to write to data protected by that security label. You do
not have to have SECADM authority.
Columns can only be protected by security labels that are part of the security
policy protecting the table. You cannot protect columns in a table that has no
security policy. You are allowed to protect a table with a security policy and
protect one or more columns in the same statement.
You can protect any number of the columns in a table but a column can be
protected by no more than one security label.
Related concepts:
v “Inserting of LBAC protected data” on page 563
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security labels” on page 547
v “LBAC security policies” on page 540
v “Removal of LBAC protection from data” on page 572
v “Updating of LBAC protected data” on page 565
In the case of a protected column the protecting security label is defined in the
schema of the table. The protecting security label for that column is the same for
every row in the table. In the case of a protected row the protecting security label
is stored in the row in a column of type DB2SECURITYLABEL. It can be different
for every row in the table.
The details of how your LBAC credentials are compared to a security label are
given in How LBAC security labels are compared.
When you try to read from a protected column your LBAC credentials are
compared with the security label protecting the column. Based on this comparison
access will either be blocked or allowed. If access is blocked then an error is
returned and the statement fails. Otherwise, the statement proceeds as usual.
Trying to read a column that your LBAC credentials do not allow you to read
causes the entire statement to fail.
Example:
Assume that user Jyoti has LBAC credentials for reading that allow
access to security label L1 but not to L2, and that table T1 has a column
C1 protected by L1 and a column C2 protected by L2. If Jyoti issues the
following SQL statement, the statement fails because it implicitly selects
the column C2:
SELECT * FROM T1
If Jyoti issues this SQL statement, it succeeds:
SELECT C1 FROM T1
The only protected column in the SELECT clause is C1, and Jyoti's
LBAC credentials allow her to read that column.
If you do not have LBAC credentials that allow you to read a row it is as if that
row does not exist for you.
Depending on their LBAC credentials, different users might see different rows in a
table that has protected rows. For example, two users executing the statement
SELECT COUNT(*) FROM T1 may get different results if T1 has protected rows and
the users have different LBAC credentials.
Your LBAC credentials affect not only SELECT statements but also other SQL
statements like UPDATE, and DELETE. If you do not have LBAC credentials that
allow you to read a row, you cannot affect that row.
Example:
Assume that user Dan has LBAC credentials that allow him to read data
that is protected by security label L1 but not data protected by L2 or L3.
When Dan selects all rows from a table in which only the row for Miller
is protected by L1, the statement returns only the row for Miller. No
error messages or warnings are returned.
The rows for Rjaibi, Fielding, and Bird are not returned because read
access is blocked by their security labels. Dan cannot delete or update
these rows. They will also not be included in any aggregate functions.
For Dan it is as if those rows do not exist.
A statement such as SELECT COUNT(*) returns a value of 1 because
only the row for Miller can be read by the user Dan.
Example:
Assume that user Sakari has LBAC credentials that allow reading data
protected by security label L1 but not L2 or L3.
If Sakari issues a SELECT statement whose select clause uses the
wildcard (*), the statement fails because the wildcard includes the
column DEPTNO. The column DEPTNO is protected by security label
L2, which Sakari's LBAC credentials do not allow her to read.
If Sakari instead issues a SELECT statement whose select clause names
only columns that she is able to read, the statement continues. Only one
row is returned, however, because each of the other rows is protected by
security label L2 or L3.
Table 38.
LASTNAME   ROWSECURITYLABEL
Miller     L1
Related concepts:
v “Deleting or dropping of LBAC protected data” on page 569
v “How LBAC security labels are compared” on page 550
v “Inserting of LBAC protected data” on page 563
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security labels” on page 547
v “Updating of LBAC protected data” on page 565
When you try to explicitly insert data into a protected column, your LBAC
credentials for writing are compared with the security label protecting that column.
Based on this comparison, access is either blocked or allowed.
The details of how two security labels are compared are given in How LBAC
security labels are compared.
If access is allowed, the statement proceeds as usual. If access is blocked, then the
insert fails and an error is returned.
If you are inserting a row but do not provide a value for a protected column then
a default value is inserted if one is available. This happens even if your LBAC
credentials do not allow write access to that column. A default is available in the
following cases:
v The column was declared with the WITH DEFAULT option
v The column is a generated column
v The column has a default value that is given through a BEFORE trigger
v The column has a data type of DB2SECURITYLABEL, in which case the security
label that you hold for write access is the default value
When you insert a new row into a table with protected rows you do not have to
provide a value for the column that is of type DB2SECURITYLABEL. If you do not
provide a value for that column the column is automatically populated with the
security label you have been granted for write access. If you have not been granted
a security label for write access an error is returned and the insert fails.
By using built-in functions like SECLABEL you can explicitly provide a security
label to be inserted in a column of type DB2SECURITYLABEL. The provided
security label is only used, however, if your LBAC credentials would allow you to
write to data that is protected with the security label you are trying to insert.
If you provide a security label that you would not be able to write to then what
happens depends on the security policy that is protecting the table. If the CREATE
SECURITY POLICY statement that created the policy included the option
RESTRICT NOT AUTHORIZED WRITE SECURITY LABEL then the insert fails and
an error is returned. If the CREATE SECURITY POLICY statement did not include
the option or if it instead included the OVERRIDE NOT AUTHORIZED WRITE
SECURITY LABEL option then the security label you provide is ignored and the
security label you hold for write access is used instead. No error or warning is
issued in this case.
Examples:
Table T1 is protected by a security policy (named P1) that was created without the
RESTRICT NOT AUTHORIZED WRITE SECURITY LABEL option. Table T1 has
two columns but no rows. The columns are LASTNAME and LABEL. The column
LABEL has a data type of DB2SECURITYLABEL.
When Joe issues an INSERT statement that does not include a security label, his
security label for write access is inserted into the LABEL column of the new row.
Joe issues the following SQL statement, in which he explicitly provides the security
label to be inserted into the column LABEL:
INSERT INTO T1 VALUES ('Miller', SECLABEL_BY_NAME('P1', 'L1') )
Because the security policy protecting T1 was created without the RESTRICT NOT
AUTHORIZED WRITE SECURITY LABEL option the security label that Joe holds
for writing is inserted instead. No error or message is returned.
If the security policy protecting the table had been created with the RESTRICT
NOT AUTHORIZED WRITE SECURITY LABEL option then the insert would have
failed and an error would have been returned.
Next Joe is granted an exemption to one of the LBAC rules. Assume that his new
LBAC credentials allow him to write to data that is protected with security labels
L1 and L2. The security label granted to Joe for write access does not change; it is
still L2.
Because of his new LBAC credentials Joe is able to write to data that is protected
by the security label L1. If he issues the same INSERT statement again, the
insertion of L1 is allowed, and the new row is protected by the security label L1.
Related concepts:
v “Deleting or dropping of LBAC protected data” on page 569
v “How LBAC security labels are compared” on page 550
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security labels” on page 547
v “LBAC security policies” on page 540
v “Reading of LBAC protected data” on page 560
v “Updating of LBAC protected data” on page 565
When you try to update data in a protected column, your LBAC credentials are
compared to the security label protecting the column. The comparison made is for
write access. If write access is blocked, then an error is returned and the statement
fails; otherwise, the update continues.
The details of how your LBAC credentials are compared to a security label are
given in How LBAC security labels are compared.
Example:
Assume that user Lhakpa has no LBAC credentials. An UPDATE
statement that he issues executes without error because it does not
update any protected columns.
Now assume Lhakpa is granted LBAC credentials that allow him to
write to, but not read from, data protected by the security label that
protects the column DEPTNO. The details of what those credentials
are and what elements are in the security labels are not important for
this example.
This time an UPDATE statement that sets the column DEPTNO executes
without error because Lhakpa's LBAC credentials allow him to write to
data protected by the security label that is protecting that column. It
does not matter that he is not able to read from that same column. The
data in T1 now looks like this:
Table 44.
EMPNO  LASTNAME  DEPTNO (protected by L2)  PAYSCALE (protected by L3)
1      Rjaibi    11                        4
2      Miller    55                        7
4      Bird      11                        9
When you try to update a row your LBAC credentials for writing are compared to
the security label protecting the row. If write access is blocked the update fails and
an error is returned. If write access is not blocked then the update continues.
If the update explicitly sets the column that has a data type of
DB2SECURITYLABEL then your LBAC credentials are checked again. If the update
you are trying to perform would create a row that your current LBAC credentials
would not allow you to write to then an error is returned and the statement fails.
Otherwise the column is set to the provided security label.
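For illustration, an UPDATE statement that explicitly sets a column of type
DB2SECURITYLABEL might look like this, reusing the table T1, policy P1, and
label L1 from earlier examples:
   UPDATE T1
      SET LABEL = SECLABEL_BY_NAME('P1', 'L1')
      WHERE LASTNAME = 'Miller'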
Example:
Assume that user Jenni has LBAC credentials that allow her to read and
write data protected by the security labels L0 and L1 but not data
protected by any other security labels. The security label she holds for
both read and write is L0. The details of her full credentials and of what
elements are in the labels are not important for this example.
The rows protected by labels L2 and L3 are not included in the result set
because Jenni's LBAC credentials do not allow her to read those rows.
For Jenni it is as if those rows do not exist.
The statement executed without error but affected only the first row. The
second and third rows are not readable by Jenni so they are not selected
for update by the statement even though they meet the condition in the
WHERE clause.
Notice that the value of the LABEL column in the updated row has
changed even though that column was not explicitly set in the UPDATE
statement. The column was set to the security label that Jenni held for
writing.
Now Jenni is granted LBAC credentials that allow her to read data
protected by any security label. Her LBAC credentials for writing do not
change. She is still only able to write to data protected by L0 and L1.
This time the update fails because of the second and third rows. Jenni is
able to read those rows, so they are selected for update by the statement.
She is not, however, able to write to them because they are protected by
security labels L2 and L3. The update does not occur and an error is
returned.
If you try to update protected columns in a table with protected rows, your LBAC
credentials must allow writing to all of the protected columns affected by the
update; otherwise the update fails and an error is returned, as described in the
preceding section, Updating protected columns. If you are allowed to update all
of the protected columns affected by the update, you will still only be able to
update rows that your LBAC credentials allow you to both read from and write to,
as described in the preceding section, Updating protected rows. The handling of a
column with a data type of DB2SECURITYLABEL is the same whether the update
affects protected columns or not.
Related concepts:
v “Deleting or dropping of LBAC protected data” on page 569
v “Inserting of LBAC protected data” on page 563
v “Reading of LBAC protected data” on page 560
If your LBAC credentials do not allow you to read a row, it is as if that row
does not exist for you, so there is no way for you to delete it.
To delete a row that you are able to read, your LBAC credentials must also allow
you to write to the row. When you try to delete a row, your LBAC credentials for
writing are compared to the security label protecting the row. If the protecting
security label blocks write access by your LBAC credentials, the DELETE statement
fails, an error is returned, and no rows are deleted.
Example:
Assume that user Pat has LBAC credentials such that she can read and write data
protected by security label L1, can read but cannot write data protected by L2,
and can neither read nor write data protected by L3. The exact details of her
LBAC credentials and of the security labels are unimportant for this example.
When Pat selects all rows from T1, the last row is not included in the results
because Pat does not have read access to that row. It is as if that row does not
exist for Pat.
If Pat issues a DELETE statement that selects the first or third row, the statement
fails: she does not have write access to those rows, both of which are protected by
L2. Even though she can read the rows, she cannot delete them, so no rows are
deleted.
A DELETE statement that selects only the row with Miller in the LASTNAME
column succeeds, because Pat is able to write to that row. The row with Fielding
in the LASTNAME column is not selected because Pat's LBAC credentials do not
allow her to read that row. That row is never considered for the delete, so no
error occurs.
To delete any row in a table that has protected columns you must have LBAC
credentials that allow you to write to all protected columns in the table. If there is
any protected column in the table that your LBAC credentials do not allow you to
write to, then the delete will fail and an error will be returned.
If the table has both protected columns and protected rows then to delete a
particular row you must have LBAC credentials that allow you to write to every
protected column in the table and also to read from and write to the row that you
want to delete.
Example:
In protected table T1, the column DEPTNO is protected by the security label L2. T1
contains these rows:
LASTNAME  DEPTNO (protected by L2)  LABEL
Rjaibi    55                        L2
Miller    77                        L1
Bird      55                        L2
Fielding  77                        L3
Assume that user Benny has LBAC credentials such that he cannot write to data
protected by security label L2 and cannot read data protected by L3. The exact
details of his LBAC credentials and of the security labels are unimportant for this
example.
When Benny issues a DELETE statement against T1, the statement fails because
Benny does not have write access to the column DEPTNO, which is protected by
L2.
Now Benny's LBAC credentials are changed so that he can write to data protected
by L2.
This time Benny has write access to the column DEPTNO so the delete continues.
The delete statement selects only the row that has a value of Miller in the
LASTNAME column. The row that has a value of Fielding in the LASTNAME
column is not selected because Benny's LBAC credentials do not allow him to read
that row. Because the row is not selected for deletion by the statement it does not
matter that Benny is unable to write to the row.
The one row selected is protected by the security label L1. Benny's LBAC
credentials allow him to write to data protected by L1 so the delete is successful.
T1 now contains these rows:

LASTNAME  DEPTNO (protected by L2)  LABEL
Rjaibi    55                        L2
Bird      55                        L2
Fielding  77                        L3
You cannot drop a column that is protected by a security label unless your LBAC
credentials allow you to write to that column.
Your LBAC credentials do not prevent you from dropping entire tables or
databases that contain protected data. If you would normally have permission to
drop a table or a database you do not need any LBAC credentials to do so, even if
the database contains protected data.
Related concepts:
v “Inserting of LBAC protected data” on page 563
v “Reading of LBAC protected data” on page 560
v “Updating of LBAC protected data” on page 565
In a table that has protected rows every row must be protected by a security label.
There is no way to remove LBAC protection from individual rows.
Related concepts:
v “Label-based access control (LBAC) overview” on page 538
v “LBAC security labels” on page 547
v “LBAC security policies” on page 540
v “Protection of data using LBAC” on page 558
Related reference:
v “ALTER TABLE statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
A caching mechanism exists so that the client only needs to search the LDAP
directory server once. Once the information is retrieved from the LDAP directory
server, it is stored or cached on the local machine, based on the values of the
dir_cache database manager configuration parameter and the DB2LDAPCACHE
registry variable.
Related concepts:
v “Security considerations in an LDAP environment” on page 589
v “Extending the LDAP directory schema with DB2 object classes and attributes”
on page 591
v “LDAP support and DB2 Connect” on page 589
v “Lightweight Directory Access Protocol (LDAP) directory service” on page 181
v “Security considerations for Active Directory” on page 590
v “Support for Active Directory” on page 575
v “DB2 registry and environment variables” in Performance Guide
Related tasks:
v “Attaching to a remote server in the LDAP environment” on page 583
v “Catalog a node alias for ATTACH” on page 581
v “Configuring DB2 in the IBM LDAP environment” on page 576
v “Configuring DB2 to use Active Directory” on page 576
v “Configuring the LDAP user for DB2 applications” on page 578
v “Creating an LDAP user” on page 577
v “Deregistering the database from the LDAP directory” on page 584
v “Deregistering the DB2 server” on page 582
v “Disabling LDAP support” on page 589
v “Enabling LDAP support after installation is complete” on page 588
v “Extending the directory schema for Active Directory” on page 591
v “Refreshing LDAP entries in local database and node directories” on page 584
v “Registering host databases in LDAP” on page 586
v “Registration of databases in the LDAP directory” on page 582
v “Registration of DB2 servers after installation” on page 578
v “Searching the LDAP servers” on page 585
v “Setting DB2 registry variables at the user level in the LDAP environment” on
page 587
v “Update the protocol information for the DB2 server” on page 580
Related reference:
v “DB2 objects in the Active Directory” on page 593
Note: When running on Windows operating systems, DB2 supports using either
the IBM LDAP client or the Microsoft LDAP client. To explicitly select the
IBM LDAP client, use the db2set command to set the
DB2LDAP_CLIENT_PROVIDER registry variable to “IBM”. The Microsoft
LDAP Client is included with the Windows operating system.
Related concepts:
v “Lightweight Directory Access Protocol (LDAP) overview” on page 573
v “Support for Active Directory” on page 575
Property pages for the ibm_db2Node and ibm_db2Database objects can be viewed
or modified using the Active Directory Users and Computers Management Console
(MMC) at a domain controller. To set up the property pages, run the regsvr32
command to register the property pages for the DB2 objects as follows:
regsvr32 %DB2PATH%\bin\db2ads.dll
You can view the objects by using the Active Directory Users and Computers
Management Console (MMC) at a domain controller. To get to this administration
tool, follow Start -> Programs -> Administrative Tools -> Active Directory Users
and Computers.
Note: You must select Users, Groups, and Computers as containers from the View
menu to display the DB2 database objects under the computer objects.
Note: If DB2 database is not installed on the domain controller, you can still view
the property pages of DB2 database objects by copying the db2ads.dll file
from %DB2PATH%\bin and the resource DLL db2adsr.dll from
%DB2PATH%\msg\locale-name to a local directory on the domain
controller.
Related concepts:
v “Security considerations for Active Directory” on page 590
Related tasks:
v “Configuring DB2 to use Active Directory” on page 576
v “Extending the directory schema for Active Directory” on page 591
Related reference:
v “DB2 objects in the Active Directory” on page 593
In order to access Microsoft Active Directory, ensure that the following conditions
are met:
1. The machine that runs DB2 database belongs to a Windows 2000 or
Windows Server 2003 domain.
2. The Microsoft LDAP client is installed. The Microsoft LDAP client is part of the
Windows 2000, Windows XP, and Windows Server 2003 operating systems.
3. LDAP support is enabled. For Windows 2000, Windows XP, and Windows
Server 2003, LDAP support is enabled by the installation program.
4. You are logged on to a domain user account when running DB2 database, so
that information can be read from the Active Directory.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
v “Support for Active Directory” on page 575
Related tasks:
v “Configuring the LDAP user for DB2 applications” on page 578
Before you can use DB2 in the IBM LDAP environment, you must configure the
following on each machine:
v Enable the LDAP support. For Windows, LDAP support is enabled by the
installation program. The default LDAP client to use on all Windows operating
systems is Microsoft’s. If you want to use the IBM LDAP client, you must set the
DB2LDAP_CLIENT_PROVIDER registry variable to “IBM”, using the db2set
command.
v The LDAP server’s TCP/IP host name and port number. These values can be
entered during unattended installation using the DB2LDAPHOST response
keyword, or you can manually set them later by using the db2set command:
db2set DB2LDAPHOST=<hostname>[:<port>]
v The LDAP base distinguished name, set with the db2set command:
db2set DB2LDAP_BASEDN=<baseDN>
where baseDN is the name of the LDAP suffix that is defined at the LDAP server.
This LDAP suffix is used to contain DB2 objects.
v The LDAP user’s distinguished name (DN) and password. These are required
only if you plan to use LDAP to store DB2 user-specific information.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related tasks:
v “Configuring DB2 to use Active Directory” on page 576
v “Creating an LDAP user” on page 577
Related reference:
v “db2set - DB2 profile registry command” in Command Reference
DB2 supports setting DB2 registry variables and CLI configuration at the user
level. (This is not available on the Linux and UNIX platforms.) User level support
provides user-specific settings in a multi-user environment. An example is
Windows Terminal Server where each logged on user can customize his or her
own environment without interfering with the system environment or another
user’s environment.
When using the IBM Tivoli® directory, you must define an LDAP user before you
can store user-level information in LDAP. You can create an LDAP user by creating
an LDIF file that contains all attributes for the user object, and then running the
LDIF import utility to import the object into the LDAP directory. The LDIF utility
for the IBM Tivoli Directory Server is LDIF2DB.
An LDIF file containing the attributes for a person object appears similar to the
following:
File name: newuser.ldif
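The exact attributes depend on your directory schema; a minimal sketch for the
user referenced in the next example, assuming the ePerson object class, might look
like this:
dn: cn=Mary Burnnet,ou=DB2 Development,ou=Toronto,o=ibm,c=ca
objectclass: ePerson
cn: Mary Burnnet
sn: Burnnet
uid: mburnnet
userPassword: password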
Related tasks:
v “Configuring DB2 in the IBM LDAP environment” on page 576
v “Configuring the LDAP user for DB2 applications” on page 578
When using the Microsoft LDAP client, the LDAP user is the same as the operating
system user account. However, when working with the IBM LDAP client and
before using the DB2 database manager, you must configure the LDAP user
distinguished name (DN) and password for the current logged on user. This can be
done using the db2ldcfg utility:
db2ldcfg -u <userDN> -w <password>   (sets the user’s DN and password)
db2ldcfg -r                          (clears the user’s DN and password)
For example:
db2ldcfg -u "cn=Mary Burnnet,ou=DB2 Development,ou=Toronto,o=ibm,c=ca"
-w password
Related tasks:
v “Creating an LDAP user” on page 577
Related reference:
v “db2ldcfg - Configure LDAP environment command” in Command Reference
Each DB2 server instance must be registered in LDAP to publish the protocol
configuration information that is used by the client applications to connect to the
DB2 server instance. When registering an instance of the database server, you need
to specify a node name. The node name is used by client applications when they
connect or attach to the server. You can catalog another alias name for the LDAP
node by using the CATALOG LDAP NODE command.
The protocol clause specifies the communication protocol to use when connecting
to this database server.
When creating an instance for DB2 Enterprise Server Edition that includes multiple
physical machines, the REGISTER command must be invoked once for each
machine. Use the rah command to issue the REGISTER command on all machines.
Note: The same ldap_node_name cannot be used for each machine since the name
must be unique in LDAP. You will want to substitute the hostname of each
machine for the ldap_node_name in the REGISTER command. For example:
rah ">DB2 REGISTER DB2 SERVER IN LDAP AS <> PROTOCOL TCPIP"
The “<>” is substituted by the hostname on each machine where the rah
command is run. In the rare occurrence where there are multiple DB2
Enterprise Server Edition instances, the combination of the instance and host
index may be used as the node name in the rah command.
The REGISTER command can be issued for a remote DB2 server. To do so, you
must specify the remote computer name, instance name, and the protocol
configuration parameters when registering a remote server. The command can be
used as follows:
db2 register db2 server in ldap
as <ldap_node_name>
protocol tcpip
hostname <host_name>
svcename <tcpip_service_name>
remote <remote_computer_name>
instance <instance_name>
To register the DB2 server in LDAP from a client application, call the
db2LdapRegister API.
Related tasks:
v “Attaching to a remote server in the LDAP environment” on page 583
v “Catalog a node alias for ATTACH” on page 581
v “Deregistering the DB2 server” on page 582
v “Update the protocol information for the DB2 server” on page 580
Related reference:
v “CATALOG LDAP NODE command” in Command Reference
v “REGISTER command” in Command Reference
The DB2 server information in LDAP must be kept current. For example, changes
to the protocol configuration parameters or the server network address require an
update to LDAP.
To update the DB2 server in LDAP on the local machine, use the following
command:
db2 update ldap ...
To update a remote DB2 server protocol configuration parameters use the UPDATE
LDAP command with a node clause:
db2 update ldap
node <node_name>
hostname <host_name>
svcename <tcpip_service_name>
Related tasks:
v “Attaching to a remote server in the LDAP environment” on page 583
v “Catalog a node alias for ATTACH” on page 581
v “Registration of DB2 servers after installation” on page 578
Related reference:
v “UPDATE LDAP NODE command” in Command Reference
Once established, this alternate server information is returned to the client upon
connection.
Related concepts:
v “Automatic client reroute description and setup” on page 45
v “Client reroute setup when using JCC Type 4 drivers” on page 54
A node name for the DB2 server must be specified when registering the server in
LDAP. Applications use the node name to attach to the database server. If a
different name is preferred, you can catalog an alias for the LDAP node by using
the CATALOG LDAP NODE command.
To uncatalog an LDAP node, use the UNCATALOG LDAP NODE command. The
command appears similar to:
db2 uncatalog ldap node <ldap_node_name>
Related tasks:
v “Attaching to a remote server in the LDAP environment” on page 583
v “Registration of DB2 servers after installation” on page 578
Related reference:
v “CATALOG LDAP NODE command” in Command Reference
v “UNCATALOG LDAP NODE command” in Command Reference
Deregistration of an instance from LDAP also removes all the node, or alias, objects
and the database objects referring to the instance.
Deregistration of the DB2 server on either a local or a remote machine requires the
LDAP node name be specified for the server:
db2 deregister db2 server in ldap
node <node_name>
To deregister the DB2 server from LDAP from a client application, call the
db2LdapDeregister API.
When the DB2 server is deregistered, any LDAP node entry and LDAP database
entries referring to the same instance of the DB2 server are also uncataloged.
Related tasks:
v “Registration of DB2 servers after installation” on page 578
Related reference:
v “DEREGISTER command” in Command Reference
If the name already exists in the LDAP directory, the database is still created on the
local machine, but a warning message is returned stating the naming conflict in the
LDAP directory.
Related tasks:
v “Deregistering the database from the LDAP directory” on page 584
v “Registration of DB2 servers after installation” on page 578
Related reference:
v “CATALOG LDAP DATABASE command” in Command Reference
In the LDAP environment, you can attach to a remote database server using the
LDAP node name on the ATTACH command:
db2 attach to <ldap_node_name>
When a client application attaches to a node or connects to a database for the first
time, since the node is not in the local node directory, the database manager
searches the LDAP directory for the target node entry. If the entry is found in the
LDAP directory, the protocol information of the remote server is retrieved. If you
connect to the database and if the entry is found in the LDAP directory, then the
database information is also retrieved. Using this information, the database
manager automatically catalogs a database entry and a node entry on the local
machine. The next time the client application attaches to the same node or
database, the information in the local database directory is used without having to
search the LDAP directory.
In more detail: A caching mechanism exists so that the client only searches the
LDAP server once. Once the information is retrieved, it is stored or cached on the
local machine based on the values of the dir_cache database manager configuration
parameter and the DB2LDAPCACHE registry variable.
v If DB2LDAPCACHE=NO and dir_cache=NO, then always read the information
from LDAP.
v If DB2LDAPCACHE=NO and dir_cache=YES, then read the information from
LDAP once and insert it into the DB2 cache.
v If DB2LDAPCACHE=YES or is not set, then read the information from LDAP
server once and cache it into the local database, node, and DCS directories.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related tasks:
v “Catalog a node alias for ATTACH” on page 581
v “Registration of databases in the LDAP directory” on page 582
v “Registration of DB2 servers after installation” on page 578
v “Update the protocol information for the DB2 server” on page 580
Related reference:
v “ATTACH command” in Command Reference
Related tasks:
v “Registration of databases in the LDAP directory” on page 582
Related reference:
v “UNCATALOG LDAP DATABASE command” in Command Reference
In more detail: A caching mechanism exists so that the client only searches the
LDAP server once. Once the information is retrieved, it is stored or cached on the
local machine based on the values of the dir_cache database manager configuration
parameter and the DB2LDAPCACHE registry variable.
v If DB2LDAPCACHE=NO and dir_cache=NO, then always read the information
from LDAP.
v If DB2LDAPCACHE=NO and dir_cache=YES, then read the information from
LDAP once and insert it into the DB2 cache.
v If DB2LDAPCACHE=YES or is not set, then read the information from the LDAP
server once and cache it into the local database, node, and DCS directories.
Note: The caching of LDAP information is not applicable to user-level CLI or DB2
profile registry variables.
To refresh the database entries that refer to LDAP resources, use the following
command:
db2 refresh ldap database directory
To refresh the node entries on the local machine that refer to LDAP resources, use
the following command:
db2 refresh ldap node directory
As part of the refresh, all the LDAP entries that are saved in the local database and
node directories are removed. The next time that the application accesses the
database or node, it will read the information directly from LDAP and generate a
new entry in the local database or node directory.
To ensure the refresh is done in a timely way, you may want to:
v Schedule a refresh that is run periodically.
v Run the REFRESH command during system bootup.
v Use an available administration package to invoke the REFRESH command on
all client machines.
v Set DB2LDAPCACHE=“NO” to avoid LDAP information being cached in the
database, node, and DCS directories.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related reference:
v “dir_cache - Directory cache support configuration parameter” in Performance
Guide
v “REFRESH LDAP command” in Command Reference
The DB2 database system searches the current LDAP server (supported LDAP
servers are: IBM Tivoli Directory Server, Microsoft Active Directory, and Sun One
Directory Server) but in an environment where there are multiple LDAP servers,
you can define the scope of the search. For example, if the information is not
found in the current LDAP server, you can specify automatic search of all other
LDAP servers, or, alternatively, you can restrict the search scope to only the current
LDAP server, or to the local DB2 database catalog.
When you set the search scope, this sets the default search scope for the entire
enterprise. The search scope is controlled through the DB2 database profile registry
variable, DB2LDAP_SEARCH_SCOPE. To set the search scope value, use the “-gl”
option, which means “global in LDAP”, on the db2set command:
db2set -gl db2ldap_search_scope=<value>
For example, you may want to initially set the search scope to “global” after a new
database is created. This allows any DB2 client configured to use LDAP to search
all the LDAP servers to find the database. Once the entry has been recorded on
each machine after the first connect or attach for each client, if you have caching
enabled, the search scope can be changed to “local”. Once changed to “local”, each
client will not scan any LDAP servers.
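For example, assuming caching is enabled on the clients, you might set the scope
to global after creating a new database and later restrict it to local:
db2set -gl db2ldap_search_scope=global
db2set -gl db2ldap_search_scope=local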
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
Related tasks:
v “Declaring, showing, changing, resetting, and deleting registry and environment
variables” on page 68
When registering host databases in LDAP, there are two possible configurations:
v Direct connection to the host databases; or,
v Connection to the host database through a gateway.
In the first case, the user would register the host server in LDAP, then catalog the
host database in LDAP specifying the node name of the host server. In the second
case, the user would register the gateway server in LDAP, then catalog the host
database in LDAP specifying the node name of the gateway server.
If LDAP support is available at the DB2 Connect gateway, and the database is not
found in the gateway database directory, the DB2 database system will look the
database up in LDAP and attempt to keep the found information.
As an example showing both cases, consider the following: Suppose there is a host
database called NIAGARA_FALLS. It can accept incoming connections using
APPN and TCP/IP. If the client cannot connect directly to the host because it does
not have DB2 Connect, then it will connect using a gateway called “goto@niagara”.
After the registration and cataloging for both protocols is complete, if you want to
connect to the host using TCPIP, you connect to “nftcpip”. If you want to connect
to the host using APPN, you connect to “nfappn”. If you do not have DB2 Connect
on your client workstation, the connection will go through the gateway using
TCPIP and from there, depending on whether you use “nftcpip” or “nfappn”, it
will connect to host using TCP/IP or APPN respectively.
In general then, you can manually configure host database information in LDAP so
that each client does not need to manually catalog the database and node locally
on each machine. The process follows:
1. Register the host database server in LDAP. You must specify the remote
computer name, instance name, and the node type for the host database server
in the REGISTER command using the REMOTE, INSTANCE, and NODETYPE
clauses respectively. The REMOTE clause can be set to either the host name or
the LU name of the host server machine. The INSTANCE clause can be set to
any character string that has eight characters or less. (For example, the instance
name can be set to “DB2”.) The NODETYPE clause must be set to “DCS” to
indicate that this is a host database server.
2. Register the host database in LDAP using the CATALOG LDAP DATABASE
command. Any additional DRDA parameters can be specified by using the
PARMS clause. The database authentication type should be set to “DCS”.
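A sketch of these two steps for the TCP/IP case in the earlier example follows; the
host name, service name, and remote system name are placeholders:
db2 register db2 server in ldap as nftcpip protocol tcpip
   hostname <host_name> svcename <tcpip_service_name>
   remote <host_system_name> instance DB2 nodetype dcs
db2 catalog ldap database NIAGARA_FALLS as nftcpip at node nftcpip
   authentication dcs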
Related reference:
v “CATALOG LDAP DATABASE command” in Command Reference
v “REGISTER command” in Command Reference
Under the LDAP environment, the DB2 profile registry variables can be set at the
user level which allows a user to customize their own DB2 environment. To set the
DB2 profile registry variables at the user level, use the -ul option:
db2set -ul <variable>=<value>
Related tasks:
v “Declaring, showing, changing, resetting, and deleting registry and environment
variables” on page 68
Related reference:
v “db2set - DB2 profile registry command” in Command Reference
To enable LDAP support at some point following the completion of the installation
process, use the following procedure on each machine:
v Install the LDAP support binary files. Run the setup program and select the
LDAP Directory Exploitation support from Custom install. The setup program
installs the binary files and sets the DB2 profile registry variable
DB2_ENABLE_LDAP to “YES”.
Note: For Windows and UNIX platforms, you must explicitly enable LDAP by
setting the DB2_ENABLE_LDAP registry variable to “YES” using the
db2set command.
v (On UNIX platforms only) Declare the LDAP server’s TCP/IP host name and
(optional) port number using the following command:
db2set DB2LDAPHOST=<hostname>[:<port_number>]
v Declare the LDAP base distinguished name using the following command:
db2set DB2LDAP_BASEDN=<baseDN>
where baseDN is the name of the LDAP suffix that is defined at the LDAP server.
This LDAP suffix is used to contain DB2 objects.
v Register the current instance of the DB2 server in LDAP by using the REGISTER
LDAP AS command. For example:
db2 register ldap as <node-name> protocol tcpip
v Run the CATALOG LDAP DATABASE command if you have databases you
would like to register in LDAP. For example:
db2 catalog ldap database <dbname> as <alias_dbname>
v Enter the LDAP user’s distinguished name (DN) and password. These are
required only if you plan to use LDAP to store DB2 user-specific information.
Related tasks:
v “Disabling LDAP support” on page 589
Related reference:
v “CATALOG LDAP DATABASE command” in Command Reference
v “db2set - DB2 profile registry command” in Command Reference
v “REGISTER command” in Command Reference
Related tasks:
v “Declaring, showing, changing, resetting, and deleting registry and environment
variables” on page 68
v “Enabling LDAP support after installation is complete” on page 588
Related reference:
v “DEREGISTER command” in Command Reference
Related concepts:
v “Lightweight Directory Access Protocol (LDAP) overview” on page 573
v “Security considerations in an LDAP environment” on page 589
Access control is inherited by default and can be applied at the container level.
When a new object is created, it inherits the same security attributes as its parent
object. An administration tool available for the LDAP server can be used to define
access control for the container object.
Note: The authorization check is always performed by the LDAP server and not
by DB2. The LDAP authorization check is not related to DB2 authorization.
An account or auth ID that has SYSADM authority may not have access to
the LDAP directory.
When running the LDAP commands or APIs, if the bind distinguished name
(bindDN) and password are not specified, DB2 binds to the LDAP server using the
default credentials, which might not have sufficient authority to perform the
requested commands; in that case, an error is returned.
You can explicitly specify the user’s bindDN and password using the USER and
PASSWORD clauses for the DB2 commands or APIs.
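For example, a hypothetical invocation (the database names, DN, and password
are placeholders):
db2 catalog ldap database sample as test
     user "cn=db2admin,o=example" password mypasswd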
Related concepts:
v “Security considerations for Active Directory” on page 590
By default, objects under the computer object are readable by any authenticated
user and updatable by administrators (users that belong to the Administrators,
Domain Administrators, and Enterprise Administrators groups). To grant access to
a specific user or group, use the Active Directory Users and Computers Microsoft
Management Console (MMC) as follows:
1. Start the Active Directory Users and Computers administration tool
(Start -> Programs -> Administrative Tools -> Active Directory Users and
Computers)
2. Under View, select Advanced Features
3. Select the Computers container
4. Right-click the computer object that represents the server machine where
DB2 is installed and select Properties
5. Select the Security tab, then add the required access to the specified user or
group
The DB2 registry variables and CLI settings at the user level are maintained in the
DB2 property object under the user object. To set the DB2 registry variables or CLI
settings at the user level, a user needs to have sufficient access to create objects
under the User object.
Related concepts:
v “Security considerations in an LDAP environment” on page 589
Before DB2 can store the information into LDAP, the Directory Schema for the
LDAP server must include the object classes and attributes that DB2 uses. The
process of adding new object classes and attributes to the base schema is called
extending the Directory Schema.
Note: If you are using IBM Tivoli Directory Server, all the object classes and
attributes that are required by DB2 UDB Version 8.1 and earlier are included
in the base schema. In this case, you do not have to extend the base schema
with DB2 object classes and attributes. However, there are two new
attributes for DB2 UDB Version 8.2 that are not included in the base schema.
In this case, you have to extend the base schema with the two new DB2
database attributes.
Related concepts:
v “Extending the directory schema for IBM Tivoli Directory Server” on page 595
Related tasks:
v “Extending the directory schema for Active Directory” on page 591
You must extend the schema for Active Directory by running the DB2 Schema
Installation program, db2schex, before the first installation of the DB2 database
product on any machine that is part of a Windows domain.
The db2schex program is included on the product CD-ROM. The location of this
program on the CD-ROM is under the db2 directory, the windows subdirectory, and
the utilities subdirectory. For example:
x:\db2\windows\utilities\
You need to run the db2schex.exe command that comes with the DB2 UDB Version
8.2 product to extend the directory schema.
If you have run the db2schex.exe command that came with a previous version of
the DB2 database management system, running the version of this command that
comes with DB2 UDB Version 8.2 adds the following two optional
attributes to the ibm-db2Database class:
ibm-db2AltGwPtr
ibm-db2NodePtr
If you have not run the db2schex.exe command that came with a previous
version of the DB2 database management system on Windows, running the
version of this command that comes with DB2 Version 8.2 adds all the classes and
attributes for DB2 database system LDAP support.
Examples:
v To install the DB2 database schema:
db2schex
The DB2 Schema Installation program for Active Directory carries out the
following tasks:
1. Detects which server is the Schema Master
2. Binds to the Domain Controller that is the Schema Master
3. Ensures that the user has sufficient rights to add classes and attributes to the
schema
4. Ensures that the schema master is writable (that is, the safety interlock in the
registry is removed)
5. Creates all the new attributes
6. Creates all the new object classes
7. Detects errors and, if any occur, rolls back all changes to the
schema.
Related concepts:
v “Extending the LDAP directory schema with DB2 object classes and attributes”
on page 591
Related reference:
v “LDAP object classes and attributes used by DB2” on page 598
With Netscape LDAP Server Version 4.12 or later, the Netscape Directory Server
allows applications to extend the schema by adding attribute and object class
definitions to the following two files: slapd.user_oc.conf and
slapd.user_at.conf. These two files are located in the server's configuration
directory.
Note: If you are using Sun One Directory Server 5.0, refer to the topic about
extending the directory schema for the Sun One Directory Server.
The DB2 object classes must be added to the slapd.user_oc.conf file as follows:
############################################################################
#
# IBM DB2 Database
# Object Class Definitions
#
############################################################################
objectclass eProperty
oid 1.3.18.0.2.6.90
requires
objectClass
allows
cn,
propertyType,
binProperty,
binPropertyType,
cesProperty,
cesPropertyType,
cisProperty,
cisPropertyType
objectclass eApplicationSystem
objectclass DB2Node
oid 1.3.18.0.2.6.116
requires
objectClass,
db2nodeName
allows
db2nodeAlias,
host,
db2instanceName,
db2Type,
description,
protocolInformation
objectclass DB2Database
oid 1.3.18.0.2.6.117
requires
objectClass,
db2databaseName,
db2nodePtr
allows
db2databaseAlias,
description,
db2gwPtr,
db2additionalParameters,
db2authenticationLocation,
DCEPrincipalName,
db2databaseRelease,
db2ARLibrary
After adding the DB2 schema definition, the Directory Server must be restarted for
all changes to be active.
Related concepts:
v “Extending the directory schema for Sun One Directory Server” on page 596
Related reference:
v “LDAP object classes and attributes used by DB2” on page 598
For IBM Tivoli Directory Server, the base schema can be extended with the new
DB2 attributes and the updated DB2Database object class by applying an LDIF
file such as the following fragment:
dn: cn=schema
changetype: modify
add: attributetypes
attributetypes: (
1.3.18.0.2.4.3093
NAME ’db2altnodePtr’
DESC ’DN pointer to DB2 alternate node object’
SYNTAX 1.3.6.1.4.1.1466.115.121.1.12)
-
add: ibmattributetypes
ibmattributetypes: (
1.3.18.0.2.4.3093
DBNAME (’db2altnodePtr’ ’db2altnodePtr’)
ACCESS-CLASS NORMAL
LENGTH 1000)
dn: cn=schema
changetype: modify
replace: objectclasses
objectclasses: (
1.3.18.0.2.6.117
NAME ’DB2Database’
DESC ’DB2 database’
SUP cimSetting
MUST ( db2databaseName $ db2nodePtr )
MAY ( db2additionalParameters $ db2altgwPtr $ db2altnodePtr
$ db2ARLibrary $ db2authenticationLocation $ db2databaseAlias
$ db2databaseRelease $ db2gwPtr $ DCEPrincipalName ) )
After adding the DB2 schema definition, the Directory Server must be restarted for
all changes to be active.
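The LDIF changes shown above can be applied with a standard ldapmodify client;
the host name, credentials, and file name here are placeholders:
ldapmodify -c -h ldaphost.example.com -p 389 -D cn=root -w password -f db2schema.ldif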
Related concepts:
v “Extending the directory schema for Sun One Directory Server” on page 596
v “Extending the LDAP directory schema with DB2 object classes and attributes”
on page 591
Related tasks:
v “Extending the directory schema for Active Directory” on page 591
To have the Sun One Directory Server work in your environment, add the
60ibmdb2.ldif file to the directory that contains the server's schema files.
After adding the DB2 schema definition, the Directory Server must be restarted for
all changes to be active.
Related concepts:
v “Extending the directory schema for IBM Tivoli Directory Server” on page 595
v “Extending the LDAP directory schema with DB2 object classes and attributes”
on page 591
Related tasks:
v “Extending the directory schema for Active Directory” on page 591
eProperty object class:
Attributes: cisPropertyType, cisProperty, cesPropertyType, cesProperty,
binPropertyType, binProperty
Type: structural
OID (object identifier): 1.3.18.0.2.6.90
GUID (Global Unique Identifier): b3afd69c-5c5b-11d3-b818-002035559151
DB2Node object class:
Attributes: db2instanceName, db2Type, protocolInformation/ServiceBindingInformation
Type: structural
OID (object identifier): 1.3.18.0.2.6.116
GUID (Global Unique Identifier): b3afd65a-5c5b-11d3-b818-002035559151
Special notes:
1. The DB2Node class is derived from the eSap object class
under IBM Tivoli Directory Server and from the
ServiceConnectionPoint object class under Microsoft
Active Directory.
2. The host attribute is used under the IBM Tivoli Directory
Server environment. The dNSHostName attribute is
used under Microsoft Active Directory.
3. The protocolInformation attribute is used only under the IBM
Tivoli Directory Server environment. For Microsoft
Active Directory, the ServiceBindingInformation
attribute, inherited from the
ServiceConnectionPoint class, is used to contain the
protocol information.
Note: On a DB2 client for Windows, if the APPN information is not configured on
the local SNA stack, and the LAN adapter address and optional change
password LU are found in LDAP, the DB2 client tries to use this
information to configure the SNA stack, if it knows how to configure the
stack.
DB2Database object class:
Attributes: db2nodePtr, db2additionalParameter, db2ARLibrary,
db2authenticationLocation, db2gwPtr, db2databaseRelease,
DCEPrincipalName, db2altgwPtr, db2altnodePtr
Type: structural
OID (object identifier): 1.3.18.0.2.6.117
GUID (Global Unique Identifier): b3afd659-5c5b-11d3-b818-002035559151
Related concepts:
v “Lightweight Directory Access Protocol (LDAP) overview” on page 573
CREATE_EXTERNAL_ROUTINE may also be required.
User Analyst: Defines the data requirements for an application program by
examining the system catalog views. Requires SELECT on the catalog views and
CONNECT on one or more databases.
Program End User: Executes an application program. Requires EXECUTE on the
package and CONNECT on one or more databases.
Related concepts:
v “Database administration authority (DBADM)” on page 509
v “Database authorities” on page 511
v “LOAD authority” on page 511
v “System administration authority (SYSADM)” on page 506
v “System control authority (SYSCTRL)” on page 507
v “System maintenance authority (SYSMAINT)” on page 508
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
The following views and table functions list information about privileges held by
users, identities of users granting privileges, and object ownership:
SYSCAT.DBAUTH Lists the database privileges
SYSCAT.TABAUTH Lists the table and view privileges
SYSCAT.COLAUTH Lists the column privileges
SYSCAT.PACKAGEAUTH Lists the package privileges
SYSCAT.INDEXAUTH Lists the index privileges
SYSCAT.SCHEMAAUTH Lists the schema privileges
SYSCAT.PASSTHRUAUTH Lists the server privilege
SYSCAT.ROUTINEAUTH Lists the routine (functions, methods, and stored
procedures) privileges
Privileges granted to users by the system have SYSIBM as the grantor.
SYSADM, SYSMAINT, SYSCTRL, and SYSMON are not listed in the system
catalog.
The CREATE and GRANT statements place privileges in the system catalog. Users
with SYSADM and DBADM authorities can grant and revoke SELECT privilege on
the system catalog views.
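For example (the user name JAMES echoes the examples later in this section):
GRANT SELECT ON TABLE SYSCAT.TABAUTH TO USER JAMES
REVOKE SELECT ON TABLE SYSCAT.TABAUTH FROM USER JAMES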
Related tasks:
v “Retrieving all names with DBADM authority” on page 611
v “Retrieving all privileges granted to users” on page 613
v “Retrieving authorization names with granted privileges” on page 610
v “Retrieving names authorized to access a table” on page 612
v “Securing the system catalog view” on page 613
Related reference:
v “SYSCAT.COLAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.DBAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.INDEXAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.PACKAGEAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.PASSTHRUAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.ROUTINEAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.SCHEMAAUTH catalog view” in SQL Reference, Volume 1
v “SYSCAT.TABAUTH catalog view” in SQL Reference, Volume 1
Starting with version 9.1 of the DB2 database manager, you can use the
PRIVILEGES and other administrative views to retrieve information about the
authorization names that have been granted privileges in a database. For example,
the following query retrieves all explicit privileges and the authorization IDs to
which they were granted, plus other information, from the PRIVILEGES
administrative view:
SELECT AUTHID, PRIVILEGE, OBJECTNAME, OBJECTSCHEMA, OBJECTTYPE FROM SYSIBMADM.PRIVILEGES
Prior to version 9.1, no single system catalog view contained information about all
privileges. For releases earlier than version 9.1, the following statement retrieves all
authorization names with privileges:
SELECT DISTINCT GRANTEE, GRANTEETYPE, 'DATABASE' FROM SYSCAT.DBAUTH
UNION
SELECT DISTINCT GRANTEE, GRANTEETYPE, 'TABLE ' FROM SYSCAT.TABAUTH
UNION
SELECT DISTINCT GRANTEE, GRANTEETYPE, 'PACKAGE ' FROM SYSCAT.PACKAGEAUTH
UNION
SELECT DISTINCT GRANTEE, GRANTEETYPE, 'INDEX ' FROM SYSCAT.INDEXAUTH
UNION
SELECT DISTINCT GRANTEE, GRANTEETYPE, 'COLUMN ' FROM SYSCAT.COLAUTH
UNION
SELECT DISTINCT GRANTEE, GRANTEETYPE, 'SCHEMA ' FROM SYSCAT.SCHEMAAUTH
UNION
SELECT DISTINCT GRANTEE, GRANTEETYPE, 'SERVER ' FROM SYSCAT.PASSTHRUAUTH
ORDER BY GRANTEE, GRANTEETYPE, 3
Periodically, the list retrieved by this statement should be compared with lists of
user and group names defined in the system security facility. You can then identify
those authorization names that are no longer valid.
Note: If you are supporting remote database clients, it is possible that the
authorization name is defined at the remote client only and not on your
database server machine.
Related concepts:
v “Using the system catalog for security issues” on page 609
Related reference:
v “AUTH_LIST_GROUPS_FOR_AUTHID table function – Retrieve group
membership list for a given authorization ID” in Administrative SQL Routines and
Views
v “AUTHORIZATIONIDS administrative view – Retrieve authorization IDs and
types” in Administrative SQL Routines and Views
v “OBJECTOWNERS administrative view – Retrieve object ownership
information” in Administrative SQL Routines and Views
v “PRIVILEGES administrative view – Retrieve privilege information” in
Administrative SQL Routines and Views
The following statement retrieves all authorization names that have been directly
granted DBADM authority:
SELECT DISTINCT GRANTEE, GRANTEETYPE FROM SYSCAT.DBAUTH
WHERE DBADMAUTH = 'Y'
Note: This query will not return information about authorization names that
acquired DBADM authority implicitly by having SYSADM authority.
Starting with version 9.1 of the DB2 database manager, you can use the
PRIVILEGES and other administrative views to retrieve information about the
authorization names that have been granted privileges in a database. The following
statement retrieves all authorization names (and their types) that are directly
authorized to access the table EMPLOYEE with the qualifier JAMES:
SELECT DISTINCT AUTHID, AUTHIDTYPE FROM SYSIBMADM.PRIVILEGES
WHERE OBJECTNAME = 'EMPLOYEE' AND OBJECTSCHEMA = 'JAMES'
For releases earlier than version 9.1, the following query retrieves the same
information:
SELECT DISTINCT GRANTEETYPE, GRANTEE FROM SYSCAT.TABAUTH
WHERE TABNAME = 'EMPLOYEE'
AND TABSCHEMA = 'JAMES'
UNION
SELECT DISTINCT GRANTEETYPE, GRANTEE FROM SYSCAT.COLAUTH
WHERE TABNAME = 'EMPLOYEE'
AND TABSCHEMA = 'JAMES'
To find out who can update the table EMPLOYEE with the qualifier JAMES, issue
the following statement:
SELECT DISTINCT GRANTEETYPE, GRANTEE FROM SYSCAT.TABAUTH
WHERE TABNAME = 'EMPLOYEE' AND TABSCHEMA = 'JAMES' AND
(CONTROLAUTH = 'Y' OR
UPDATEAUTH IN ('G','Y'))
UNION
SELECT DISTINCT GRANTEETYPE, GRANTEE FROM SYSCAT.DBAUTH
WHERE DBADMAUTH = 'Y'
UNION
SELECT DISTINCT GRANTEETYPE, GRANTEE FROM SYSCAT.COLAUTH
WHERE TABNAME = 'EMPLOYEE' AND TABSCHEMA = 'JAMES' AND
PRIVTYPE = 'U'
This retrieves any authorization names with DBADM authority, as well as those
names to which CONTROL or UPDATE privileges have been directly granted.
However, it will not return the authorization names of users who only hold
SYSADM authority.
Remember that some of the authorization names may be groups, not just
individual users.
Related concepts:
v “Table and view privileges” on page 515
v “Using the system catalog for security issues” on page 609
Related reference:
v “PRIVILEGES administrative view – Retrieve privilege information” in
Administrative SQL Routines and Views
By making queries on the system catalog views, users can retrieve a list of the
privileges they hold and a list of the privileges they have granted to other users.
Starting with version 9.1 of the DB2 database manager, you can use the
PRIVILEGES and other administrative views to retrieve information about the
authorization names that have been granted privileges in a database. For example,
the following query retrieves all the privileges granted to the current session
authorization ID:
SELECT * FROM SYSIBMADM.PRIVILEGES
WHERE AUTHID = SESSION_USER AND AUTHIDTYPE = 'U'
For releases earlier than version 9.1, the following examples provide similar
information. For example, the following statement retrieves a list of the database
privileges that have been directly granted to the individual authorization name
JAMES:
SELECT * FROM SYSCAT.DBAUTH
WHERE GRANTEE = 'JAMES' AND GRANTEETYPE = 'U'
The following statement retrieves a list of the table privileges that were directly
granted by the user JAMES:
SELECT * FROM SYSCAT.TABAUTH
WHERE GRANTOR = 'JAMES'
The following statement retrieves a list of the individual column privileges that
were directly granted by the user JAMES:
SELECT * FROM SYSCAT.COLAUTH
WHERE GRANTOR = 'JAMES'
Related concepts:
v “Database authorities” on page 511
v “Authorization, privileges, and object ownership” on page 501
v “Using the system catalog for security issues” on page 609
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Related reference:
v “PRIVILEGES administrative view – Retrieve privilege information” in
Administrative SQL Routines and Views
If you have created a database using the RESTRICTIVE option, and you want to
check that the permissions granted to PUBLIC are limited, you can issue the
following query to verify which schemas PUBLIC can access:
SELECT DISTINCT OBJECTSCHEMA FROM SYSIBMADM.PRIVILEGES WHERE AUTHID='PUBLIC'
OBJECTSCHEMA
------------
SYSFUN
SYSIBM
SYSPROC
To see what access PUBLIC still has to SYSIBM, you can issue the following query
to check what privileges are granted on SYSIBM. The results show that only
EXECUTE on certain procedures and functions is granted.
SELECT * FROM SYSIBMADM.PRIVILEGES WHERE OBJECTSCHEMA = 'SYSIBM'
AUTHID AUTHIDTYPE PRIVILEGE GRANTABLE OBJECTNAME OBJECTSCHEMA OBJECTTYPE
---------... ---------- ---------- --------- ---------------... ------------... ----------
PUBLIC G EXECUTE N SQL060207192129400 SYSPROC FUNCTION
PUBLIC G EXECUTE N SQL060207192129700 SYSPROC FUNCTION
PUBLIC G EXECUTE N SQL060207192129701 SYSPROC
...
PUBLIC G EXECUTE Y TABLES SYSIBM PROCEDURE
PUBLIC G EXECUTE Y TABLEPRIVILEGES SYSIBM PROCEDURE
PUBLIC G EXECUTE Y STATISTICS SYSIBM PROCEDURE
PUBLIC G EXECUTE Y SPECIALCOLUMNS SYSIBM PROCEDURE
PUBLIC G EXECUTE Y PROCEDURES SYSIBM PROCEDURE
PUBLIC G EXECUTE Y PROCEDURECOLS SYSIBM PROCEDURE
PUBLIC G EXECUTE Y PRIMARYKEYS SYSIBM PROCEDURE
PUBLIC G EXECUTE Y FOREIGNKEYS SYSIBM PROCEDURE
PUBLIC G EXECUTE Y COLUMNS SYSIBM PROCEDURE
PUBLIC G EXECUTE Y COLPRIVILEGES SYSIBM PROCEDURE
PUBLIC G EXECUTE Y UDTS SYSIBM PROCEDURE
PUBLIC G EXECUTE Y GETTYPEINFO SYSIBM PROCEDURE
PUBLIC G EXECUTE Y SQLCAMESSAGE SYSIBM PROCEDURE
PUBLIC G EXECUTE Y SQLCAMESSAGECCSID SYSIBM PROCEDURE
For releases earlier than version 9.1 of the DB2 database manager, during database
creation, SELECT privilege on the system catalog views is granted to PUBLIC. In
most cases, this does not present any security problems. For very sensitive data,
however, it may be inappropriate, as these tables describe every object in the
database. If this is the case, consider revoking the SELECT privilege from PUBLIC;
then grant the SELECT privilege as required to specific users. Granting and
revoking SELECT on the system catalog views is done in the same way as for any
view, but you must have either SYSADM or DBADM authority to do this.
At a minimum, if you do not want any user to be able to know what objects other
users have access to, consider restricting access to the following catalog
and administrative views:
v SYSCAT.COLAUTH
v SYSCAT.DBAUTH
v SYSCAT.INDEXAUTH
v SYSCAT.PACKAGEAUTH
v SYSCAT.PASSTHRUAUTH
v SYSCAT.ROUTINEAUTH
v SYSCAT.SCHEMAAUTH
v SYSCAT.SECURITYLABELACCESS
v SYSCAT.SECURITYPOLICYEXEMPTIONS
v SYSCAT.SEQUENCEAUTH
v SYSCAT.SURROGATEAUTHIDS
v SYSCAT.TABAUTH
v SYSCAT.TBSPACEAUTH
v SYSCAT.XSROBJECTAUTH
v SYSIBMADM.AUTHORIZATIONIDS
v SYSIBMADM.OBJECTOWNERS
v SYSIBMADM.PRIVILEGES
You should also examine the columns for which statistics are gathered. Some of the
statistics recorded in the system catalog contain data values which could be
sensitive information in your environment. If these statistics contain sensitive data,
you may wish to revoke SELECT privilege from PUBLIC for the
SYSCAT.COLUMNS and SYSCAT.COLDIST catalog views.
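For example, if you decide that these statistics are sensitive in your environment,
you might issue:
REVOKE SELECT ON TABLE SYSCAT.COLUMNS FROM PUBLIC
REVOKE SELECT ON TABLE SYSCAT.COLDIST FROM PUBLIC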
If you wish to limit access to the system catalog views, you could define views to
let each authorization name retrieve information about its own privileges.
For example, the following view MYSELECTS includes the owner and name of
every table on which a user’s authorization name has been directly granted
SELECT privilege:
CREATE VIEW MYSELECTS AS
SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABAUTH
WHERE GRANTEETYPE = 'U'
AND GRANTEE = USER
AND SELECTAUTH = 'Y'
The following statement makes the view available to every authorization name:
GRANT SELECT ON TABLE MYSELECTS TO PUBLIC
Finally, remember to revoke SELECT privilege on the catalog view and its base
table by issuing the following two statements:
REVOKE SELECT ON TABLE SYSCAT.TABAUTH FROM PUBLIC
REVOKE SELECT ON TABLE SYSIBM.SYSTABAUTH FROM PUBLIC
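Each user can then retrieve his or her own entries by querying the view:
SELECT * FROM MYSELECTS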
Related concepts:
v “Catalog statistics” in Performance Guide
v “Database authorities” on page 511
v “Using the system catalog for security issues” on page 609
Related tasks:
v “Granting privileges” on page 519
v “Revoking privileges” on page 521
Related reference:
v “CREATE DATABASE command” in Command Reference
v “PRIVILEGES administrative view – Retrieve privilege information” in
Administrative SQL Routines and Views
Security considerations
1. Gaining Access to Data through Indirect Means
The following are indirect means through which users can gain access to data
they might not be authorized for:
v Catalog views: The DB2 database system catalog views store metadata and
statistics about database objects. Users with SELECT access to the catalog
views can gain some knowledge about data that they might not be qualified
for. For better security, make sure that only qualified users have access to the
catalog views.
v Replication: When you replicate data, even the protected data is reproduced
at the target location. For better security, make sure that the target location is
at least as secure as the source location.
v Exception tables: When you specify an exception table while loading data
into a table, users with access to the exception table can gain information
that they might not be authorized for. For better security, only grant access to
the exception table to authorized users and drop the exception table as soon
as you are done with it.
v Backup table space or database: Users with the authority to run the backup
command can take a backup of a database or a table space, including any
protected data, and restore the data somewhere else. The backup can include
data that the user might not otherwise have access to.
The backup command can be executed by users with SYSADM, SYSCTRL, or
SYSMAINT authority.
v Set session authorization: In DB2 Universal Database Version 8 or earlier, a user
with DBADM authority could use the SET SESSION AUTHORIZATION SQL
statement to set the session authorization ID to any database user. In DB2
V9.1 database systems, a user must be explicitly authorized through the
GRANT SETSESSIONUSER statement before they can set the session
authorization ID.
When migrating an existing database to a DB2 V9.1 database system,
however, a user with existing explicit DBADM authority (for example,
granted in SYSCAT.DBAUTH) keeps the ability to set the session
authorization to any database user. This is allowed so that existing
applications will continue to work. Being able to set the session authorization
potentially allows access to all protected data. For more restrictive security,
you can override this setting by executing the REVOKE SETSESSIONUSER
SQL statement.
v Statement and deadlock monitoring: As part of the deadlock monitoring
activity of DB2 database management systems, values associated with
parameter markers are written to the monitoring output when the WITH
VALUES clause is specified. A user with access to the monitoring output can
gain access to information for which they might not be authorized.
v Traces: A trace can contain table data. A user with access to such a trace can
gain access to information that they might not be authorized for.
v Dump files: To help in debugging certain problems, DB2 database products
might generate memory dump files in the sqllib\db2dump directory. These
memory dump files might contain table data. If they do, users with access to
the files can gain access to information that they might not be authorized for.
For better security you should limit access to the sqllib\db2dump directory.
v db2dart: The db2dart tool examines a database and reports any architectural
errors that it finds. The tool can access table data and DB2 does not enforce
access control for that access. A user with the authority to run the db2dart
tool or with access to the db2dart output can gain access to information that
they might not be authorized for.
The following are the default privileges, recorded in certain system catalog tables,
that are granted when a database is created:
1. SYSIBM.SYSDBAUTH
v The database creator is granted the following privileges:
– DBADM
– CREATETAB
– CREATEROLE
– BINDADD
– CONNECT
– NOFENCE
– IMPLSCHEMA
– LOAD
– EXTERNALROUTINE
– QUIESCECONNECT
v The special group PUBLIC is granted the following privileges:
– CREATETAB
– BINDADD
– CONNECT
– IMPLSCHEMA
2. SYSIBM.SYSTABAUTH
v The special group PUBLIC is granted the following privileges:
– SELECT on all SYSCAT and SYSIBM tables
– SELECT and UPDATE on all SYSSTAT tables
3. SYSIBM.SYSROUTINEAUTH
v The special group PUBLIC is granted the following privileges:
– EXECUTE with GRANT on all procedures in schema SQLJ
– EXECUTE with GRANT on all functions and procedures in schema
SYSFUN
– EXECUTE with GRANT on all functions and procedures in schema
SYSPROC
– EXECUTE on all table functions in schema SYSIBM
– EXECUTE on all other procedures in schema SYSIBM
4. SYSIBM.SYSPACKAGEAUTH
v The database creator is granted the following privileges:
– CONTROL on all packages created in the NULLID schema
– BIND with GRANT on all packages created in the NULLID schema
– EXECUTE with GRANT on all packages created in the NULLID schema
Related concepts:
v “Explain snapshot” on page 456
v “Visual Explain” on page 463
There are existing firewall products that incorporate one of the firewall types
described in the related topics below; many other firewall products incorporate
some combination of these types.
Related concepts:
v “Application proxy firewalls” on page 620
v “Circuit level firewalls” on page 620
v “Screening router firewalls” on page 619
v “Stateful multi-layer inspection (SMLI) firewalls” on page 620
For all firewall solutions (except SOCKS), you need to ensure that all the ports
used by DB2 database are open for incoming and outgoing packets. DB2 database
uses port 523 for the DB2 Administration Server (DAS), which is used by the DB2
database tools. Determine the ports used by all your server instances by using the
services file to map the service name in the server database manager configuration
file to its port number.
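For example, if the svcename database manager configuration parameter for an
instance were set to db2c_db2inst1 (an illustrative service name), the services file
would need an entry such as:
db2c_db2inst1   50000/tcp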
The DB2 Connect product on a firewall machine can act as a proxy to the
destination server. Also, a DB2 database server on the firewall, acting as a hop
server to the final destination server, acts like an application proxy.
Related concepts:
v “Introduction to firewall support” on page 619
The DB2 database audit facility generates, and allows you to maintain, an audit
trail for a series of predefined database events. The records generated from this
facility are kept in an audit log file. The analysis of these records can reveal usage
patterns which would identify system misuse. Once identified, actions can be taken
to reduce or eliminate such system misuse.
The audit facility acts at an instance level, recording all instance level activities and
database level activities.
The audit log (db2audit.log) and the audit configuration file (db2audit.cfg) are
located in the instance’s security subdirectory. At the time you create an instance,
read/write permissions are set on these files, where possible, by the operating
system. By default, the permissions are read/write for the instance owner only. It
is recommended that you do not change these permissions.
Users of the audit facility administrator tool, db2audit, must have SYSADM
authority.
The audit facility must be stopped and started explicitly. When starting, the audit
facility uses existing audit configuration information. Since the audit facility is
independent of the DB2 database server, it will remain active even if the instance is
stopped. In fact, when the instance is stopped, an audit record may be generated
in the audit log.
Authorized users of the audit facility can control the following actions within the
audit facility:
v Start recording auditable events within the DB2 database instance.
v Stop recording auditable events within the DB2 database instance.
Note: Ensure that the audit facility has been turned on by issuing the db2audit
start command before using the audit utilities.
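For example, the following sequence starts the audit facility and then displays the
active audit settings (db2audit describe reports the current configuration):
db2audit start
db2audit describe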
There are different categories of audit records that may be generated. In the
description of the categories of events available for auditing (below), you should
notice that following the name of each category is a one-word keyword used to
identify the category type. The categories of events available for auditing are:
v Audit (AUDIT). Generates records when audit settings are changed or when the
audit log is accessed.
v Authorization Checking (CHECKING). Generates records during authorization
checking of attempts to access or manipulate DB2 database objects or functions.
v Object Maintenance (OBJMAINT). Generates records when creating or dropping
data objects.
v Security Maintenance (SECMAINT). Generates records when granting or
revoking: object or database privileges, or DBADM authority. Records are also
generated when the database manager security configuration parameters
SYSADM_GROUP, SYSCTRL_GROUP, or SYSMAINT_GROUP are modified.
v System Administration (SYSADMIN). Generates records when operations
requiring SYSADM, SYSMAINT, or SYSCTRL authority are performed.
v User Validation (VALIDATE). Generates records when authenticating users or
retrieving system security information.
v Operation Context (CONTEXT). Generates records to show the operation context
when a database operation is performed. This category allows for better
interpretation of the audit log file. When used with the log’s event correlator
field, a group of events can be associated back to a single database operation.
For example, a query statement for dynamic queries, a package identifier for
static queries, or an indicator of the type of operation being performed, such as
CONNECT, can provide needed context when analyzing audit results.
Note: The SQL or XQuery statement providing the operation context might be
very long and is completely shown within the CONTEXT record. This can
make the CONTEXT record very large.
You can audit failures, successes, or both.
Any operation on the database may generate several records. The actual number of
records generated and moved to the audit log depends on the number of
categories of events to be recorded as specified by the audit facility configuration.
It also depends on whether successes, failures, or both, are audited. For this reason,
it is important to be selective of the events to audit.
Related concepts:
v “Audit facility behavior” on page 623
Related tasks:
v “Controlling DB2 database audit facility activities” on page 655
Related reference:
v “Audit facility messages” on page 636
v “Audit facility usage” on page 624
The timing of the writing of audit records to the audit log can have a significant
impact on the performance of databases in the instance. The writing of the audit
records can take place synchronously or asynchronously with the occurrence of the
events causing the generation of those records. The value of the audit_buf_sz
database manager configuration parameter determines when the writing of audit
records is done.
If the value of this parameter is zero (0), the writing is done synchronously. The
event generating the audit record will wait until the record is written to disk. The
wait associated with each record causes the performance of DB2 database to
decrease.
If the value of audit_buf_sz is greater than zero, the record writing is done
asynchronously. The value of the audit_buf_sz when it is greater than zero is the
number of 4 KB pages used to create an internal buffer. The internal buffer is used
to keep a number of audit records before writing a group of them out to disk. The
statement generating the audit record as a result of an audit event will not wait
until the record is written to disk, and can continue its operation.
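For example, the following command sets a 16-page (64 KB) buffer so that audit
records are written asynchronously; the value 16 is illustrative, not a
recommendation:
db2 update dbm cfg using audit_buf_sz 16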
The setting of the ERRORTYPE audit facility parameter controls how errors are
managed between the DB2 database system and the audit facility. When the audit
facility is active, and the setting of the ERRORTYPE audit facility parameter is
AUDIT, the audit facility is treated in the same way as any other part of the DB2
database system. An audit record must be written (to disk in synchronous mode,
or to the audit buffer in asynchronous mode) before the operation can continue.
Depending on the API or query statement and the audit settings for the DB2
database instance, none, one, or several audit records may be generated for a
particular event. For example, an SQL UPDATE statement with a SELECT
subquery may result in one audit record containing the results of the authorization
check for UPDATE privilege on a table and another record containing the results of
the authorization check for SELECT privilege on a table.
For dynamic data manipulation language (DML) statements, audit records are
generated for all authorization checking at the time that the statement is prepared.
Reuse of those statements by the same user will not be audited again since no
authorization checking takes place at that time. However, if a change has been
made to one of the catalog tables containing privilege information, then in the next
unit of work, the statement privileges for the cached dynamic SQL or XQuery
statements are checked again and one or more new audit records created.
For a package containing only static DML statements, the only auditable event that
could generate an audit record is the authorization check to see if a user has the
privilege to execute that package. The authorization checking and possible audit
record creation required for the static SQL or XQuery statements in the package is
carried out at the time the package is precompiled or bound. The execution of the
static SQL or XQuery statements within the package is not auditable. When a
package is bound again either explicitly by the user, or implicitly by the system,
audit records are generated for the authorization checks required by the static SQL
or XQuery statements.
Note: When executing DDL, the section number recorded for all events (except the
context events) in the audit record will be zero (0) no matter what the actual
section number of the statement might have been.
Related concepts:
v “Introduction to the DB2 database audit facility” on page 621
Related reference:
v “audit_buf_sz - Audit buffer size configuration parameter” in Performance Guide
v “Audit facility usage” on page 624
Audit configuration:
db2audit configure scope { all | audit | checking | objmaint | secmaint
   | sysadmin | validate | context } status { both | success | failure }
   [ , ... ] errortype { audit | normal }
Audit extraction:
db2audit extract { file output-file | delasc [ delimiter load-delimiter ] }
   [ category { audit | checking | objmaint | secmaint | sysadmin
   | validate | context } ] [ database database-name ]
   [ status { success | failure } ]
Note: The default SCOPE is all categories except CONTEXT, and this
setting may result in records being generated rapidly. In
conjunction with the mode (synchronous or asynchronous), the
selection of the categories may result in a significant performance
reduction and significantly increased disk requirements.
v STATUS. This action specifies whether only successful or failing events,
or both successful and failing events, should be logged.
which the audit facility will use as a temporary space when pruning the
audit log. This temporary space allows for the pruning of the audit log
when the disk it resides on is full and does not have enough space to
allow for a pruning operation.
start This parameter causes the audit facility to begin auditing events based on
the contents of the db2audit.cfg file. In a partitioned DB2 database
instance, auditing will begin on all database partitions when this clause is
specified. If the “audit” category of events has been specified for auditing,
then an audit record will be logged when the audit facility is started.
stop This parameter causes the audit facility to stop auditing events. In a
partitioned DB2 database instance, auditing will be stopped on all database
partitions when this clause is specified. If the “audit” category of events
has been specified for auditing, then an audit record will be logged when
the audit facility is stopped.
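For example, the following hypothetical sequence (the database name sample is a
placeholder) configures failure-only auditing of authorization checking and then
extracts those records in delimited ASCII format:
db2audit configure scope checking status failure errortype normal
db2audit extract delasc delimiter 0xff category checking
   database sample status failure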
Related concepts:
v “Audit facility tips and techniques” on page 654
v “Introduction to the DB2 database audit facility” on page 621
Related reference:
v “db2audit - Audit facility administrator tool command” in Command Reference
Related concepts:
v “Audit facility behavior” on page 623
v “Audit facility tips and techniques” on page 654
Related tasks:
v “Creating tables to hold the DB2 audit data” on page 628
v “Creating DB2 audit data files” on page 631
v “Loading DB2 audit data into tables” on page 632
v “Selecting DB2 audit data from tables” on page 635
Related reference:
v “Audit facility usage” on page 624
Prerequisites:
v See the CREATE SCHEMA statement for the authorities and privileges that you
require to create a schema.
v See the CREATE TABLE statement for the authorities and privileges that you
require to create a table.
v Decide which table space you want to use to hold the tables. (This topic does
not describe how to create table spaces.)
Procedure:
If you do not want to use all of the data that is contained in the files, you can omit
columns from the table definitions, or bypass creating tables, as required. If you
omit columns from the table definitions, you must modify the commands that you
use to load data into these tables.
1. Issue the db2 command to open a DB2 command window.
2. Optional. Create a schema to hold the tables. Issue the following command.
For this example, the schema is called AUDIT:
CREATE SCHEMA AUDIT
3. Optional. If you created the AUDIT schema, switch to the schema before
creating any tables. Issue the following command:
SET CURRENT SCHEMA = 'AUDIT'
4. To create the table that will contain records from the audit.del file, issue the
following SQL statement:
CREATE TABLE AUDIT (TIMESTAMP CHAR(26),
CATEGORY CHAR(8),
EVENT VARCHAR(32),
CORRELATOR INTEGER,
STATUS INTEGER,
USERID VARCHAR(1024),
AUTHID VARCHAR(128))
5. To create the table that will contain records from the checking.del file, issue
the following SQL statement:
CREATE TABLE CHECKING (TIMESTAMP CHAR(26),
CATEGORY CHAR(8),
EVENT VARCHAR(32),
CORRELATOR INTEGER,
STATUS INTEGER,
DATABASE CHAR(8),
USERID VARCHAR(1024),
AUTHID VARCHAR(128),
NODENUM SMALLINT,
COORDNUM SMALLINT,
APPID VARCHAR(255),
APPNAME VARCHAR(1024),
PKGSCHEMA VARCHAR(128),
PKGNAME VARCHAR(128),
PKGSECNUM SMALLINT,
OBJSCHEMA VARCHAR(128),
OBJNAME VARCHAR(128),
OBJTYPE VARCHAR(32),
ACCESSAPP CHAR(18),
ACCESSATT CHAR(18),
PKGVER VARCHAR(64),
CHKAUTHID VARCHAR(128))
6. To create the table that will contain records from the objmaint.del file, issue
the following SQL statement:
CREATE TABLE OBJMAINT (TIMESTAMP CHAR(26),
CATEGORY CHAR(8),
EVENT VARCHAR(32),
CORRELATOR INTEGER,
STATUS INTEGER,
DATABASE CHAR(8),
USERID VARCHAR(1024),
AUTHID VARCHAR(128),
NODENUM SMALLINT,
COORDNUM SMALLINT,
Related tasks:
v “Creating DB2 audit data files” on page 631
v “Setting a schema” on page 169
Related reference:
v “CREATE SCHEMA statement” in SQL Reference, Volume 2
v “CREATE TABLE statement” in SQL Reference, Volume 2
Related tasks:
v “Loading DB2 audit data into tables” on page 632
Related reference:
v “Audit record layout for SYSADMIN events” on page 650
v “Audit record layout for VALIDATE events” on page 651
v “db2audit - Audit facility administrator tool command” in Command Reference
v “Audit facility usage” on page 624
v “Audit record layout for AUDIT events” on page 637
v “Audit record layout for CHECKING events” on page 638
v “Audit record layout for CONTEXT events” on page 652
v “Audit record layout for OBJMAINT events” on page 643
v “Audit record layout for SECMAINT events” on page 645
See the topic on the privileges, authorities, and authorizations required to use the
load utility for more information.
Procedure:
Use the load utility to load the data into the tables. Issue a separate load command
for each table. If you omitted one or more columns from the table definitions, you
must modify the version of the LOAD command that you use to successfully load
the data. Also, if you specified a delimiter character other than the default (0xff)
when you extracted the audit data, you must also modify the version of the LOAD
command that you use (see the topic "File type modifiers for load" for more
information).
1. Issue the db2 command to open a DB2 command window.
2. To load the AUDIT table, issue the following command:
LOAD FROM audit.del OF del MODIFIED BY CHARDEL0xff INSERT INTO schema.AUDIT
Note: When specifying the file name, use the fully qualified path name. For
example, if you have DB2 database installed on the C: drive of a
Windows-based computer, you would specify C:\Program
Files\IBM\SQLLIB\instance\security\audit.del as the fully qualified
file name for the audit.del file.
After loading the AUDIT table, issue the following DELETE statement to
ensure that you do not load duplicate rows into the table the next time you
load it. When you extracted the audit records from the db2audit.log file, all
records in the file were written to the .del files. Likely, the .del files
contained records that were written after the hour to which the audit log was
subsequently pruned (because the db2audit prune command only prunes
records to a specified hour). The next time you extract the audit records, the
new .del files will contain records that were previously extracted, but not
deleted by the db2audit prune command (because they were written after the
hour specified for the prune operation). Deleting rows from the table to the
same hour to which the db2audit.log file was pruned ensures that the table
does not contain duplicate rows, and that no audit records are lost.
DELETE FROM schema.AUDIT WHERE TIMESTAMP > TIMESTAMP('YYYYMMDDHH0000')
Where YYYYMMDDHH is the value that you specified when you pruned the
db2audit.log file. Because the DB2 audit facility continues to write audit
records to the db2audit.log file after it is pruned, you must specify 0000 for
the minutes and seconds to ensure that audit records that were written after
the db2audit.log file was pruned are not deleted from the table.
3. To load the CHECKING table, issue the following command:
LOAD FROM checking.del OF del MODIFIED BY CHARDEL0xff INSERT INTO
schema.CHECKING
After loading the CHECKING table, issue the following SQL statement to
ensure that you do not load duplicate rows into the table the next time you
load it:
DELETE FROM schema.CHECKING WHERE TIMESTAMP > TIMESTAMP('YYYYMMDDHH0000')
Where YYYYMMDDHH is the value that you specified when you pruned the
log file.
4. To load the OBJMAINT table, issue the following command:
Where YYYYMMDDHH is the value that you specified when you pruned the
log file.
5. To load the SECMAINT table, issue the following command:
LOAD FROM secmaint.del OF del MODIFIED BY CHARDEL0xff INSERT INTO
schema.SECMAINT
After loading the SECMAINT table, issue the following SQL statement to
ensure that you do not load duplicate rows into the table the next time you
load it:
DELETE FROM schema.SECMAINT WHERE TIMESTAMP > TIMESTAMP('YYYYMMDDHH0000')
Where YYYYMMDDHH is the value that you specified when you pruned the
log file.
6. To load the SYSADMIN table, issue the following command:
LOAD FROM sysadmin.del OF del MODIFIED BY CHARDEL0xff INSERT INTO
schema.SYSADMIN
After loading the SYSADMIN table, issue the following SQL statement to
ensure that you do not load duplicate rows into the table the next time you
load it:
DELETE FROM schema.SYSADMIN WHERE TIMESTAMP > TIMESTAMP('YYYYMMDDHH0000')
Where YYYYMMDDHH is the value that you specified when you pruned the
log file.
7. To load the VALIDATE table, issue the following command:
LOAD FROM validate.del OF del MODIFIED BY CHARDEL0xff INSERT INTO
schema.VALIDATE
After loading the VALIDATE table, issue the following SQL statement to
ensure that you do not load duplicate rows into the table the next time you
load it:
DELETE FROM schema.VALIDATE WHERE TIMESTAMP > TIMESTAMP('YYYYMMDDHH0000')
Where YYYYMMDDHH is the value that you specified when you pruned the
log file.
8. To load the CONTEXT table, issue the following command:
LOAD FROM context.del OF del MODIFIED BY CHARDEL0xff INSERT INTO
schema.CONTEXT
After loading the CONTEXT table, issue the following SQL statement to
ensure that you do not load duplicate rows into the table the next time you
load it:
DELETE FROM schema.CONTEXT WHERE TIMESTAMP > TIMESTAMP('YYYYMMDDHH0000')
Where YYYYMMDDHH is the value that you specified when you pruned the
log file.
9. After you finish loading the data into the tables, delete the .del files from the
security subdirectory of the sqllib directory.
10. When you have loaded the audit data into the tables, you are ready to select
data from these tables.
Related concepts:
v “Privileges, authorities, and authorizations required to use Load” in Data
Movement Utilities Guide and Reference
v “Load considerations for MDC tables” in Administration Guide: Planning
v “LOAD authority” on page 511
Related tasks:
v “Selecting DB2 audit data from tables” on page 635
v “Enabling parallelism for loading data” on page 10
v “Loading data into a table using the Load wizard” on page 237
Related reference:
v “File type modifiers for the load utility” in Command Reference
Prerequisites:
See the topic on the SELECT statement for information about the authorities and
privileges required to select data from a table.
Procedure:
The select that you perform should reflect the type of analysis that you want to do
on the data. For example, you can select records according to an authorization ID
(authid) to determine the type of activities that this authorization ID has been
performing:
SELECT * FROM AUDIT.CHECKING WHERE AUTHID = '<authorization ID>'
where <authorization ID> is the user ID for which you want to analyze the data.
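Similarly, a hypothetical query to count failed authorization checks by object,
assuming that a nonzero STATUS value indicates a failure:
SELECT OBJSCHEMA, OBJNAME, COUNT(*) AS FAILURES
  FROM AUDIT.CHECKING
  WHERE STATUS <> 0
  GROUP BY OBJSCHEMA, OBJNAME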
Related reference:
v “Subselect” in SQL Reference, Volume 1
v “SELECT statement” in SQL Reference, Volume 2
v “Audit record layout for AUDIT events” on page 637
v “Audit record layout for CHECKING events” on page 638
v “Audit record layout for CONTEXT events” on page 652
v “Audit record layout for OBJMAINT events” on page 643
v “Audit record layout for SECMAINT events” on page 645
v “Audit record layout for SYSADMIN events” on page 650
v “Audit record layout for VALIDATE events” on page 651
v “List of possible CHECKING access approval reasons” on page 640
v “List of possible CHECKING access attempted types” on page 641
v “List of possible CONTEXT audit events” on page 653
v “List of possible SECMAINT privileges or authorities” on page 647
v “List of possible SYSADMIN audit events” on page 650
SQL1322N An error occurred when writing to the audit log file.
Explanation: The DB2 database audit facility encountered an error when invoked
to record an audit event to the audit log file. There is no space on the file
system where the audit log resides.
User response: The system administrator should free up space on this file system
or prune the audit log to reduce its size.
When more space is available, use db2audit to flush out any data in memory, and
to reset the auditor to a ready state. Ensure that appropriate extracts have
occurred, or a copy of the log has been made before pruning the log, as deleted
records are not recoverable.
sqlcode: -1322
sqlstate: 50830
SQL1323N An error occurred when accessing the audit configuration file.
Explanation: The audit configuration file (db2audit.cfg) could not be opened, or
was invalid. Possible reasons for this error are that the db2audit.cfg file either
does not exist, or has been damaged.
User response: Take one of the following actions:
v Restore from a saved version of the file.
v Reset the audit facility configuration file by issuing
db2audit reset
sqlcode: -1323
sqlstate: 57019
Related concepts:
v “Introduction to the DB2 database audit facility” on page 621
Related reference:
v “Audit record layout for AUDIT events” on page 637
v “Audit record layout for CHECKING events” on page 638
v “Audit record layout for CONTEXT events” on page 652
v “Audit record layout for OBJMAINT events” on page 643
v “Audit record layout for SECMAINT events” on page 645
v “Audit record layout for SYSADMIN events” on page 650
v “Audit record layout for VALIDATE events” on page 651
AUDIT
Audit Event VARCHAR(32) Specific Audit Event.
Related concepts:
v “Audit facility record layouts (introduction)” on page 636
CHECKING
Audit Event VARCHAR(32) Specific Audit Event.
Related concepts:
v “Audit facility record layouts (introduction)” on page 636
Related reference:
v “Audit record object types” on page 639
v “List of possible CHECKING access approval reasons” on page 640
v “List of possible CHECKING access attempted types” on page 641
Related reference:
v “Audit record layout for CHECKING events” on page 638
v “Audit record layout for OBJMAINT events” on page 643
v “Audit record layout for SECMAINT events” on page 645
Related reference:
v “Audit record layout for CHECKING events” on page 638
v “List of possible CHECKING access attempted types” on page 641
Related reference:
v “Audit record layout for CHECKING events” on page 638
v “List of possible CHECKING access approval reasons” on page 640
OBJMAINT
Audit Event VARCHAR(32) Specific Audit Event.
Related concepts:
v “Introduction to the DB2 database audit facility” on page 621
Related reference:
v “Audit record object types” on page 639
SECMAINT
Audit Event VARCHAR(32) Specific Audit Event.
If the object type field is ACCESS_RULE then this field contains the
security policy name associated with the rule. The name of the rule
is stored in the field Object Name.
If the object type field is ACCESS_RULE then this field contains the
name of the rule. The security policy name associated with the rule
is stored in the field Object Schema.
Possible values:
v READ
v WRITE
v ALL
Related concepts:
v “Audit facility record layouts (introduction)” on page 636
Related reference:
v “Audit record object types” on page 639
v “List of possible SECMAINT privileges or authorities” on page 647
Related reference:
v “Audit record layout for SECMAINT events” on page 645
SYSADMIN
Audit Event VARCHAR(32) Specific Audit Event.
Possible values include: Those shown in the list following this table.
Event Correlator INTEGER Correlation identifier for the operation being audited. Can be used
to identify what audit records are associated with a single event.
Event Status INTEGER Status of audit event, represented by an SQLCODE, where a
successful event is greater than or equal to 0 and a failed event is less than 0.
Related concepts:
v “Audit facility record layouts (introduction)” on page 636
Related reference:
v “List of possible SYSADMIN audit events” on page 650
Related reference:
v “Audit record layout for SYSADMIN events” on page 650
VALIDATE
Audit Event VARCHAR(32) Specific Audit Event.
Related concepts:
v “Audit facility record layouts (introduction)” on page 636
CONTEXT
Audit Event VARCHAR(32) Specific Audit Event.
Possible values include: Those shown in the list following this table.
Event Correlator INTEGER Correlation identifier for the operation being audited. Can be used
to identify what audit records are associated with a single event.
Database Name CHAR(8) Name of the database for which the event was generated. Blank if
this was an instance level audit event.
User ID VARCHAR(1024) User ID at time of audit event.
Authorization ID VARCHAR(128) Authorization ID at time of audit event.
Origin Node Number SMALLINT Node number at which the audit event occurred.
Coordinator Node Number SMALLINT Node number of the coordinator node.
Application ID VARCHAR (255) Application ID in use at the time the audit event occurred.
Application Name VARCHAR (1024) Application name in use at the time the audit event occurred.
Package Schema VARCHAR (128) Schema of the package in use at the time of the audit event.
Package Name VARCHAR (128) Name of the package in use at the time the audit event occurred.
Package Section Number SMALLINT Section number in the package being used at the time the audit
event occurred.
Statement Text (statement) CLOB (2M) Text of the SQL or XQuery statement, if applicable. Null if no
SQL or XQuery statement text is available.
Package Version VARCHAR (64) Version of the package in use at the time the audit event occurred.
Related concepts:
v “Audit facility record layouts (introduction)” on page 636
Related reference:
v “List of possible CONTEXT audit events” on page 653
Related reference:
v “Audit record layout for CONTEXT events” on page 652
When extracting audit records in a delimited ASCII format suitable for loading into
a DB2 database relational table, you should be clear about the delimiter used
within the statement text field. You specify the delimiter when extracting the
delimited ASCII file, using:
db2audit extract delasc delimiter <load delimiter>
The load delimiter can be a single character (such as ") or a four-byte string
representing a hexadecimal value (such as "0xff"). Examples of valid commands
are:
db2audit extract delasc
db2audit extract delasc delimiter !
db2audit extract delasc delimiter 0xff
If you have used anything other than the default load delimiter (") as the
delimiter when extracting, you should use the MODIFIED BY option on the LOAD
command. A partial example of the LOAD command with "0xff" used as the
delimiter follows:
db2 load from context.del of del modified by chardel0xff replace into ...
This overrides the default load character string delimiter, which is the double
quotation mark (").
Related concepts:
v “Audit facility record layouts (introduction)” on page 636
Related reference:
v “Audit facility usage” on page 624
As part of our discussion on the control of the audit facility activities, we will use
a simple scenario: a user, newton, runs an application called testapp that connects
to a database and creates a table. This same application is used in each of the
examples discussed below.
After beginning the audit facility with this configuration (using "db2audit start"),
and then running the testapp application, the following records are generated and
placed in the audit log. By extracting the audit records from the log, you will see
the following records generated for the two actions carried out by the application:
Action Type of Record Created
CONNECT
timestamp=1998-06-24-08.42.10.555345;category=CONTEXT;
audit event=CONNECT;event correlator=2;database=FOO;
application id=*LOCAL.newton.980624124210;
application name=testapp;
timestamp=1998-06-24-08.42.10.944374;category=VALIDATE;
audit event=AUTHENTICATION;event correlator=2;event status=0;
database=FOO;userid=boss;authid=BOSS;execution id=newton;
application id=*LOCAL.newton.980624124210;application name=testapp;
auth type=SERVER;
timestamp=1998-06-24-08.42.11.527490;category=VALIDATE;
audit event=CHECK_GROUP_MEMBERSHIP;event correlator=2;
event status=-1092;database=FOO;userid=boss;authid=BOSS;
execution id=newton;application id=*LOCAL.newton.980624124210;
application name=testapp;auth type=SERVER;
timestamp=1998-06-24-08.42.11.561187;category=VALIDATE;
audit event=CHECK_GROUP_MEMBERSHIP;event correlator=2;
event status=-1092;database=FOO;userid=boss;authid=BOSS;
execution id=newton;application id=*LOCAL.newton.980624124210;
application name=testapp;auth type=SERVER;
timestamp=1998-06-24-08.42.11.594620;category=VALIDATE;
audit event=CHECK_GROUP_MEMBERSHIP;event correlator=2;
event status=-1092;database=FOO;userid=boss;authid=BOSS;
execution id=newton;application id=*LOCAL.newton.980624124210;
application name=testapp;auth type=SERVER;
timestamp=1998-06-24-08.42.11.622984;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=2;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
object name=FOO;object type=DATABASE;access approval reason=DATABASE;
access attempted=CONNECT;
timestamp=1998-06-24-08.42.11.801554;category=CONTEXT;
audit event=COMMIT;event correlator=2;database=FOO;userid=boss;
authid=BOSS;application id=*LOCAL.newton.980624124210;
application name=testapp;
timestamp=1998-06-24-08.42.41.450975;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=2;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
package schema=NULLID;package name=SQLC28A1;object schema=NULLID;
object name=SQLC28A1;object type=PACKAGE;
access approval reason=OBJECT;access attempted=EXECUTE;
CREATE TABLE
timestamp=1998-06-24-08.42.41.539692;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=3;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
package schema=NULLID;package name=SQLC28A1;package section=0;
object schema=BOSS;object name=AUDIT;object type=TABLE;
access approval reason=DATABASE;access attempted=CREATE;
timestamp=1998-06-24-08.42.41.570876;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=3;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
package schema=NULLID;package name=SQLC28A1;package section=0;
object name=BOSS;object type=SCHEMA;access approval reason=DATABASE;
access attempted=CREATE;
timestamp=1998-06-24-08.42.41.957524;category=OBJMAINT;
audit event=CREATE_OBJECT;event correlator=3;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
package schema=NULLID;package name=SQLC28A1;package section=0;
object schema=BOSS;object name=AUDIT;object type=TABLE;
timestamp=1998-06-24-08.42.42.018900;category=CONTEXT;
audit event=COMMIT;event correlator=3;database=FOO;userid=boss;
authid=BOSS;application id=*LOCAL.newton.980624124210;
application name=testapp;package schema=NULLID;
package name=SQLC28A1;
As you can see, there are a significant number of audit records generated from the
audit configuration that requests the auditing of all possible audit events and
types.
In most cases, you will configure the audit facility for a more restricted or focused
view of the events you wish to audit. For example, you may want to only audit
those events that fail. In this case, the audit facility could be configured as follows:
db2audit configure scope audit,checking,objmaint,secmaint,sysadmin,
validate status failure
Note: This configuration is the initial audit configuration, and also the one that
is in effect after the audit configuration is reset.
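For reference, the audit configuration can be returned to this initial state by
issuing:
db2audit configure reset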
After beginning the audit facility with this configuration and then running the
testapp application, the following records are generated and placed in the audit
log (assuming that testapp has not been run before). By extracting the audit
records from the log, you will see the following records generated for the two
actions carried out by the application:
Action Type of Record Created
CONNECT
timestamp=1998-06-24-08.42.11.527490;category=VALIDATE;
audit event=CHECK_GROUP_MEMBERSHIP;event correlator=2;
event status=-1092;database=FOO;userid=boss;authid=BOSS;
execution id=newton;application id=*LOCAL.newton.980624124210;
application name=testapp;auth type=SERVER;
timestamp=1998-06-24-08.42.11.561187;category=VALIDATE;
audit event=CHECK_GROUP_MEMBERSHIP;event correlator=2;
event status=-1092;database=FOO;userid=boss;authid=BOSS;
execution id=newton;application id=*LOCAL.newton.980624124210;
application name=testapp;auth type=SERVER;
timestamp=1998-06-24-08.42.11.594620;category=VALIDATE;
audit event=CHECK_GROUP_MEMBERSHIP;event correlator=2;
event status=-1092;database=FOO;userid=boss;authid=BOSS;
execution id=newton;application id=*LOCAL.newton.980624124210;
application name=testapp;auth type=SERVER;
CREATE TABLE
(none)
There are far fewer audit records generated from the audit configuration that
requests the auditing of all possible audit events (except CONTEXT) but only
when the event attempt fails. By changing the audit configuration, you can control
the type and nature of the audit records that are generated.
The audit facility can also create audit records only when the users you want to
audit have been successfully granted access to an object. In this case, you
could configure the audit facility as follows:
db2audit configure scope checking status success
After beginning the audit facility with this configuration and then running the
testapp application, the following records are generated and placed in the audit
log (assuming that testapp has not been run before). By extracting the audit
records from the log, you will see the following records generated for the two
actions carried out by the application:
Action Type of Record Created
CONNECT
timestamp=1998-06-24-08.42.11.622984;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=2;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
object name=FOO;object type=DATABASE;access approval reason=DATABASE;
access attempted=CONNECT;
timestamp=1998-06-24-08.42.41.450975;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=2;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
package schema=NULLID;package name=SQLC28A1;object schema=NULLID;
object name=SQLC28A1;object type=PACKAGE;
access approval reason=OBJECT;access attempted=EXECUTE;
timestamp=1998-06-24-08.42.41.539692;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=3;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
package schema=NULLID;package name=SQLC28A1;package section=0;
object schema=BOSS;object name=AUDIT;object type=TABLE;
access approval reason=DATABASE;access attempted=CREATE;
timestamp=1998-06-24-08.42.41.570876;category=CHECKING;
audit event=CHECKING_OBJECT;event correlator=3;event status=0;
database=FOO;userid=boss;authid=BOSS;
application id=*LOCAL.newton.980624124210;application name=testapp;
package schema=NULLID;package name=SQLC28A1;package section=0;
object name=BOSS;object type=SCHEMA;access approval reason=DATABASE;
access attempted=CREATE;
CREATE TABLE
(none)
Related reference:
v “Audit facility usage” on page 624
Unless otherwise specified, all names can include the following characters:
v A through Z. When used in most names, characters A through Z are converted
from lowercase to uppercase.
v 0 through 9.
v ! % ( ) { } . - ^ ~ _ (underscore) @, #, $, and the space character.
v \ (backslash).
Do not use SQL reserved words to name tables, views, columns, indexes, or
authorization IDs.
Other special characters might work, depending on your operating system and on
where you are working with the DB2 database. However, because they are not
guaranteed to work, do not use these other special characters when naming objects
in your database.
User and group names must also follow the rules imposed by the specific operating
system. For example, on Linux and UNIX platforms, user names and primary group
names must follow these rules:
v Allowed characters: lowercase a through z, 0 through 9, and _ (underscore) for
names not starting with 0 through 9.
v Length must be less than or equal to 8 characters.
You also need to consider object naming rules, workstation naming rules, naming
rules in an NLS environment, and naming rules in a Unicode environment.
Related concepts:
v “DB2 database object naming rules” on page 663
v “Federated database object naming rules” on page 666
v “User, user ID and group naming rules” on page 666
v “Workstation naming rules” on page 667
Databases, database aliases, and instances:
v Database names must be unique within the location in which they are cataloged.
On Linux and UNIX implementations of the DB2 database manager, this location is
a directory path, while on Windows implementations, it is a logical disk.
v Database alias names must be unique within the system database directory. When
a new database is created, the alias defaults to the database name. As a result,
you cannot create a database using a name that exists as a database alias, even
if there is no database with that name.
v Database, database alias, and instance names can have up to 8 bytes.
v On Windows, no instance can have the same name as a service name.
Note: To avoid potential problems, do not use the special characters @, #, and $
in a database name if you intend to use the database in a communications
environment. Also, because these characters are not common to all keyboards, do
not use them if you plan to use the database in another language.
Function mappings, index specifications, nicknames, servers, type mappings, user
mappings, and wrappers:
v Nicknames, mappings, index specifications, servers, and wrapper names cannot
exceed 128 bytes.
v Server and nickname options and option settings are limited to 255 bytes.
v Names for federated database objects can also include:
– Valid accented letters (such as ö)
– Multibyte characters, except multibyte spaces (for multibyte environments)
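To illustrate the database alias rule above with a brief sketch (the database name
SALES is hypothetical): creating a database without specifying an alias also
reserves the database name as an alias in the system database directory, so a
later attempt to create or catalog a different database under that same name
would fail:
db2 create database sales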
Related concepts:
v “General naming rules” on page 663
Notes:
1. Some operating systems allow case-sensitive user IDs and passwords. Check
your operating system documentation to see whether this is the case.
2. The authorization ID returned from a successful CONNECT or ATTACH is
truncated to 8 characters. An ellipsis (...) is appended to the authorization ID
and the SQLWARN fields contain warnings to indicate truncation.
3. Trailing blanks from user IDs and passwords are removed.
Related concepts:
v “Federated database object naming rules” on page 666
v “General naming rules” on page 663
The “Password change” dialog of the DB2 Configuration Assistant (CA) can also
be used to change the password.
Related concepts:
v “Additional restrictions and recommendations regarding the use of schema
names” on page 667
v “DB2 database object naming rules” on page 663
v “Delimited identifiers and object names” on page 665
v “Federated database object naming rules” on page 666
v “General naming rules” on page 663
v “User, user ID and group naming rules” on page 666
v “Workstation naming rules” on page 667
In a partitioned database system, there is still only one workstation nname that
represents the entire partitioned database system, but each node has its own
derived unique NetBIOS nname.
The workstation nname that represents the partitioned database system is stored in
the database manager configuration file for the database partition server that owns
the instance.
Each node’s unique nname is a derived combination of the workstation nname and
the node number.
If a node does not own an instance, its NetBIOS nname is derived as follows:
1. The first character of the instance-owning machine’s workstation nname is used
as the first character of the node’s NetBIOS nname.
2. The next 1 to 3 characters represent the node number. The range is from 1 to
999.
3. The remaining characters are taken from the instance-owning machine’s
workstation nname. The number of remaining characters depends on the length
of the instance-owning machine’s workstation nname. This number can be from
0 to 4.
For example, suppose the instance-owning machine's workstation nname is GEORGE
and a database partition server that does not own an instance has node number 3.
Its derived NetBIOS nname would be G3EORG: the first character G, the node
number 3, and then up to four more characters (EORG) taken from GEORGE.
If you have changed the default workstation nname during the installation, the
workstation nname’s last 4 characters should be unique across the NetBIOS
network to minimize the chance of deriving a conflicting NetBIOS nname.
Related concepts:
v “General naming rules” on page 663
When naming database objects (such as tables and views), program labels, host
variables, and cursors, you can also use elements from the extended character set
(for example, letters with diacritical marks). Precisely which characters are
available depends on the code page in use.
In DBCS environments, the extended character set consists of all the characters in
the basic character set, plus the following:
v All double-byte characters in each DBCS code page, except the double-byte
space, are valid letters.
v The double-byte space is a special character.
v The single-byte characters available in each mixed code page are assigned to
various categories.
Related concepts:
v “DB2 database object naming rules” on page 663
v “General naming rules” on page 663
v “Workstation naming rules” on page 667
Clients can enter any character that is supported by their environment, and all the
characters in the identifiers will be converted to UTF-8 by the database manager.
Two points must be taken into account when specifying national language
characters in identifiers for a UCS-2 database:
v Each non-ASCII character requires two to four bytes. Therefore, an n-byte
identifier can only hold somewhere between n/4 and n characters, depending on
the ratio of ASCII to non-ASCII characters. If you have only one or two
non-ASCII (for example, accented) characters, the limit is closer to n characters,
while for an identifier that is completely non-ASCII (for example, in Japanese),
only n/4 to n/3 characters can be used.
v If identifiers are to be entered from different client environments, they should be
defined using the common subset of characters available to those clients. For
example, if a UCS-2 database is to be accessed from Latin-1, Arabic, and
Japanese environments, all identifiers should realistically be limited to ASCII.
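As a rough worked example of the byte arithmetic above (the numbers are
illustrative only): a 128-byte identifier made up entirely of 3-byte non-ASCII
characters holds at most 42 characters (128 / 3, rounded down), while the same
identifier in pure ASCII holds the full 128 characters.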
Related concepts:
v “DB2 database system integration with Windows Management Instrumentation”
on page 672
Related reference:
v “Windows Management Instrumentation samples” in Samples Topics
The DB2 profile registry variables can be accessed by WMI by using the built-in
Registry provider.
The WMI Software Development Kit (WMI SDK) includes several built-in
providers:
v PerfMon provider
v Registry event provider
v Registry provider
v Windows event log provider
v Win32 provider
v WDM provider
The DB2 errors that are in the Event Logs can be accessed by WMI by using the
built-in Windows Event Log provider.
DB2 database system has a DB2 WMI Administration provider, and sample WMI
script files, to access the following managed objects:
1. Instances of the database server including those instances that are distributed.
The following operations can be done:
v Enumerate instances
v Configure database manager parameters
v Start/stop/query the status of the DB2 server service
v Set up or establish communications
2. Databases. The following operations can be done:
v Enumerate databases
v Configure database parameters
v Create/drop databases
v Backup/restore/roll forward databases
You will need to register the DB2 WMI provider with the system before running
WMI applications. Registration is done by entering the following commands:
v mofcomp %DB2PATH%\bin\db2wmi.mof
This command loads the definition of the DB2 WMI schema into the system.
v regsvr32 %DB2PATH%\bin\db2wmi.dll
This command registers the DB2 WMI provider COM DLL with Windows.
Related concepts:
v “Introduction to Windows Management Instrumentation (WMI)” on page 671
Related reference:
v “Windows Management Instrumentation samples” in Samples Topics
User accounts, user IDs, and passwords only need to be defined at the primary
domain controller to be able to access domain resources.
Note: Two-part user IDs are supported by the CONNECT statement and the
ATTACH command. The qualifier of the SAM-compatible user ID is the
NetBIOS style name which has a maximum length of 15 characters.
During the setup procedure when a Windows server is installed, you may select to
create:
v A primary domain controller in a new domain
v A backup domain controller in a known domain
v A stand-alone server in a known domain.
Selecting “controller” in a new domain makes that server the primary domain
controller.
The user may log on to the local machine, or when the machine is installed in a
Windows Domain, the user may log on to the Domain. To authenticate the user,
DB2 checks the local machine first, then the Domain Controller for the current
Domain, and finally any Trusted Domains known to the Domain Controller.
To illustrate how this works, suppose that the DB2 instance requires Server
authentication. The configuration is as follows:
Each machine has a security database, the Security Accounts Manager (SAM). DC1 is
the domain controller, in which the client machine, Ivan, and the DB2 server, Servr,
are enrolled. TDC2 is a trusted domain for DC1 and the client machine, Abdul, is a
member of TDC2’s domain.
Related concepts:
v “Groups and user authentication on Windows” on page 679
Related tasks:
v “Authentication with groups and domain security (Windows)” on page 681
v “Installing DB2 on a backup domain controller” on page 680
v “Using a backup domain controller with DB2 database systems” on page 677
Related concepts:
v “DB2 and Windows security introduction” on page 675
Note: Before attempting to connect to the DB2 database, ensure that the DB2
Security Service has been started. The Security Service is installed as part
of the Windows installation. DB2 is then installed and “registered” as a
Windows service; however, it is not started automatically. To start the DB2
Security Service, enter the NET START DB2NTSECSERVER command.
Related concepts:
v “DB2 and Windows security introduction” on page 675
Related concepts:
v “Groups and user authentication on Windows” on page 679
You specify the backup domain controller to the DB2 database system by setting
the DB2DMNBCKCTLR registry variable.
If you know the name of the domain for which the DB2 database server is the
backup domain controller, use:
db2set DB2DMNBCKCTLR=<domain_name>
To have the DB2 database system determine the domain for which the local machine
is a backup domain controller, use:
db2set DB2DMNBCKCTLR=?
Note: DB2 database does not use an existing backup domain controller by default
because a backup domain controller can get out-of-sync with the primary
domain controller, causing a security exposure. Domain controllers get
out-of-sync when the primary domain controller’s security database is
updated but the changes are not propagated to a backup domain controller.
This can happen if there are network latencies or if the computer browser
service is not operational.
Related tasks:
v “Installing DB2 on a backup domain controller” on page 680
Related concepts:
v “Groups and user authentication on Windows” on page 679
v “Trust relationships between domains on Windows” on page 679
Related concepts:
v “Support for global groups (on Windows)” on page 677
v “Trust relationships between domains on Windows” on page 679
Related tasks:
v “Authentication with groups and domain security (Windows)” on page 681
Related reference:
v “User name and group name restrictions (Windows)” on page 678
Trust relationships are not transitive. This means that explicit trust relationships
need to be established in each direction between domains. For example, the
trusting domain may not necessarily be a trusted domain.
Related concepts:
v “Groups and user authentication on Windows” on page 679
v “Support for global groups (on Windows)” on page 677
Related reference:
v “User name and group name restrictions (Windows)” on page 678
Related concepts:
v “DB2 and Windows security introduction” on page 675
The advantage of having a backup domain controller, in this case, is that users are
authenticated faster and the LAN is not as congested as it would have been had
there been no BDC.
If the DB2DMNBCKCTLR profile registry variable is not set or is set to blank, the
DB2 server performs authentication at the primary domain controller.
The only valid declared settings for DB2DMNBCKCTLR are “?” or a domain
name.
Related tasks:
v “Using a backup domain controller with DB2 database systems” on page 677
The DB2 database system allows you to specify either a local group or a global
group when granting privileges or defining authority levels. A user is determined
to be a member of a group if the user’s account is defined explicitly in the local or
global group, or implicitly by being a member of a global group defined to be a
member of a local group.
You can display the current settings of all defined profile registry variables by
issuing:
db2set -all
If the DB2 database manager is not running on a domain controller, then you
should issue:
db2set -g DB2_GRP_LOOKUP=DOMAIN
This command tells the DB2 database system to use a domain controller in its own
domain to find the name of a domain controller in the accounts domain. That is,
when a DB2 database finds out that a particular user account is defined in domain
x, rather than attempting to locate a domain controller for domain x, it sends that
request to a domain controller in its own domain. The name of the domain
controller in the account domain will be found and returned to the machine the
DB2 database is running on. There are two advantages to this method:
1. The nearest domain controller is found when the primary domain controller is
unavailable.
2. The nearest domain controller is found when the primary domain controller is
geographically remote.
Related concepts:
v “Acquiring Windows users’ group information using an access token” on page
483
v “Groups and user authentication on Windows” on page 679
Procedure:
To prevent the difficulties that arise when multiple users have the same user ID
across a domain forest, use an ordered domain list, defined with the db2set
command and the DB2DOMAINLIST registry variable. When setting the order, separate
the domains in the list with commas. You must make a conscious decision regarding
the order in which the domains are searched when authenticating users.
User IDs that are present in domains further down the domain list must be renamed
if they are to be authenticated for access.
Access can also be controlled through the domain list. For example, if the domain
of a user is not in the list, the user is not allowed to connect.
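A minimal sketch of setting such an ordered list, using hypothetical domain names,
follows; the -g option sets the registry variable globally, as in the other db2set
examples in this chapter:
db2set -g DB2DOMAINLIST=PRODDOM,DEVDOM,TESTDOM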
Related concepts:
v “DB2 and Windows security introduction” on page 675
Note that the user name and local or global group do not need to be defined on
the domain where the database server is running, but they must be on the same
domain as each other.
Table 94. Successful Connection Using a Domain Controller
Domain1:
v A trust relationship exists with Domain2.
Domain2:
v A trust relationship exists with Domain1.
v The local or global group grp2 is defined.
v The user name id2 is defined.
v The user name id2 is part of grp2.
Related concepts:
v “Groups and user authentication on Windows” on page 679
Related tasks:
v “Authentication with groups and domain security (Windows)” on page 681
Related tasks:
v “Accessing remote DB2 database performance information” on page 688
v “Displaying DB2 database and DB2 Connect performance values” on page 687
v “Enabling remote access to DB2 performance information” on page 686
v “Registering DB2 with the Windows performance monitor” on page 685
v “Resetting DB2 performance values” on page 688
Related reference:
v “Windows performance objects” on page 687
The setup program automatically registers DB2 with the Windows Performance
Monitor for you.
To make DB2 database and DB2 Connect performance information accessible to the
Windows Performance Monitor, you must register the DLL for the DB2 for
Windows Performance Counters. This also enables any other Windows application
using the Win32 performance APIs to get performance data.
To install and register the DB2 for Windows Performance Counters DLL
(DB2Perf.DLL) with the Windows Performance Monitor, type:
db2perfi -i
Registering the DLL also creates a new key in the services option of the registry.
One entry gives the name of the DLL, which provides the counter support. Three
other entries give names of functions provided within that DLL. These functions
include:
v Open
Called when the DLL is first loaded by the system in a process.
v Collect
Called to request performance information from the DLL.
v Close
Called when the DLL is unloaded.
Related reference:
v “db2perfi - Performance counters registration utility command” in Command
Reference
In order to see Windows performance objects from another DB2 for Windows
computer, you must register an administrator username and password with the
DB2 database manager. (The default Windows Performance Monitor username,
SYSTEM, is a DB2 database reserved word and cannot be used.) To register the
name, type:
db2perfr -r username password
Note: The username used must conform to the DB2 database naming rules.
The username and password data is held in a key in the registry, with security that
allows access only by administrators and the SYSTEM account. The data is
encoded to prevent security concerns about storing an administrator password in
the registry.
Notes:
1. Once a username and password combination has been registered with the DB2
database system, even local instances of the Performance Monitor will explicitly
log on using that username and password. This means that if the username
information registered with DB2 database system does not match, local sessions
of the Performance Monitor will not show DB2 database performance
information.
2. The username and password combination must be maintained to match the
username and password values stored in the Windows Security database. If the
username or password is changed in the Windows Security database, the
username and password combination used for remote performance monitoring
must be reset.
3. To deregister, type:
db2perfr -u <username> <password>
Related reference:
v “db2perfr - Performance monitor registration tool command” in Command
Reference
To display DB2 database and DB2 Connect performance values using the
Performance Monitor, simply choose the performance counters whose values you
want displayed from the Add to box. This box displays a list of performance
objects providing performance data. Select an object to see a list of the counters it
supplies.
A performance object can also have multiple instances. For example, the
LogicalDisk object provides counters such as “% Disk Read Time” and “Disk
Bytes/sec”; it also has an instance for each logical drive in the computer, including
“C:” and “D:”.
Related concepts:
v “Windows performance monitor introduction” on page 685
Related reference:
v “Windows performance objects” on page 687
Related concepts:
v “Windows performance monitor introduction” on page 685
When an application calls the DB2 monitor APIs, the information returned is
normally the cumulative values since the DB2 database server was started.
However, often it is useful to:
v Reset performance values
v Run a test
v Reset the values again
v Re-run the test.
To reset database performance values, use the db2perfc program:
db2perfc
db2perfc dbalias1 dbalias2 ... dbaliasn
db2perfc -d
db2perfc -d dbalias1 dbalias2 ... dbaliasn
The first example resets performance values for all active DB2 databases. The next
example resets values for specific DB2 databases. The third example resets
performance values for all active DB2 DCS databases. The last example resets
values for specific DB2 DCS databases.
The db2perfc program resets the values for ALL programs currently accessing
database performance information for the relevant DB2 database server instance
(that is, the one held in DB2INSTANCE in the session in which you run db2perfc).
Invoking db2perfc also resets the values seen by anyone remotely accessing DB2
database performance information when the db2perfc command is executed.
Note: There is a DB2 database API, sqlmrset, that allows an application to reset the
values it sees locally, not globally, for particular databases.
Related reference:
v “db2perfc - Reset database performance values command” in Command Reference
v “db2ResetMonitor API - Reset the database system monitor data” in
Administrative API Reference
IBM periodically makes documentation updates available. If you access the online
version on the DB2 Information Center at ibm.com®, you do not need to install
documentation updates because this version is kept up-to-date by IBM. If you have
installed the DB2 Information Center, it is recommended that you install the
documentation updates. Documentation updates allow you to update the
information that you installed from the DB2 Information Center CD or downloaded
from Passport Advantage as new information becomes available.
Note: The DB2 Information Center topics are updated more frequently than either
the PDF or the hard-copy books. To get the most current information, install
the documentation updates as they become available, or refer to the DB2
Information Center at ibm.com.
You can access additional DB2 technical information such as technotes, white
papers, and Redbooks™ online at ibm.com. Access the DB2 Information Management
software library site at http://www.ibm.com/software/data/sw-library/.
Documentation feedback
We value your feedback on the DB2 documentation. If you have suggestions for
how we can improve the DB2 documentation, send an e-mail to
db2docs@ca.ibm.com. The DB2 documentation team reads all of your feedback, but
cannot respond to you directly. Provide specific examples wherever possible so
that we can better understand your concerns. If you are providing feedback on a
specific topic or help file, include the topic title and URL.
Do not use this e-mail address to contact DB2 Customer Support. If you have a
DB2 technical issue that the documentation does not resolve, contact your local
IBM service center for assistance.
Related tasks:
v “Invoking command help from the command line processor” in Command
Reference
v “Invoking message help from the command line processor” in Command
Reference
v “Updating the DB2 Information Center installed on your computer or intranet
server” on page 697
Related reference:
v “DB2 technical library in hardcopy or PDF format” on page 692
Although the tables identify books available in print, the books might not be
available in your country or region.
The information in these books is fundamental to all DB2 users; you will find this
information useful whether you are a programmer, a database administrator, or
someone who works with DB2 Connect or other DB2 products.
Table 95. DB2 technical information
Name                                                      Form Number   Available in print
Administration Guide: Implementation                      SC10-4221     Yes
Administration Guide: Planning                            SC10-4223     Yes
Administrative API Reference                              SC10-4231     Yes
Administrative SQL Routines and Views                     SC10-4293     No
Call Level Interface Guide and Reference, Volume 1        SC10-4224     Yes
Call Level Interface Guide and Reference, Volume 2        SC10-4225     Yes
Command Reference                                         SC10-4226     No
Data Movement Utilities Guide and Reference               SC10-4227     Yes
Data Recovery and High Availability Guide and Reference   SC10-4228     Yes
Developing ADO.NET and OLE DB Applications                SC10-4230     Yes
Developing Embedded SQL Applications                      SC10-4232     Yes
Note: The DB2 Release Notes provide additional information specific to your
product’s release and fix pack level. For more information, see the related
links.
Related concepts:
v “Overview of the DB2 technical information” on page 691
v “About the Release Notes” in Release notes
Related tasks:
v “Ordering printed DB2 books” on page 694
Printed versions of many of the DB2 books available on the DB2 PDF
Documentation CD can be ordered for a fee from IBM. Depending on where you
are placing your order from, you may be able to order books online, from the IBM
Publications Center. If online ordering is not available in your country or region,
you can always order printed DB2 books from your local IBM representative. Note
that not all books on the DB2 PDF Documentation CD are available in print.
Related concepts:
v “Overview of the DB2 technical information” on page 691
Related reference:
v “DB2 technical library in hardcopy or PDF format” on page 692
Procedure:
To invoke SQL state help, open the command line processor and enter:
? sqlstate or ? class code
where sqlstate represents a valid five-digit SQL state and class code represents the
first two digits of the SQL state.
For example, ? 08003 displays help for the 08003 SQL state, and ? 08 displays help
for the 08 class code.
Related tasks:
v “Invoking command help from the command line processor” in Command
Reference
v “Invoking message help from the command line processor” in Command
Reference
For DB2 Version 8 topics, go to the Version 8 Information Center URL at:
http://publib.boulder.ibm.com/infocenter/db2luw/v8/.
Related tasks:
v “Setting up access to DB2 contextual help and documentation” on page 435
Procedure:
Note: Adding a language does not guarantee that the computer has the fonts
required to display the topics in the preferred language.
v To move a language to the top of the list, select the language and click the
Move Up button until the language is first in the list of languages.
3. Clear the browser cache and then refresh the page to display the DB2
Information Center in your preferred language.
On some browser and operating system combinations, you might have to also
change the regional settings of your operating system to the locale and language of
your choice.
To determine if there is an update available for the entire DB2 Information Center,
look for the 'Last updated' value on the Information Center home page. Compare
the value in your locally installed home page to the date of the most recent
downloadable update at http://www.ibm.com/software/data/db2/udb/support/
icupdate.html. You can then update your locally-installed Information Center if a
more recent downloadable update is available.
Note: Updates are also available on CD. For details on how to configure your
Information Center to install updates from CD, see the related links.
If update packages are available, use the Update feature to download the
packages. (The Update feature is only available in stand-alone mode.)
3. Stop the stand-alone Information Center, and restart the DB2 Information
Center service on your computer.
Procedure:
Note: The help_end batch file contains the commands required to safely
terminate the processes that were started with the help_start batch file.
Do not use Ctrl-C or any other method to terminate help_start.bat.
v On Linux, run the help_end script using the fully qualified path for the DB2
Information Center:
<DB2 Information Center dir>/doc/bin/help_end
Related concepts:
v “DB2 Information Center installation options” in Quick Beginnings for DB2 Servers
Related tasks:
v “Installing the DB2 Information Center using the DB2 Setup wizard (Linux)” in
Quick Beginnings for DB2 Servers
v “Installing the DB2 Information Center using the DB2 Setup wizard (Windows)”
in Quick Beginnings for DB2 Servers
You can view the XHTML version of the tutorial from the Information Center at
http://publib.boulder.ibm.com/infocenter/db2help/.
Some lessons use sample data or code. See the tutorial for a description of any
prerequisites for its specific tasks.
DB2 tutorials:
Related concepts:
v “Visual Explain overview” on page 451
Related concepts:
v “Introduction to problem determination” in Troubleshooting Guide
v “Overview of the DB2 technical information” on page 691
Personal use: You may reproduce these Publications for your personal,
noncommercial use provided that all proprietary notices are preserved. You may not
distribute, display or make derivative work of these Publications, or any portion
thereof, without the express consent of IBM.
Commercial use: You may reproduce, distribute and display these Publications
solely within your enterprise provided that all proprietary notices are preserved.
You may not make derivative works of these Publications, or reproduce, distribute
or display these Publications or any portion thereof outside your enterprise,
without the express consent of IBM.
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the Publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country/region or send inquiries, in
writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan
The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product, and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information may contain examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:
Trademarks
Company, product, or service names identified in the documents of the DB2
Version 9 documentation library may be trademarks or service marks of
International Business Machines Corporation or other companies. Information on
the trademarks of IBM Corporation in the United States, other countries, or both is
located at http://www.ibm.com/legal/copytrade.shtml.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Intel®, Itanium®, Pentium®, and Xeon® are trademarks of Intel Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Contacting IBM
To contact IBM in your country or region, check the IBM Directory of Worldwide
Contacts at http://www.ibm.com/planetwide