Netezza System Admin Guide
Release 7.2.1.1
IBM
Note
Before using this information and the product it supports, read the information in “Notices” on page D-1
Criteria for selecting distribution keys   12-7
Choose a distribution key for a subset table   12-7
Distribution keys and collocated joins   12-8
Dynamic redistribution or broadcasts   12-8
Verify distribution   12-8
Data skew   12-10
Specify distribution keys   12-10
View data skew   12-11
Clustered base tables   12-12
Organizing keys and zone maps   12-13
Select organizing keys   12-14
Reorganize the table data   12-14
Copy clustered base tables   12-15
Database statistics   12-15
Maintain table statistics automatically   12-16
GENERATE STATISTICS command   12-17
Just in Time statistics   12-17
Zone maps   12-18
Groom tables   12-20
GROOM and the nzreclaim command   12-20
Identify clustered base tables that require grooming   12-21
Organization percentage   12-22
Groom and backup synchronization   12-23
Session management   12-23
The nzsession command   12-23
Transactions   12-25
Transaction control and monitoring   12-25
Transactions per system   12-25
Transaction concurrency and isolation   12-26
Concurrent transaction serialization and queueing, implicit transactions   12-26
Concurrent transaction serialization and queueing, explicit transactions   12-27
Netezza optimizer and query plans   12-28
Execution plans   12-28
Display plan types   12-28
Analyze query performance   12-29
Query status and history   12-30

Chapter 13. Database backup and restore   13-1
General information about backup and restore methods   13-1
Backup options overview   13-2
Database completeness   13-3
Portability   13-3
Compression in backups and restores   13-4
Multi-stream backup   13-4
Multi-stream restore   13-5
Special columns   13-6
Upgrade and downgrade concerns   13-6
Compressed unload and reload   13-7
Encryption key management in backup and restore   13-7
File system connector for backup and recovery   13-7
Third-party backup and recovery solutions support   13-8
Host backup and restore   13-9
Create a host backup   13-10
Restore the host data directory and catalog   13-10
The nzbackup command   13-11
Command syntax for nzbackup   13-12
Specifying backup privileges   13-15
Examples of the nzbackup command   13-15
Backup archive directory   13-17
Incremental backups   13-18
Backup History report   13-20
Back up and restore users, groups, and permissions   13-21
The nzrestore command   13-22
The nzrestore command syntax   13-23
Specifying restore privileges   13-28
Examples of the nzrestore command   13-29
Database statistics after restore   13-30
Restore tables   13-30
Incremental restoration   13-31
Veritas NetBackup connector   13-34
Installing the Veritas NetBackup license   13-34
Configuring NetBackup for a Netezza client   13-35
Integrate Veritas NetBackup to Netezza   13-36
NetBackup troubleshooting   13-40
Procedures for backing up and restoring by using Veritas NetBackup   13-40
IBM Spectrum Protect (formerly Tivoli Storage Manager) connector   13-42
Tivoli Storage Manager backup integration   13-43
Tivoli Storage Manager encrypted backup support   13-43
Configuring the Netezza host   13-43
Configure the Tivoli Storage Manager server   13-48
Special considerations for large databases   13-54
The nzbackup and nzrestore commands with the Tivoli Storage Manager connector   13-57
Host backup and restore to the Tivoli Storage Manager server   13-57
Backing up and restoring data by using the Tivoli Storage Manager interfaces   13-58
Troubleshooting   13-60
EMC NetWorker connector   13-61
Preparing your system for EMC NetWorker integration   13-62
NetWorker installation   13-62
NetWorker configuration   13-62
NetWorker backup and restore   13-64
Host backup and restore   13-66
NetWorker troubleshooting   13-67

Chapter 14. History data collection   14-1
Types of history databases   14-1
History database versions   14-2
History-data staging and loading processes   14-2
History-data files   14-3
History log files   14-4
History event notifications   14-4
Setting up the system to collect history data   14-4
Planning for history-data collection   14-4
Creating a history database   14-5
Creating history configurations   14-5
Managing access to a history database   14-9
Managing the collection of history data   14-9
Changing the owner of a history database   14-9
The nzreclaim command   A-45
The nzrestore command   A-47
The nzrev command   A-47
The nzsession command   A-49
The nzspupart command   A-54
The nzstart command   A-56
The nzstate command   A-58
The nzstats command   A-60
The nzstop command   A-63
The nzsystem command   A-65
The nzzonemapformat command   A-68
Customer service troubleshooting commands   A-69
The nzconvertsyscase command   A-70
The nzdumpschema command   A-71
The nzinitsystem command   A-73
The nzlogmerge command   A-73
Host name and IP address changes   B-4
Rebooting the system   B-4
Reformat the host disks   B-5
Fix system errors   B-5
View system processes   B-5
Stop errant processes   B-5
Change the system time   B-6
Determine the kernel release level   B-6
Linux system administration   B-6
Display directories   B-7
Find files   B-7
Display file content   B-7
Find Netezza hardware   B-7
Time command execution   B-8
Set default command line editing   B-8
Miscellaneous commands   B-8
This equipment was tested and found to comply with the limits for a Class A
digital device, according to Part 15 of the FCC Rules. These limits are designed to
provide reasonable protection against harmful interference when the equipment is
operated in a commercial environment. This equipment generates, uses, and can
radiate radio frequency energy and, if not installed and used in accordance with
the instruction manual, might cause harmful interference to radio communications.
Operation of this equipment in a residential area is likely to cause harmful
interference, in which case the user is required to correct the interference at their
own expense.
Properly shielded and grounded cables and connectors must be used to meet FCC
emission limits. IBM® is not responsible for any radio or television interference
caused by using other than recommended cables and connectors or by
unauthorized changes or modifications to this equipment. Unauthorized changes
or modifications might void the authority of the user to operate the equipment.
This device complies with Part 15 of the FCC Rules. Operation is subject to the
following two conditions: (1) this device might not cause harmful interference, and
(2) this device must accept any interference received, including interference that
might cause undesired operation.
Responsible manufacturer:
This device is authorized to bear the EC conformity mark (CE) in accordance with
the German EMC law (EMVG).
The manufacturer responsible for compliance with the EMC regulations is:
IBM Deutschland
Technical Regulations, Department M456
IBM-Allee 1, 71137 Ehningen, Germany
Telephone: +49 7032 15-2937
Email: tjahn@de.ibm.com
This product is a Class A product based on the standard of the Voluntary Control
Council for Interference (VCCI). If this equipment is used in a domestic
environment, radio interference might occur, in which case the user might be
required to take corrective actions.
This is Class A electromagnetic compatibility equipment for business use. Sellers
and users should be aware of this requirement. This equipment is intended for use
in areas other than the home.
Install the NPS® system in a restricted-access location. Ensure that only those
people trained to operate or service the equipment have physical access to it.
Ensure that each AC power outlet into which the NPS rack plugs is installed near
the rack and remains freely accessible.
The IBM PureData® System for Analytics appliance requires a readily accessible
power cutoff. This can be a Unit Emergency Power Off (UEPO) switch, a circuit
breaker, or the complete removal of power from the equipment by disconnecting
the Appliance Coupler (line cord) from all rack PDUs.
CAUTION:
Disconnecting power from the appliance without first stopping the NPS
software and high availability processes might result in data loss and increased
service time to restart the appliance. For all non-emergency situations, follow the
documented power-down procedures in the IBM Netezza System Administrator’s
Guide to ensure that the software and databases are stopped correctly, in order, to
avoid data loss or file corruption.
High leakage current. Earth connection essential before connecting supply. Courant
de fuite élevé. Raccordement à la terre indispensable avant le raccordement au
réseau.
Homologation Statement
This product may not be certified in your country for connection by any means
whatsoever to interfaces of public telecommunications networks. Further
certification may be required by law prior to making any such connection. Contact
an IBM representative or reseller for any questions.
These topics are written for system administrators and database administrators. In
some customer environments, these roles can be the responsibility of one person or
several administrators.
You should be familiar with Netezza concepts and user interfaces, as described in
the IBM Netezza Getting Started Tips. You should also be comfortable using
command-line interfaces, Linux operating system utilities, and windows-based
administration interfaces, and with installing software on client systems to access
the Netezza appliance.
Administrator’s roles
IBM Netezza administration tasks typically fall into two categories:
System administration
Managing the hardware, configuration settings, system status, access, disk
space, usage, upgrades, and other tasks
Database administration
Managing the user databases and their content, loading data, backing up
data, restoring data, controlling access to data and permissions
In some customer environments, one person can be both the system and database
administrator to do the tasks when needed. In other environments, multiple people
might share these responsibilities, or they might own specific tasks or
responsibilities. You can develop the administrative model that works best for your
environment.
In addition to the administrator roles, there are also database user roles. A database
user is someone who has access to one or more databases and has permission to
run queries on the data that is stored within those databases. In general, database
users have access permissions to one or more user databases, or to one or more
schemas within databases, and they have permission to do certain types of tasks
and to create or manage certain types of objects within those databases.
Administration tasks
The administration tasks generally fall into these categories:
v Service level planning
v Deploying and installing Netezza clients
v Managing a Netezza system
v Managing system notifications and events
v Managing Netezza users and groups
v Managing databases
v Loading data (described in the IBM Netezza Data Loading Guide)
v Backing up and restoring databases
v Collecting and evaluating history data
v Workload management
Netezza Support and Sales representatives work with you to install and initially
configure the Netezza system in your customer environment. Typically, the initial
rollout consists of installing the system in your data center, and then configuring
the system host name and IP address to connect the system to your network and
make it accessible to users. They also work with you to do initial studies of the
system usage and query performance, and might recommend other configuration
settings or administration practices to improve the performance of, and access to,
the Netezza system for your users.
Related concepts:
“Linux users and groups required for HA” on page 4-17
The /nz directory is the top-level directory that contains the Netezza software
installation kits, data, and important information for the system and database. As a
best practice, use caution when you are viewing files in this directory or its
subfolders because unintended changes can impact the operation of the Netezza
system or cause data loss. Never delete or modify files or folders in the /nz
directory unless directed to do so by Netezza Support or an IBM representative.
Do not store large files, unrelated files, or backups in the /nz directory.
The system manager monitors the size of the /nz directory. If the /nz directory
reaches a configured usage percentage, the system manager stops the Netezza
software and logs a message in the sysmgr.log file. The default threshold is 95%,
which is specified by the value of the
sysmgr.hostFileSystemUsageThresholdToStopSystem registry setting. Do not
change the value of the registry setting unless directed to do so by Netezza
Support.
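For example, you can display the current value of the setting with a command
similar to the following (this assumes that you are logged in as the nz user and
that the nz commands are in your search path):
[nz@nzhost1 ~]$ nzsystem showRegistry | grep hostFileSystemUsageThresholdToStopSystem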
A sample sysmgr.log file message for a case where the /nz directory has reached
the configured 95% capacity threshold follows.
Error: File system /nz usage exceeded 95 threshold on rack1.host1 System will
be stopped
If the Netezza software stops and this message is in the sysmgr.log file, contact
Netezza Support for assistance to carefully review the contents of the /nz directory
and to delete appropriate files. When the /nz directory usage falls below the
configured threshold, you can start the Netezza software.
CAUTION:
If you need to change the host name or IP address information, do not use the
general Linux procedures to change this information. Contact Netezza Support
for assistance to ensure that the changes are made by using Netezza procedures
and are propagated to the high availability configuration and related services.
To change the DNS settings for your system, use the nzresolv service to manage
the DNS updates. The nzresolv service updates the resolv.conf information
stored on the Netezza host; for highly available Netezza systems, the nzresolv
service updates the information stored on both hosts. (You can log in to either host
to do the DNS updates.) You must be able to log in as the root user to update the
resolv.conf information; any Linux user such as nz can display the DNS
information by using the show option.
The Netezza system manages the DNS services as needed during actions such as
host failovers from the master host to the standby host. Never manually restart the
nzresolv service unless directed to do so by Netezza Support for troubleshooting.
A restart can cause loss of contact with the localhost DNS service, and
communication issues between the host and the system hardware components. Do
not use any of the nzresolv subcommands other than update, status, or show
unless directed to do so by Netezza Support.
To display the current DNS information for the system, do the following steps:
Procedure
1. Log in to the active host as a Linux user such as nz.
2. Enter the following command:
[nz@nzhost1 ~]$ service nzresolv show
Example
You update the DNS information by using the nzresolv service. You can change
the DNS information by using a text editor, and read the DNS information from a
file or enter it on the command line. Any changes that you make take effect
immediately (and on both hosts, for HA systems). The DNS server uses the
changes for the subsequent DNS lookup requests.
Procedure
1. Log in to either host as root.
2. Enter the following command:
[root@nzhost1 ~]# service nzresolv update
Note: If you use the service command to edit the DNS information, you must
use vi as the text editor tool, as shown in these examples. However, if you
prefer to use a different text editor, you can set the $EDITOR environment
variable and use the /etc/init.d/nzresolv update command to edit the files
by using your editor of choice.
3. Review the system DNS information as shown in the sample file.
CAUTION:
Use caution before you change the DNS information; incorrect changes can
affect the operation of the IBM Netezza system. Review any changes with
the DNS administrator at your site to ensure that the changes are correct.
To change the DNS information by reading the information from an existing text
file, do the following steps:
Procedure
1. Log in to either host as root.
2. Create a text file with your DNS information. Make your text file similar to the
following format:
search yourcompany.com
nameserver 1.2.3.4
nameserver 1.2.5.6
3. Enter the following command, where file is the fully qualified path name to
the text file:
[root@nzhost1 ~]# service nzresolv update file
To change the DNS information by entering the information from the command
prompt, do the following steps:
Procedure
1. Log in to either host as root.
2. Enter the following command (note the dash character at the end of the
command):
[root@nzhost1 ~]# service nzresolv update -
The command prompt proceeds to a new line where you can enter the DNS
information. Enter the complete DNS information because the text that you
type replaces the existing information in the resolv.conf file.
3. After you finish typing the DNS information, press one of the following key
sequences:
v Control-D to save the information that you entered and exit.
v Control-C to exit without saving any changes.
To display the current status of the Netezza nzresolv service, do the following
steps:
Procedure
1. Log in to the active host as a Linux user such as nz.
2. Enter the following command:
[nz@nzhost1 ~]$ service nzresolv status
Example
If you log in to the standby host of the Netezza system and run the command, the
status message is Configured for upstream resolv.conf.
Remote access
IBM Netezza systems are typically installed in a data center, which is often highly
secured from user access and sometimes in a geographically separate location.
Thus, you might need to set up remote access to Netezza so that your users can
connect to the system through the corporate network. Common ways to remotely
log on to another system through a shell (Telnet, rlogin, or rsh) do not encrypt data
that is sent over the connection between the client and the server. Consequently,
the type of remote access you choose depends upon the security considerations at
your site. Telnet is the least secure and SSH (Secure Shell) is the most secure.
If you allow remote access through Telnet, rlogin, or rsh, you can more easily
manage this access through the xinetd daemon (Extended Internet Services). The
xinetd daemon starts programs that provide Internet services. This daemon uses a
configuration file, /etc/xinetd.conf, to specify services to start. Use this file to
enable or disable remote access services according to the policy at your site.
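For example, on a Red Hat host you could disable the Telnet service that xinetd
manages by using commands similar to the following (the service name and the
policy depend on your site):
[root@nzhost1 ~]# chkconfig telnet off
[root@nzhost1 ~]# service xinetd reload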
If you use SSH, it does not use xinetd, but rather its own configuration files. For
more information, see the Red Hat documentation.
Administration interfaces
IBM Netezza offers several ways or interfaces that you can use to perform the
various system and database management tasks:
v Netezza commands (nz* commands) are installed in the /nz/kit/bin directory
on the Netezza host. For many of the nz* commands, you must be able to log on
to the Netezza system to access and run those commands. In most cases, users
log in as the default nz user account, but you can create other Linux user
accounts on your system. Some commands require you to specify a database
user account, password, and database to ensure that you have permissions to do
the task.
v The Netezza CLI client kits package a subset of the nz* commands that can be
run from Windows and UNIX client systems. The client commands might also
The nz* commands are installed and available on the Netezza system, but it is
more common for users to install Netezza client applications on client
workstations. Netezza supports various Windows and UNIX client operating
systems. Chapter 2, “Netezza client software installation,” on page 2-1 describes
the Netezza clients and how to install them. Chapter 3, “Netezza administration
interfaces,” on page 3-1 describes how to get started by using the administration
interfaces.
The client interfaces provide you with different ways to do similar tasks. While
most users tend to use the nz* commands or SQL commands for tasks, you can
use any combination of the client interfaces, depending on the task or your
workstation environment, or interface preferences.
Related concepts:
Chapter 2, “Netezza client software installation,” on page 2-1
This section describes how to install the Netezza CLI clients and the NzAdmin
tool.
There are several Netezza documents that offer more specialized information about
features or tasks. For more information, see IBM Netezza Getting Started Tips.
In most cases, the only applications that IBM Netezza administrators or users must
install are the client applications to access the Netezza system. Netezza provides
client software that runs on various systems such as Windows, Linux, Solaris,
AIX®, and HP-UX systems.
The instructions to install and use the Netezza Performance Portal are in the IBM
Netezza Performance Portal User's Guide, which is available with the software kit for
that interface.
This section does not describe how to install the Netezza system software or how
to upgrade the Netezza host software. Typically, Netezza Support works with you
for any situations that might require software reinstallations, and the steps to
upgrade a Netezza system are described in the IBM Netezza Software Upgrade Guide.
If your users or their business reporting applications access the Netezza system
through ODBC, JDBC, or OLE-DB Provider APIs, see the IBM Netezza ODBC,
JDBC, OLE DB, and .NET Installation and Configuration Guide for detailed
instructions on the installation and setup of these data connectivity clients.
Related concepts:
“Administration interfaces” on page 1-8
The following table lists the supported operating systems and revisions for the
Netezza CLI clients.
Table 2-1. Netezza supported platforms

Operating system                                      32-bit         64-bit
Windows
  Windows 2008, Vista, 7, 8                           Intel / AMD    Intel / AMD
  Windows Server 2012, 2012 R2                        N/A            Intel / AMD
Linux
  Red Hat Enterprise Linux 5.2, 5.3, 5.5, 5.9;
  and 6 through 6.5                                   Intel / AMD    Intel / AMD
  Red Hat Enterprise Linux 6.2+                       N/A            PowerPC®
  Red Hat Enterprise Linux 7.1                        N/A            POWER8® LE mode
The Netezza client kits are designed to run on the proprietary hardware
architecture for the vendor. For example, the AIX, HP-UX, and Solaris clients are
intended for the proprietary RISC architecture. The Linux client is intended for
Red Hat or SUSE on the 32-bit Intel architecture.
Note: Typically, the Netezza clients also support the update releases for each of the
OS versions listed in the table, unless the OS vendor introduced architecture
changes in the update.
If you are installing the clients on 64-bit operating systems, there are some
additional steps to install a second, 64-bit client package. The IBM Netezza clients
are 32-bit operating system executables and they require 32-bit libraries that are not
provided with the clients. If the libraries are not already installed on your system,
you must obtain and install the libraries using your operating system update
process.
Procedure
1. Obtain the nz-platformclient-version.archive client package from the IBM
Fix Central site and download it to the client system. Use or create a new,
empty directory to reduce any confusion with other files or directories. There
are several client packages available for different common operating system
types, as described in “Client software packages” on page 2-1. Make sure that
Note: On an HP-UX 11i client, /bin/sh might not be available. You can use the
command form sh ./unpack to unpack the client.
The unpack command checks the client system to ensure that it supports the
CLI package and prompts you for an installation location. The default is
/usr/local/nz for Linux, but you can install the CLI tools to any location on
the client. The program prompts you to create the directory if it does not
already exist. Sample command output follows:
------------------------------------------------------------------
IBM Netezza -- NPS Linux Client 7.1
(C) Copyright IBM Corp. 2002, 2013 All Rights Reserved.
------------------------------------------------------------------
Validating package checksum ... ok
Where should the NPS Linux Client be unpacked? [/usr/local/nz]
Directory '/usr/local/nz' does not exist; create it (y/n)? [y] Enter
0% 25% 50% 75% 100%
|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Unpacking complete.
5. If your client has a 64-bit operating system, change to the linux64 directory
and run the unpack command to install the additional 64-bit files: ./unpack.
The unpack command prompts you for an installation location. The default is
/usr/local/nz for Linux, but you should use the same location that you used
for the 32-bit CLI files in the previous step. Sample command output follows:
------------------------------------------------------------------
IBM Netezza -- NPS Linux Client 7.1
(C) Copyright IBM Corp. 2002, 2013 All Rights Reserved.
------------------------------------------------------------------
Validating package checksum ... ok
Where should the NPS Linux Client be unpacked? [/usr/local/nz]
Installing in an existing directory. Changing permissions to
overwrite existing files...
0% 25% 50% 75% 100%
|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Unpacking complete.
Results
The client installation steps are complete, and the Netezza CLI commands are
installed to your specified destination directory. The NPS commands are located in
the bin directory where you unpacked the NPS clients. If you are using a 64-bit
operating system on your workstation, note that there is a 64-bit nzodbcsql
command in the bin64 directory for testing the SQL command connections.
Test to make sure that you can run the client commands. Change to the bin
subdirectory of the client installation directory (for example, /usr/local/nz/bin).
Run a sample command such as the nzds command to verify that the command
succeeds or to identify any errors.
./nzds -host nzhost -u user -pw password
The command displays a list of the data slices on the target NPS system. If the
command runs without error, your client system has the required libraries and
packages to support the Netezza clients. If the command fails with a library or
other error, the client may require some additional libraries or shared objects.
For example, on a Red Hat Enterprise Linux 64-bit client system, you could see an
error similar to the following:
[root@myrhsystem bin]# ./nzds
-bash: ./nzds: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
For example, on a SUSE 10/11 64-bit client system, you could see an error similar
to the following:
mylinux:/usr/local/nz/bin # ./nzds
./nzds: error while loading shared libraries: libssl.so.4: cannot open shared
object file: No such file or directory
These errors indicate that the client is missing 32-bit library files that are required
to run the NPS clients. Identify the packages that provide the library and obtain
those packages. You may need assistance from your local workstation IT
administrators to obtain the operating system packages for your workstation.
To identify and obtain the required Red Hat packages, you could use a process
similar to the following.
v Use the yum provides command and specify the file name to see which package
provides the file that could not be found (ld-linux.so.2 in this example).
yum provides ld-linux.so.2
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use
subscription-manager to register.
RHEL64 | 3.9 kB 00:00 ...
glibc-2.12-1.107.el6.i686 : The GNU libc libraries
Repo : RHEL64
Matched from:
Other : ld-linux.so.2
In this example, the missing package is glibc-2.12-1.107.el6.i686.
v In some cases, the NPS command could report an error for a missing libssl file.
You can use the yum provides command to obtain more information about the
packages that contain the library, and if any of the files already exist on your
workstation.
Based on the missing libraries and packages, use the following steps to obtain the
Red Hat packages.
v Mount the Red Hat distribution DVD or ISO file to the client system. Insert the
DVD into the DVD drive.
v Open a terminal window and log in as root.
v Run the following commands:
[root@myrhsystem]# mkdir /mnt/cdrom
[root@myrhsystem]# mount -o ro /dev/cdrom /mnt/cdrom
v Create the text file server.repo in the /etc/yum.repos.d directory.
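For example, a minimal server.repo file that uses the mounted DVD as a
repository might look similar to the following (the repository name is an
example):
[rhel-dvd]
name=Red Hat Enterprise Linux DVD
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=0
You could then install the 32-bit package that the yum provides output
identified, for example:
[root@myrhsystem]# yum install glibc.i686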
To identify and obtain the required SUSE packages, you could use a process
similar to the following.
v Log in to the SUSE system as root or a superuser.
v If the test NPS command failed with the error that libssl.so.4 or
libcrypto.so.4 or both could not be found, you might be able to resolve the
issue by adding a symbolic link to the missing file from the NPS client
installation directory (for example, /usr/local/nz/lib). Use the ls /lib/libssl*
command to list the available libraries in the standard OS directories. You could
then create symbolic links to one of your existing libssl.so and libcrypto.so
files by using commands similar to the following:
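For example, assuming that libssl.so.0.9.8 and libcrypto.so.0.9.8 are the
versions that the ls command found in /lib on your system:
mylinux:/usr/local/nz/lib # ln -s /lib/libssl.so.0.9.8 libssl.so.4
mylinux:/usr/local/nz/lib # ln -s /lib/libcrypto.so.0.9.8 libcrypto.so.4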
If the error indicates that you are missing other libraries or packages, use the
following steps to obtain the SUSE packages.
v Open a terminal window and log in as root.
v Run the yast command to open the YaST interface.
v On the YaST Control Center, select Software and go to the software repositories
to configure and enable a DVD, a server, or an ISO file as a repository source.
Select the appropriate source for your SUSE environment. Consult with your IT
department about the policies for package updates in your environment.
v On the Software tab, go to Software Management and search for the required
package or library such as glibc-32bit in this example.
v Click Accept to install the required package.
v Exit YaST by clicking Quit.
To run the CLI commands on Solaris, you must include /usr/local/lib in your
environment variable LD_LIBRARY_PATH. Additionally, to use the ODBC driver on
Linux, Solaris, or HP-UX, you must include /usr/local/nz/lib, or the directory
path to nz/lib where you installed the Netezza CLI tools.
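For example, in a Bourne-style shell you could set the variable with a command
similar to the following (adjust the nz/lib path to match your installation
location):
$ LD_LIBRARY_PATH=/usr/local/lib:/usr/local/nz/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH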
Related reference:
“Command locations” on page 3-3
To remove the client CLI kits from a UNIX system, complete the following steps:
Procedure
1. Change to the directory where you installed the clients. For example,
/usr/local/nz.
2. Delete the nz commands manually.
If you are using or viewing object names that use UTF-8 encoded characters, your
Windows client systems require the Microsoft universal font to display the
characters within the NzAdmin tool. The Arial Unicode MS font is installed by
default on some Windows systems, but you might have to run a manual
installation for other Windows platforms such as 2003 or others. For more
information, see the Microsoft support topic at http://office.microsoft.com/en-us/
help/hp052558401033.aspx.
To install the IBM Netezza tools on Windows, complete the following steps:
Procedure
1. Insert the IBM Netezza Client Components for Windows DVD in your media drive
and go to the admin directory. If you downloaded the client package
(nzsetup.exe) to a directory on your client system, change to that directory.
2. Double-click or run nzsetup.exe.
This program is a standard installation program that consists of a series of
steps in which you select and enter information that is used to configure the
installation. You can cancel the installation at any time.
Results
The installation program displays a license agreement, which you must accept to
install the client tools. You can also specify the following information:
Destination folder
You can use the default installation folder or specify an alternative
location. The default folder is C:\Program Files\IBM Netezza Tools. If you
choose a different folder, the installation program creates the folder if one
does not exist.
Setup type
Select the type of installation: typical, minimal, or custom.
Typical
Install the nzadmin program, the help file, the documentation, and
the console utilities, including the loader.
Minimal
Install the nzadmin program and help files.
Custom
Displays a screen where you can select to install any combination
of the administration application, console applications, or
documentation.
After you complete the selections and review the installation options, the client
installer creates the Netezza Tools folder, which has several subfolders. You cannot
change the subfolder names or locations.
The installer stores copies of the software licenses in the installation directory,
which is usually C:\Program Files\IBM Netezza Tools (unless you specified a
different location).
The installation program adds the Netezza commands to the Windows Start >
Programs menu. The program group is IBM Netezza and it has the suboptions
IBM Netezza Administrator and Documentation. The IBM Netezza Administrator
command starts the NzAdmin tool. The Documentation command lists the PDFs of
the installed documentation.
To use the commands in the bin directory, you must open a Windows
command-line prompt (a DOS prompt).
Environment variables
The following table lists the operating system environment variables that the
installation tool adds for the IBM Netezza console applications.
Table 2-2. Environment variables

Variable   Operation   Setting
PATH       append      <installation directory>\bin
NZ_DIR     set         Installation directory (for example, C:\Program Files\IBM Netezza Tools)
You can remove or uninstall the Windows tools by using the Windows Add or
Remove Programs interface in the Control Panel. The uninstallation program
removes all folders, files, menu commands, and environment variables. The
registry entries that are created by other IBM Netezza applications, however, are
not removed.
To remove the IBM Netezza tools from a Windows client, complete the following
steps:
Procedure
1. Click Start > Control Panel > Uninstall. The menu options can vary with each
Windows operating system type.
IBM Netezza commands that display object names such as nzload, nzbackup, and
nzsession can also display non-ASCII characters, but they must operate on a
UTF-8 terminal or DOS window to display characters correctly.
For UNIX clients, make sure that the terminal window in which you run these nz
commands uses a UTF-8 locale. The output in the terminal window might not
align correctly.
As an alternative to these DOS setup steps, the input/output from the DOS clients
can be piped from/to nzconvert and converted to a local code page, such as 932
for Japanese.
On a Windows system, the fonts that you use for your display must meet the
Microsoft requirements as outlined on the Support site at http://
support.microsoft.com/default.aspx?scid=kb;EN-US;Q247815.
After you define (or modify) these settings in the postgresql.conf file, you must
restart the Netezza software to apply the changes.
Netezza personnel, if granted access for remote service, use port 22 for SSH, and
ports 20 and 21 for FTP.
For security or port conflict reasons, you can change one or more default port
numbers for the IBM Netezza database access.
Important: Be careful when you are changing the port numbers for the Netezza
database access. Errors can severely affect the operation of the Netezza system. If
you are not familiar with editing resource shell files or changing environment
variables, contact Netezza Support for assistance.
Before you begin, make sure that you choose a port number that is not already in
use. To check the port number, you can review the /etc/services file to see
whether the port number is specified for another process. You can also use the
netstat | grep port command to see whether the designated port is in use.
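For example, to check whether the default Netezza database port is already in
use, you could run a command similar to the following (this example assumes
the default port number of 5480):
[nz@nzhost ~]$ netstat -an | grep 5480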
To change the default port numbers for your Netezza system, complete the
following steps:
Procedure
1. Log in to the Netezza host as the nz user.
2. Change to the /nz/kit/sys/init directory.
3. Create a backup of the current nzinitrc.sh file:
[nz@nzhost init]$ cp nzinitrc.sh nzinitrc.sh.backup
4. Review the nzinitrc.sh file to see whether the Netezza port or ports that are
listed in Table 2-3 on page 2-10 that you want to change are present in the file.
For example, you might find a section that looks similar to the following, or
you might find that these variables are defined separately within the
nzinitrc.sh file.
# Application Port Numbers
# ------------------------
Tip: You can append the contents of the nzinitrc.sh.sample file to the
nzinitrc.sh file to create an editable section of variable definitions. You must
be able to log in to the Netezza host as the root user; then, change to the
/nz/kit/sys/init directory and run the following command:
[nz@nzhost init]$ cat nzinitrc.sh.backup nzinitrc.sh.sample > nzinitrc.sh
Some Netezza commands such as nzsql and nzload have a -port option that
allows the user to specify the DB access port. In addition, users can create local
definitions of the environment variables to specify the new port number.
For a Linux system, you can define a session-level variable by using a command
similar to the following format:
$ NZ_DBMS_PORT=5486; export NZ_DBMS_PORT
Encrypted passwords
Database user accounts must be authenticated during access requests to the IBM
Netezza database. For user accounts that use local authentication, Netezza stores
the password in encrypted form in the system catalog. For more information about
encrypting passwords on the host and the client, see the IBM Netezza Advanced
Security Administrator's Guide.
Local authentication requires a password for every account. If you use LDAP
authentication, a password is optional. During LDAP authentication, Netezza uses
the services of an LDAP server in your environment to validate and verify Netezza
database users.
v When you are using the Netezza CLI commands, the clear-text password must
be entered on the command line. You can set the environment variable
NZ_PASSWORD to avoid typing the password on the command line, but the
variable is stored in clear text with the other environment variables.
v To avoid displaying the password on the command line, in scripts, or in the
environment variables, you can use the nzpassword command to create a locally
stored encrypted password.
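The typical form of the command is similar to the following (the user,
password, and host values shown are placeholders):
nzpassword add -u user -pw password -host hostname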
Where:
v The user name is the Netezza database user name in the Netezza system catalog.
If you do not specify the user name on the command line, the nzpassword
command uses the environment variable NZ_USER.
v The password is the Netezza database user password in the Netezza system
catalog or the password that is specified in the environment variable
NZ_PASSWORD. If you do not supply a password on the command line or in the
environment variable, the system prompts you for a password.
v The host name is the Netezza host. If you do not specify the host name on the
command line, the nzpassword command uses the environment variable NZ_HOST.
You can create encrypted passwords for any number of user name/host pairs.
When you use the nzpassword add command to cache the password, quotation
marks are not required around the user name or password values. You must only
qualify the user name or password with a surrounding set of single quotation
mark, double quotation mark pairs (for example, '"Bob"') if the value is
case-sensitive. If you specify quoted or unquoted names or passwords in
nzpassword or other nz commands, you must use the same quoting style in all
cases.
If you qualify a user name that is not case-sensitive with quotation marks (for
example '"netezza"'), the command might still complete successfully, but it might
not work in all command cases.
Stored passwords
If client users use the nzpassword command to store database user passwords on a
client system, they can supply only a database user name and host on the
command line. Users can also continue to enter a password on the command line
if displaying clear-text passwords is not a concern for security.
If you supply a password on the command line, it takes precedence over the
environment variable NZ_PASSWORD. If the environment variable is not set, the
system checks the locally stored password file. If there is no password in this file
and you are using the nzsql command, the system prompts you for a password;
otherwise, the authentication request fails.
In all cases, whether you use the -pw option on the command line, the
NZ_PASSWORD environment variable, or the password that is stored locally
through the nzpassword command, IBM Netezza compares the password against
the entry in the system catalog for local authentication or against the LDAP or
Kerberos account definition. The authentication protocol is the same, and Netezza
never sends clear-text passwords over the network.
In release 6.0.x, the encryption that is used for locally encrypted passwords
changed. In previous releases, Netezza used the Blowfish encryption routines;
release 6.0 now uses the Advanced Encryption Standard AES-256 standard. When
you cache a password by using a release 6.0 client, the password is saved in
AES-256 format unless there is an existing password file in Blowfish format. In that
case, new stored passwords are saved in Blowfish format.
If you upgrade to a release 6.0.x or later client, the client can support passwords in
either the Blowfish format or the AES-256 format. If you want to convert your
existing password file to the AES-256 encryption format, you can use the
nzpassword resetkey command to update the file. If you want to convert your
password file from the AES-256 format to the Blowfish format, use the nzpassword
resetkey -none command.
Important: Older clients, such as those for release 5.0.x and those clients earlier
than release 4.6.6, do not support AES-256 format passwords. If your password file
is in AES-256 format, the older client commands prompt for a password, which can
For information about the Netezza Performance Portal, see the IBM Netezza
Performance Portal User's Guide, which is available with the software kit for that
interface.
In general, the Netezza CLI commands are used most often for the various
administration tasks. Many of the tasks can also be performed by using SQL
commands or the interactive interfaces. Throughout this publication, the primary
task descriptions use the CLI commands and reference other ways to do the same
task.
You can use Netezza CLI commands (also called nz commands) to monitor and
manage a Netezza system. Most nz* commands are issued on the Netezza host
system. Some are included with the Netezza client kits, and some are available in
optional support toolkits and other packages. This publication describes the host
and client nz commands.
Note: When investigating problems, Netezza support personnel might ask you to
issue other internal nz commands that are not listed.
Table 3-1. Command summary

                                                      Host or Client Kit Availability
Command      Description                              Netezza  Linux   Solaris  HP      AIX     Windows
                                                      Host     Client  Client   Client  Client  Client
nzbackup     Backs up an existing database.           v
nzcontents   Displays the revision and build number
             of all the executable files, plus the
             checksum of Netezza binaries.            v
nzconvert    Converts character encodings for
             loading with the nzload command or
             external tables.                         v        v       v        v       v       v
Command locations
The following table shows the default location of each CLI command and in which
of the host and client kits they are available:
Add the appropriate bin directory to your search path to simplify command
invocation.
Related concepts:
“Path for Netezza CLI client commands” on page 2-6
Command syntax
All IBM Netezza CLI commands have the following top-level syntax options:
For many Netezza CLI commands you can specify a timeout. This time is the
amount of time the system waits before it abandons the execution of the command.
If you specify a timeout without a value, the system waits 300 seconds. The
maximum timeout value is 100 million seconds.
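For example, the nzstate command can wait for the system to come online and
give up after a specified number of seconds; the following is a sketch with sample
values:
nzstate waitfor -type online -timeout 360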
Issuing commands
To issue an nz command, you must have access to the IBM Netezza system (either
directly on the Netezza KVM or through a remote shell connection) or you must
install the Netezza client kit on your workstation. If you are accessing the Netezza
system directly, you must be able to log in by using a Linux account (such as nz).
While some of the nz commands can operate and display information without
additional access requirements, some commands and operations require that you
specify a Netezza database user account and password. The account might also
require appropriate access and administrative permissions to display information
or process a command.
Note: In this example, you did not have to specify a host, user, or password.
The command displayed information that was available on the local Windows
client.
v To back up a Netezza database (you must run the command while logged in to
the Netezza system, as this is not supported from a client):
[nz@npshost ~]$ nzbackup -dir /home/user/backups -u user -pw password -db db1
Backup of database db1 to backupset 20090116125409 completed successfully.
Identifiers in commands
When you use the IBM Netezza commands and specify identifiers for users,
passwords, database names, and other objects, you can pass normal identifiers that
However, if you use delimited identifiers, the supported way to pass them on the
Linux command line is to use the following syntax:
'\'Identifier\''
The syntax is single quotation mark, backslash, single quotation mark, identifier,
backslash, single quotation mark, single quotation mark. This syntax protects the
quotation marks so that the identifier remains quoted in the Netezza system.
Throughout this publication, SQL commands are shown in uppercase (for example,
CREATE USER) to stand out as SQL commands. The commands are not
case-sensitive and can be entered by using any letter casing. Users must have
Netezza database accounts and applicable object or administrative permissions to
do tasks. For detailed information about the SQL commands and how to use them
to do various administrative tasks, see the IBM Netezza Database User’s Guide.
The following table describes the nzsql command parameters. For more
information about the command parameters and how to use the command, see the
IBM Netezza Database User’s Guide.
Table 3-2. nzsql command parameters

Parameters       Description
-a               Echo all input from a script.
-A               Use unaligned table output mode. This is equivalent to specifying
                 -P format=unaligned.
-c <query>       Run only a single query (or slash command) and exit.
-d <dbname> or   Specify the name of the database to which to connect. If you do
-D <dbname>      not specify this parameter, the nzsql command uses the value
                 specified for the NZ_DATABASE environment variable (if it is
                 specified) or prompts you for a password (if it is not).
Within the nzsql command interpreter, enter the following slash commands for
help about or to run a command:
\h List all SQL commands.
\h <command>
Display help about the specified SQL command.
\? List and display help about all slash commands.
Starting in NPS release 7.2.1, the nzsql command is included as part of the
Windows client kit. In a Windows environment, note that there are some
behavioral differences when users press the Enter key or the Control-C key
sequence compared with a UNIX nzsql command line environment. The Windows
command prompt environment does not support many of the common UNIX
command formats and options. However, if your Windows client is using a Linux
environment like cygwin or others, the nzsql.exe command could support more of
the UNIX-only command line options noted in the documentation.
In a UNIX environment, if you are typing a multiline SQL query into the nzsql
command line shell, the Enter key acts as a newline character to accept input for
the query until you type the semi-colon character and press Enter. The shell
prompt also changes from => to -> for the subsequent lines of the input.
MYDB.SCH(USER)=> select count(*) (press Enter)
MYDB.SCH(USER)-> from ne_part (press Enter)
MYDB.SCH(USER)-> where p_retailprice < 950.00 (press Enter)
MYDB.SCH(USER)-> ; (press Enter)
COUNT
-------
1274
(1 row)
In a UNIX environment, if you press Control-C, the entire query is cancelled and
you return to the command prompt:
MYDB.SCH(USER)=> select count(*) (press Enter)
MYDB.SCH(USER)-> from ne_part (press Enter)
MYDB.SCH(USER)-> where p_retailprice < 950.00 (press Control-C)
MYDB.SCH(USER)=>
In a Windows client environment, if you are typing a multiline SQL query into the
nzsql command line shell, the Enter key acts similarly as a newline character to
accept input for the query until you type the semi-colon character and press Enter.
MYDB.SCH(USER)=> select count(*) (press Enter)
MYDB.SCH(USER)-> from ne_part (press Enter)
MYDB.SCH(USER)-> where p_retailprice < 950.00 (press Enter)
MYDB.SCH(USER)-> ; (press Enter)
COUNT
-------
1274
(1 row)
However, if you press Control-C (or Control-Break) on a continuation line, only
that input line is cancelled, not the entire query. For example, if you press
Control-C instead of Enter on the third input line above, the WHERE clause is
cancelled and the query results are larger without the restriction. On a single
input line (where the prompt is =>), note that Control-C cancels the query and you
return to the nzsql command prompt.
MYDB.SCH(USER)=> select count(*) from ne_part (press Control-C)
MYDB.SCH(USER)=>
When you run the nzsql command on a Windows client, you could see the error
more not recognized as an internal or external command. This error occurs
because nzsql uses the more command to process the query results. The error
indicates that the nzsql command could not locate the more command on your
Windows client.
To correct the problem, add the more.com command executable to your client
system's PATH environment variable. Each Windows OS version has a slightly
different way to modify the environment variables, so refer to your Windows
documentation for specific instructions. On a Windows 7 system, you could use a
process similar to the following:
v Click Start, and then type environment in the search field. In the search results,
click Edit the system environment variables. The System Properties dialog
opens and displays the Advanced tab.
v Click Environment variables. The Environment Variables dialog opens.
v In the System variables list, select the Path variable and click Edit. The Edit
System Variable dialog opens.
v Place the cursor at the end of the Variable value field. You can click anywhere in
the field and then press End to get to the end of the field.
v Append the value C:\Windows\System32; to the end of the Path field. Make
sure that you use a semi-colon character and type a space character at the end of
the string. If your system has the more.com file in a directory other than
C:\Windows\System32, use the pathname that is applicable on your client.
v Click OK in the Edit System Variable dialog, then click OK in the Environment
Variables dialog, then click OK in the System Properties dialog.
After you make this change, the nzsql command should run without displaying
the more not recognized error.
On Windows clients, you can use the up-arrow key to display the commands that
ran previously.
By default, an nzsql batch session continues even if the system encounters errors.
You can control this behavior with the ON_ERROR_STOP variable, for example:
nzsql -v ON_ERROR_STOP=
You can also toggle batch processing with a SQL script. For example:
\set ON_ERROR_STOP
\unset ON_ERROR_STOP
You can use the $HOME/.nzsqlrc file to store values, such as the ON_ERROR_STOP,
and have it apply to all future nzsql sessions and all scripts.
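For example, a minimal $HOME/.nzsqlrc file that enables this behavior for all of
your sessions might contain a single line similar to the following (a sketch,
assuming that the file accepts the same \set syntax that you use within a session):
\set ON_ERROR_STOP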
The following table describes the slash commands that display information about
objects or privileges within the database, or within the schema if the system
supports multiple schemas.
Table 3-3. The nzsql slash commands
Command Description
\d <object> Describe the named object such as a table, view, or
sequence
\da[+] List user-defined aggregates. Specify + for more detailed
information.
\df[+] List user-defined functions. Specify + for more detailed
information.
\de List temp tables.
\dg List groups (both user and resource groups) except
_ADMIN_.
\dG List user groups and their members.
\dGr List resource groups to which at least one user has been
assigned, including _ADMIN_, and the users assigned to
them.
\di List indexes.
\dm List materialized views
\ds List sequences.
\dt List tables.
\dv List views.
\dx List external tables.
\dy List synonyms.
\dSi List system indexes.
\dSs List system sequences.
\dSt List system tables.
\dSv List system views.
\dMi List system management indexes.
\dMs List system management sequences.
\dMt List system management tables.
\dMv List system management views.
\dp <user> List the privileges that were granted to a user either
directly or by membership in a user group.
Note: Starting in Release 7.0.3, the nzsql environment prompt has changed. As
shown in the example command, the prompt now shows the database and schema
(mydb.myschema) to which you are connected. For systems that do not support
multiple schemas, there is only one schema that matches the name of the user who
created the database. For systems that support multiple schemas within a database,
the schema name will match the current schema for the connection.
To suppress the row count information, you can use the nzsql -r command when
you start the SQL command-line session. When you run a query, the output does
not show a row count:
mydb.myschema(myuser)=> select count(*) from nation;
COUNT
-------
25
You can use the NO_ROWCOUNT session variable to toggle the display of the row
count information within a session. By default, the row count is displayed:
mydb.myschema(myuser)=> select count(*) from nation;
COUNT
-------
25
(1 row)
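For example, you might toggle the setting within a session as follows (a sketch,
assuming that NO_ROWCOUNT is set and unset in the same way as ON_ERROR_STOP):
mydb.myschema(myuser)=> \set NO_ROWCOUNT
mydb.myschema(myuser)=> select count(*) from nation;
COUNT
-------
25
mydb.myschema(myuser)=> \unset NO_ROWCOUNT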
v Run the nzadmin.exe file from a command window. To bypass the login dialog,
enter the following login information:
– -host or /host and the name or IP address of the Netezza host.
– -user or /user and a Netezza user name. The name you specify can be
delimited. A delimited user name is contained in quotation marks.
– -pw or /pw and the password of the specified user. To specify that a saved
password is to be used, enter -pw without entering a password string.
You can specify these parameters in any order, but you must separate them by
spaces or commas. If you specify:
– All three parameters, NzAdmin bypasses the login dialog and connects you
to the host that you specify.
– Fewer than three parameters, NzAdmin displays the login dialog and prompts
you to complete the remaining fields.
When you log in to the NzAdmin tool you must specify the name of the host, your
user name, and your password. The drop-down list in the host field displays the
host addresses or names that you specified in the past. If you choose to save the
password on the local system, when you log in again, you need to enter only the
host and user names.
At the top of the navigation pane there are tabs that you can use to select the view
type:
System
The navigation pane displays components related to system hardware such
as SPA units, SPU units, and data slices.
Database
The navigation pane displays components related to database processing
such as databases, users, groups, and database sessions.
In the status bar at the bottom of the window, the NzAdmin tool displays your
user name and the duration (days, hours, and minutes) of the current NzAdmin
session or, if the host system is not online, a message indicating this.
You can access commands by using the menu bar or the toolbar, or by
right-clicking an object and using its pop-up menu.
For example, as you move the mouse pointer over the image of a SPA unit, a tool
tip displays the slot number, hardware ID, role, and state of each of the SPUs that
comprise it. Clicking a SPU displays the SPU status window and selects the
corresponding object in the tree view shown in the navigation pane.
Status indicators
Each component has a status indicator:
Table 3-4. Status indicators
Indicator Status Description
Normal The component is operating normally.
Failed The component is down, has failed, or is likely to fail. For example,
if two fans on the same SPA are down, the SPA is flagged as being
likely to fail.
Missing The component is missing and so no state information is available
for it.
Command Description
File > New Create a new database, table, view, materialized
view, sequence, synonym, user, or group. Available
only in the Database view.
File > System State Change the system state.
File > Reconnect Reconnect to the system with a different host name,
address, or user name.
File > Exit Exit the NzAdmin tool.
View > Toolbar Show or hide the toolbar.
View > Status Bar Show or hide the status bar.
View > System Objects Show or hide the system tables and views, and the
object privilege lists in the Object Privileges window.
View > SQL Statements Display the SQL window, which shows a subset of
the SQL commands run in this session.
View > Refresh Refresh the current view. This can be either the
System or Database view.
Tools > Workload Management Display workload management information:
Performance
Summary, history, and graph workload
management information.
Settings
The system defaults that determine the
limits on session timeout, row set, query
timeout, and session priority; and the
resource allocation that determines resource
usage among groups.
Tools > Table Skew Display any tables that meet or exceed a specified
skew threshold.
Tools > Table Storage Display table and materialized view storage usage
by database or by user.
Tools > Query History Configuration Display a window that you can use to create
and alter history configurations, and to set the current configuration.
Tools > Default Settings Display the materialized view refresh threshold.
Tools > Options Display the Preferences tab where you can set the
object naming preferences and whether you want to
automatically refresh the NzAdmin window.
Help > NzAdmin Help Display the online help for the NzAdmin tool.
Help > About NzAdmin Display the NzAdmin and Netezza revision
numbers and copyright text.
Administration commands
You can access system and database administration commands from both the tree
view and the status pane of the NzAdmin tool. In either case, a pop-up menu lists
the commands that can be issued for the selected components.
v To activate a pop-up menu, right-click a component in a list.
You can manually refresh the current (System or Database) view by clicking the
refresh icon on the toolbar, or by choosing Refresh from a menu. In addition, you
can specify that both views are to be periodically automatically refreshed, and the
refresh interval. To do this:
Procedure
1. In the main menu, click Tools > Options.
2. In the Preferences tab, enable automatic refresh and specify a refresh interval.
Results
The refresh interval you specify remains in effect until you change it.
To reduce communication with the server, the NzAdmin tool refreshes data based
on the item you select in the left pane. The following table lists the items and
corresponding data that is retrieved on refresh.
Table 3-5. Automatic refresh
Selected item Data retrieved
Server (system view): All topology and hardware state information.
v SPA Units
v SPA ID n
v SPU units
Event rules Event rules.
If the NzAdmin tool is busy communicating with the server (for example, if it is
processing a user command or doing a manual refresh), it does not perform an
automatic refresh.
They are supported by a large and active community for improvements and fixes,
and they also offer the flexibility for Netezza to add corrections or improvements
on a faster basis, without waiting for updates from third-party vendors.
All the Netezza models except the Netezza 100 are HA systems, which means that
they have two host servers for managing Netezza operations. The host server
(often called host within the publication) is a Linux server that runs the Netezza
software and utilities.
Distributed Replicated Block Device (DRBD) is a block device driver that mirrors
the content of block devices (hard disks, partitions, and logical volumes) between
the hosts. Netezza uses the DRBD replication only on the /nz and /export/home
partitions. As new data is written to the /nz partition and the /export/home
partition on the primary host, the DRBD software automatically makes the same
changes to the /nz and /export/home partition of the standby host.
For details about DRBD and its terms and operations, see the documentation at
http://www.drbd.org.
Note: The /nzdata and /shrres file systems on the MSA500 are deprecated.
v In some customer environments that used the previous cluster manager solution,
it was possible to have only the active host running while the secondary was
powered off. If problems occurred on the active host, the Netezza administrator
on-site would power off the active host and power on the standby. In the new
Linux-HA DRBD solution, both HA hosts must be operational at all times.
DRBD ensures that the data saved on both hosts is synchronized, and when
Heartbeat detects problems on the active host, the software automatically fails
over to the standby with no manual intervention.
Related concepts:
“Logging and messages” on page 4-12
Linux-HA administration
When you start an IBM Netezza HA system, Heartbeat automatically starts on both
hosts. It can take a few minutes for Heartbeat to start all the members of the nps
resource group. You can use the crm_mon command from either host to observe the
status, as described in “Cluster and resource group status” on page 4-5.
CAUTION:
Do not modify the file unless directed to in Netezza documentation or by
Netezza Support.
CAUTION:
Never manually edit the CIB file. You must use cibadmin (or crm_resource) to
modify the Heartbeat configuration. Wrapper scripts like heartbeat_admin.sh
update the file safely.
Note: It is possible to get into a situation where Heartbeat does not start properly
because of a manual CIB modification. The CIB cannot be safely modified if
Heartbeat is not started (that is, cibadmin cannot run). In this situation, you can
run /nzlocal/scripts/heartbeat_config.sh to reset the CIB and /etc/ha.d/ha.cf
to factory-default status. After you do this, it is necessary to run
/nzlocal/scripts/heartbeat_admin.sh --enable-nps to complete the CIB
configuration.
However, when host 1 is the active host, certain system-level operations such as
S-Blade restarts and system reboots often complete more quickly than when host
2/HA2 is the active host. An S-Blade restart can take one to two minutes longer to
complete when host 2 is the active host. Certain tasks such as manufacturing and
system configuration scripts can require host 1 to be the active host, and they
display an error if run on host 2 as the active host. The documentation for these
commands indicates whether they require host 1 to be the active host, or if special
steps are required when host 2 is the active host.
You can change the settings by editing the values in ha.cf on both hosts and
restarting Heartbeat, but use care when you are editing the file.
The following table lists the common commands. These commands are listed here
for reference.
Table 4-2. Cluster management scripts
Type                                          Scripts
Initial installation scripts                  heartbeat_config.sh sets up Heartbeat for the first time
                                              heartbeat_admin.sh --enable-nps adds Netezza services to cluster control after initial installation
Host name change                              heartbeat_admin.sh --change-hostname
Fabric IP change                              heartbeat_admin.sh --change-fabric-ip
Wall IP change                                heartbeat_admin.sh --change-wall-ip
Manual migrate (relocate)                     heartbeat_admin.sh --migrate
Linux-HA status and troubleshooting commands  crm_mon monitors cluster status
                                              crm_verify sanity checks the configuration and prints status
The following is a list of other Linux-HA commands available. This list is also
provided as a reference, but do not use any of these commands unless directed to
by Netezza documentation or by Netezza Support.
The command output displays a message about how it was started, and then
displays the host name where the nps resource group is running. The host that
runs the nps resource group is the active host.
You can obtain more information about the state of the cluster and which host is
active by using the crm_mon command. See the sample output that is shown in
“Cluster and resource group status.”
If the nps resource group is unable to start, or if it has been manually stopped
(such as by crm_resource -r nps -p target_role -v stopped), neither host is
considered the active host and the crm_resource -r nps -W command does not
return a host name.
Sample output follows. This command refreshes its display every 5 seconds, but
you can specify a different refresh rate (for example, -i10 is a 10-second refresh
rate). Press Control-C to exit the command.
The host that is running the nps resource group is the active host. Every member
of the nps resource group starts on the same host. The sample output shows that
they are all running on nzhost1.
The crm_mon output also shows the name of the Current Designated Coordinator
(DC). The DC host is not an indication of the active host. The DC is an
automatically assigned role that Linux-HA uses to identify a node that acts as a
coordinator when the cluster is in a healthy state. This is a Linux-HA
implementation detail and does not affect Netezza. Each host recognizes and
recovers from failure, regardless of which one is the DC. For more information
about the DC and Linux-HA implementation details, see http://www.linux-
ha.org/DesignatedCoordinator.
The fence routes for internal Heartbeat use are not part of the nps resource group.
If these services are started, it means that failovers are possible:
fencing_route_to_ha1 (stonith:apcmaster): Started nzhost2
fencing_route_to_ha2 (stonith:apcmaster): Started nzhost1
The order of the members of the group matters; group members are started
sequentially from first to last. They are stopped sequentially in reverse order, from
last to first. Heartbeat does not attempt to start the next group member until the
previous member starts successfully. If any member of the resource group is unable
to start (returns an error or times out), Heartbeat performs a failover to the
standby node.
Failover criteria
During a failover or resource migration, the nps resource group is stopped on the
active host and started on the standby host. The standby host then becomes the
active host.
Note: If any of these resource group members experiences a failure, Heartbeat first
tries to restart or repair the process locally. The failover is triggered only if that
local restart or repair attempt fails.
Note: In the previous Netezza Cluster Manager solution, HA1 is the name of the
primary node, and HA2 the secondary node. In Linux-HA/DRBD, either host can
be primary; thus, these procedures refer to one host as the active host and to the
other as the standby host.
To relocate the nps resource group from the active host to the standby host:
[root@nzhost1 ~]# /nzlocal/scripts/heartbeat_admin.sh --migrate
Testing DRBD communication channel...Done.
Checking DRBD state...Done.
The command blocks until the nps resource group stops completely. To monitor
the status, use the crm_mon -i5 command. You can run the command on either
host, although on the active host you must run it from a different terminal
window.
In general, you should not have to stop Heartbeat unless the IBM Netezza HA
system requires hardware or software maintenance or troubleshooting. During
these times, it is important that you control Heartbeat to ensure that it does not
interfere with your work by taking STONITH actions to regain control of the hosts.
The recommended practice is to shut down Heartbeat completely for service.
To shut down the nps resource group and Heartbeat, complete the following steps:
Procedure
1. While logged in to either host as root, display the name of the active node:
[root@nzhost1 ~]# crm_resource -r nps -W
resource nps is running on: nzhost1
2. As root, stop Heartbeat on the standby node (nzhost2 in this example):
[root@nzhost2 ~]# service heartbeat stop
3. As root, stop Heartbeat on the active node:
[root@nzhost1 ~]# service heartbeat stop
4. As root, make sure that there are no open nz sessions or any open files in the
shared directories /nz, /export/home, or both. For details, see “Checking for
user sessions and activity” on page 4-18.
[root@nzhost1 ~]# lsof /nz /export/home
5. Run the following script in /nzlocal/scripts to make the IBM Netezza system
ready for non-clustered operations. The command prompts you for a
confirmation to continue, shown as Enter in the output.
[root@nzhost1 ~]# /nzlocal/scripts/nz.non-heartbeat.sh
---------------------------------------------------------------
Thu Jan 7 15:13:27 EST 2010
File systems and eth2 on this host are okay. Going on.
File systems and eth2 on other host are okay. Going on.
This script will configure Host 1 or 2 to own the shared disks and
own the fabric.
Running nz_dnsmasq: [ OK ]
nz_dnsmasq started.
To reinstate the cluster from a maintenance mode, complete the following steps:
Procedure
1. Stop the IBM Netezza software by using the nzstop command.
2. Make sure that Heartbeat is not running on either node. Use the service
heartbeat stop command to stop the Heartbeat on either host if it is running.
3. Make sure that there are no nz user login sessions, and make sure that no users
are in the /nz or /export/home directories. Otherwise, the nz.heartbeat.sh
command is not able to unmount the DRBD partitions. For details, see
“Checking for user sessions and activity” on page 4-18.
4. Run the following script in /nzlocal/scripts to make the Netezza system
ready for clustered operations. The command prompts you for a confirmation
to continue, shown as Enter in the output.
[root@nzhost1 ~]# /nzlocal/scripts/nz.heartbeat.sh
---------------------------------------------------------------
Thu Jan 7 15:14:32 EST 2010
You can configure the Cluster Manager to send event notifications when a failover
is caused by any of the following events:
v Node shutdown
v Node restart
v Node fencing actions (STONITH actions)
Procedure
1. Log in to the active host as the root user.
2. Using a text editor, edit the /nzlocal/maillist file as follows. Add the lines
that are shown in bold.
#
#Email notification list for the cluster manager problems
#
#Enter email addresses of mail recipients under the TO entry, one to a line
#
#Enter email address of from email address (if a non-default is desired)
DRBD administration
DRBD provides replicated storage of the data in managed partitions (that is, /nz
and /export/home). When a write occurs to one of these locations, the write action
occurs at both the local node and the peer standby node. Both perform the same
write to keep the data in synchronization. The peer responds to the active node
when finished, and if the local write operation is also successfully finished, the
active node reports the write as complete.
The DRBD software can be started, stopped, and monitored by using the
/sbin/service drbd start/stop/status command (as root):
While you can use the status command as needed, only stop and start the DRBD
processes during routine maintenance procedures or when directed by IBM
Netezza Support. Do not stop the DRBD processes on an active, properly working
Netezza HA host to avoid the risk of split-brain.
Related tasks:
“Detecting split-brain” on page 4-14
Sample output of the commands follows. These examples assume that you are
running the commands on the primary (active) IBM Netezza host. If you run them
from the standby host, the output shows the secondary status first, then the
primary.
[root@nzhost1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by root@nps22094, 2009-06-09
16:25:53
m:res cs st ds p mounted fstype
0:r1 Connected Primary/Secondary UpToDate/UpToDate C /export/home ext3
1:r0 Connected Primary/Secondary UpToDate/UpToDate C /nz ext3
[root@nzhost1 ~]# cat /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by root@nps22094, 2009-06-09
16:25:53
0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
ns:15068 nr:1032 dw:16100 dr:3529 al:22 bm:37 lo:0 pe:0 ua:0 ap:0 oos:0
1: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
ns:66084648 nr:130552 dw:66215200 dr:3052965 al:23975 bm:650 lo:0 pe:0 ua:0 ap:0 oos:0
In the sample output, the DRBD states are one of the following values:
Primary/Secondary
The "healthy" state for DRBD. One device is Primary and one is Secondary.
Secondary/Secondary
DRBD is in a suspended or waiting mode. This usually occurs at boot time
or when the nps resource group is stopped.
Primary/Unknown
One node is available and healthy, the other node is either down or the
cable is not connected.
Secondary/Unknown
This is a rare case where one node is in standby, the other is either down
or the cable is not connected, and DRBD cannot declare a node as the
primary/active node. If the other host also shows this status, the problem
is most likely in the connection between the hosts. Contact Netezza
Support for assistance in troubleshooting this case.
The DRBD status when the current node is active and the standby node is down:
m:res cs st ds p mounted fstype
0:r1 WFConnection Primary/Unknown UpToDate/DUnknown C /export/home ext3
1:r0 WFConnection Primary/Unknown UpToDate/DUnknown C /nz ext3
Detecting split-brain
About this task
Split-brain is an error state that occurs when the images of data on each IBM
Netezza host are different. It typically occurs when synchronization is disabled and
users change data independently on each Netezza host. As a result, the two
Netezza host images are different, and it becomes difficult to resolve what the
latest, correct image should be.
Important: Split-brain does not occur if clustering is enabled. The fencing controls
prevent users from changing the replicated data on the standby node. Allow DRBD
management to be controlled by Heartbeat to avoid the split-brain problems.
Procedure
1. Look for Split in /var/log/messages, usually on the host that you are trying to
make the primary/active host. Let DRBD detect this condition.
2. Because split-brain results from running both images as primary Netezza hosts
without synchronization, check the Netezza logs on both hosts. For example,
check the pg.log files on both hosts to see when/if updates occur. If there is an
overlap in times, both images have different information.
3. Identify which host image, if either, is the correct image. In some cases, neither
host image might be fully correct. You must choose the image that is the more
correct. The host that has the image which you decide is correct is the
“survivor”, and the other host is the “victim”.
4. Perform the following procedure:
a. Log in to the victim host as root and run these commands:
drbdadm secondary resource
drbdadm disconnect resource
drbdadm -- --discard-my-data connect resource
Note: The connect command might display an error that instructs you to
run drbdadm disconnect first.
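For example, assuming that the victim host is nzhost2 and that the affected resource is r0 (the /nz resource in the sample service drbd status output shown earlier; substitute the resource name that is reported on your system):
[root@nzhost2 ~]# drbdadm secondary r0
[root@nzhost2 ~]# drbdadm disconnect r0
[root@nzhost2 ~]# drbdadm -- --discard-my-data connect r0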
5. Check the status of the fix by using drbdadm primary resource and the service
drbd status command. Make sure that you run drbdadm secondary resource
before you start Heartbeat.
Related concepts:
“DRBD administration” on page 4-12
IP address requirements
The following table is an example block of the eight IP addresses that are
recommended for a customer to reserve for an HA system:
Table 4-3. HA IP addresses
Entity Sample IP address
HA1 172.16.103.209
HA1 Host Management 172.16.103.210
Floating IP 172.16.103.212
In the IP addressing scheme, there are two host IPs, two host management IPs, and
the floating IP, which is HA1 + 3.
You must run this command twice. Then, try to stop Heartbeat again by using
service heartbeat stop. This process might not stop all of the resources that
Heartbeat manages, such as /nz mount, drbd devices, nzbootpd, and other
resources.
You can specify one or more V characters. The more Vs that you specify, the more
verbose the output. Specify at least four or five Vs and increase the number as
needed. You can specify up to 12 Vs, but that large a number is not recommended.
For example, if the fencing route to ha1 is listed as failed on host1, use the
crm_resource -r fencing_route_to_ha1 -C -H host1 command.
Output from crm_mon does not show the nps resource group
If the log messages indicate that the nps resource group cannot run anywhere, the
cause is that Heartbeat tried to run the resource group on both HA1 and HA2, but
it failed in both cases. Search in /var/log/messages on each host to find this first
failure. Search from the bottom of the log for the message cannot run anywhere
and then scan upward in the log to find the service failures. You must fix the
problems that caused a service to fail to start before you can successfully start the
cluster.
After you fix the failure case, you must restart Heartbeat following the instructions
in “Transitioning from maintenance to clustering mode” on page 4-10.
Do not modify or remove the user or groups because those changes will impact
Heartbeat and disrupt HA operations on the Netezza system.
Related concepts:
“Initial system setup and information” on page 1-1
Open nz user sessions and nz user activity can cause the procedures that stop
Heartbeat or that return the system to clustering to fail. Use the nzsession
command to see whether there are active database sessions in progress. For example:
[nz@nzhost1 ~]$ nzsession -u admin -pw password
ID Type User Start Time PID Database State Priority
Name Client IP Client PID Command
----- ---- ----- ----------------------- ----- -------- ------
------------- --------- ---------- ------------------------
16748 sql ADMIN 14-Jan-10, 08:56:56 EST 4500 CUST active normal
127.0.0.1 4499 create table test_2
16753 sql ADMIN 14-Jan-10, 09:12:36 EST 7748 INV active normal
127.0.0.1 7747 create table test_s
16948 sql ADMIN 14-Jan-10, 10:14:32 EST 21098 SYSTEM active normal
127.0.0.1 21097 SELECT session_id, clien
The sample output shows three sessions: the last entry is the session that is created
to generate the results for the nzsession command. The first two entries are user
activity. Wait for those sessions to complete or stop them before you use the
nz.heartbeat.sh or nz.non-heartbeat.sh commands.
To check for connections to the /export/home and /nz directory, complete the
following steps:
Procedure
1. As the nz user on the active host, stop the IBM Netezza software:
[nz@nzhost1 ~]$ /nz/kit/bin/nzstop
2. Log out of the nz account and return to the root account; then use the lsof
command to list any open files that are in /nz or /export/home.
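For example, the same lsof check that is used in the Heartbeat shutdown procedure earlier in this chapter (the output, if any, lists the open files):
[root@nzhost1 ~]# lsof /nz /export/home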
Results
This example shows several open files in the /export/home directory. If necessary,
you can close open files by issuing a command such as kill and supplying the
process ID (PID) shown in the second column from the left. Use caution with the
kill command; if you are not familiar with Linux system commands, contact
Support or your Linux system administrator for assistance.
The Netezza appliance uses SNMP events (described in Chapter 8, “Event rules,”
on page 8-1) and status indicators to send notifications of any hardware failures.
Most hardware components are redundant; thus, a failure typically means that the
remaining hardware components assume the work of the component that failed.
The system might or might not be operating in a degraded state, depending on the
component that failed.
CAUTION:
Never run the system in a degraded state for a long time. It is imperative to
replace a failed component in a timely manner so that the system returns to an
optimal topology and best performance.
Netezza Support and Field Service work with you to replace failed components to
ensure that the system returns to full service as quickly as possible. Most of the
system components require Field Service support to replace. Components such as
disks can be replaced by customer administrators.
The following figure shows some sample output of the nzhw show command:
Legend:
1 Hardware type
2 Hardware ID
3 Hardware role
4 Hardware state
5 Security
For an IBM Netezza High Capacity Appliance C1000 system, the output of the
nzhw show command includes information about the storage groups.
Related reference:
“The nzhw command” on page A-28
Use the nzhw command to manage the hardware of the IBM Netezza system.
Hardware types
Each hardware component of the IBM Netezza system has a type that identifies the
hardware component.
The following table describes the hardware types. You see these types when you
run the nzhw command or display hardware by using the NzAdmin or IBM
Netezza Performance Portal UIs.
Table 5-2. Hardware description types
Description Comments
Rack A hardware rack for the Netezza system
SPA Snippet processing array (SPA)
SPU Snippet processing unit (SPU)
Disk enclosure A disk enclosure chassis, which contains the disk devices
Disk A storage disk, contains the user databases and tables
Fan A thermal cooling device for the system
Blower A fan pack used within the S-Blade chassis for thermal cooling
Power supply A power supply for an enclosure (SPU chassis or disk)
MM A management device for the associated unit (SPU chassis, disk
enclosure). These devices include the AMM and ESM components, or a
RAID controller for an intelligent storage enclosure in a Netezza C1000
system.
Store group A group of three disk enclosures within a Netezza C1000 system
managed by redundant hardware RAID controllers
Ethernet switch Ethernet switch (for internal network traffic on the system)
Host A high availability (HA) host on the Netezza appliance
SAS Controller A SAS controller within the Netezza HA hosts
Hardware IDs
Each hardware component has a unique hardware identifier (ID) that is in the
form of an integer, such as 1000, 1001, or 1014. You can use the hardware ID to
manage a specific hardware component, or to uniquely identify a component in
command output or other informational displays.
Hardware location
IBM Netezza uses two formats to describe the position of a hardware component
within a rack.
v The logical location is a string in a dot format that describes the position of a
hardware component within the Netezza rack. For example, the nzhw output that
is shown in Figure 5-1 on page 5-3 shows the logical location for components; a
Disk component description follows:
Disk 1609 spa1.diskEncl1.disk13 Active Ok Enabled
In this example, the disk is located in SPA 1, disk enclosure 1, disk position 13.
Similarly, the location for a disk on a Netezza C1000 system includes the
storage group:
Disk 1029 spa1.storeGrp1.diskEncl2.disk5 Active Ok
v The physical location is a text string that describes the location of a component.
You can display the physical location of a component by using the nzhw locate
command. For example, to display the physical location of disk ID 1011:
[nz@nzhost ~]$ nzhw locate -id 1011
Turned locator LED ’ON’ for Disk: Logical
Name:’spa1.diskEncl4.disk1’ Physical Location:’1st Rack, 4th
DiskEnclosure, Disk in Row 1/Column 1’
As shown in the command output, the nzhw locate command also lights the
locator LED for components such as SPUs, disks, and disk enclosures. For
hardware components that do not have LEDs, the command displays the
physical location string.
The following figure shows an IBM PureData System for Analytics N200x-010
system with a closer view of the storage arrays and SPU chassis components and
locations.
A Each IBM PureData System for Analytics N200x rack is one array of disk
enclosures. There are 12 enclosures in a full rack configuration, and IBM
PureData System for Analytics N200x-005 half racks have 6 enclosures.
Each disk enclosure has 24 disks, numbered 1 to 24 from left to right on
the front of the rack.
B SPU1 occupies slots 1 and 2. SPU3 occupies slots 3 and 4, up to SPU13,
which occupies slots 13 and 14.
C The disk enclosures
D Host 1, host 2, and a KVM
E SPU chassis
The following figure shows an IBM Netezza 1000-12 system or an IBM PureData
System for Analytics N1001-010 with a closer view of the storage arrays and SPU
chassis components and locations.
A Each disk array has four disk enclosures. Each enclosure has 12 disks,
numbered as in the chart shown in the figure.
B SPU1 occupies slots 1 and 2. SPU3 occupies slots 3 and 4, up to SPU11,
which occupies slots 11 and 12.
C Disk array 1 with four enclosures.
D Disk array 2 with four enclosures.
E Host 1, host 2, and a KVM
F SPU chassis 1
G SPU chassis 2
For detailed information about the locations of various components in the front
and back of the system racks, see the Site Preparation and Specifications guide for
your model type.
The following figure shows an IBM PureData System for Analytics N3001-001
system with host and disk numbering.
A The host marked in the figure is HA1. It is always placed in the rack
directly above HA2.
B The first disk in the host occupies the slot labeled as 0, the second one
occupies slot 1, and, following this pattern, the last disk resides in slot 23.
Sample output of the nzhw locate command on this system looks like the
following:
[nz@v10-12-h1 ~]$ nzhw locate -id 1011
Hardware roles
Each hardware component of the IBM Netezza system has a hardware role, which
represents how the hardware is being used. The following table describes the
hardware roles. You see these roles when you run the nzhw command or display
hardware status by using the NzAdmin or IBM Netezza Performance Portal UIs.
Table 5-3. Hardware roles
Role Description (Comments)
None The None role indicates that the hardware is initialized, but it has yet to be discovered by the Netezza system. This process usually occurs during system startup before any of the SPUs send their discovery information. (All active SPUs must be discovered before the system can make the transition from the Discovery state to the Initializing state.)
Active The hardware component is an active system participant. Failing over this device can impact the Netezza system. (Normal system state.)
Hardware states
The state of a hardware component represents the power status of the hardware.
Each hardware component has a state.
You see these states when you run the nzhw command or display hardware status
by using the NzAdmin or IBM Netezza Performance Portal UIs.
Table 5-4. Hardware states
State Description (Comments)
None The None state indicates that the hardware is initialized, but it has yet to be discovered by the IBM Netezza system. This process usually occurs during system startup before any of the SPUs have sent their discovery information. (All active SPUs must be discovered before the system can make the transition from the Discovery state to the Initializing state. If any active SPUs are still in the booting state, there can be an issue with the hardware startup.)
Ok The Netezza system has received the discovery information for this device, and it is working properly. (Normal state.)
Down The device is turned off.
Invalid
Online The system is running normally. It can service requests.
Missing The System Manager detects a new device in a slot that was previously occupied but not deleted. (This typically occurs when a disk or SPU has been removed and replaced with a spare without deleting the old device. The old device is considered absent because the System Manager cannot find it within the system.)
Unreachable The System Manager cannot communicate with a previously discovered device. (The device may have been failed or physically removed from the system.)
Critical The management module detects a critical hardware problem, and the problem component amber service light might be illuminated. (Contact Netezza Support to obtain help with identifying and troubleshooting the cause of the critical alarm.)
Warning The system manager has detected a condition that requires investigation. For example, a host disk may have reported a predictive failure error (PFE), which indicates that the disk is reporting internal errors. (Contact Netezza Support to troubleshoot the warning condition and to determine whether a proactive replacement is needed.)
Checking Firmware / Updating Firmware The system manager is checking or updating the firmware of a disk before it can be brought online as a spare. (These are normal states for new replacement disks that are being checked and updated before they are added to service.)
Unsupported The hardware component is not a supported model for the appliance. (Contact Netezza Support because the replacement part is not supported on the appliance.)
The System Manager also monitors the management modules (MMs) in the
system, which have a status view of all the blades in the system. As a result, you
might see messages similar to the following in the sysmgr.log file:
Disks
A disk is a physical drive on which data resides. In a Netezza system, host servers
have several disks that hold the Netezza software, host operating system, database
metadata, and sometimes small user files. The Netezza system also has many more
disks that hold the user databases and tables. Each disk has a unique hardware ID
to identify it.
For the IBM PureData System for Analytics N200x appliances, 24 disks reside in
each disk enclosure, and full rack models have 12 enclosures per rack for a total of
288 disks per rack.
For IBM Netezza 1000 or IBM PureData System for Analytics N1001 systems, 48
disks reside in one storage array; a full-rack system has two storage arrays for a
total of 96 disks.
For IBM PureData System for Analytics N3001-001 appliances, all disks are located
on the two hosts. Of the 24 disks on each host, 16 are used for storing data slices.
Data slices
A data slice is a logical representation of the data that is saved on a disk. The data
slice contains “pieces” of each user database and table. When users create tables
and load their data, they distribute the data for the table across the data slices in
the system by using a distribution key. An optimal distribution is one where each
data slice has approximately the same amount of each user table as any other. The
Netezza system distributes the user data to all of the data slices in the system by
using a hashing algorithm.
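For example, a table can name its distribution key when it is created; the following statement is a minimal sketch in which the table and column names are illustrative:
CREATE TABLE sales (order_id INTEGER, cust_id INTEGER, amount NUMERIC(10,2))
DISTRIBUTE ON (cust_id);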
Data partitions
Each SPU in an IBM Netezza system "owns" a set of data partitions where the user
data is stored. For the IBM Netezza 100, IBM Netezza 1000, and IBM PureData
System for Analytics N1001 systems, each SPU owns eight data partitions which
are numbered from 0 to 7. For IBM PureData System for Analytics N200x systems,
each SPU typically owns 40 data partitions which are numbered 0 through 39.
For SPU ID 1003, its first data partition (0) points to data slice ID 9, which is stored
on disk 1070. Each data partition points to a data slice. As an example, assume that
disk 1014 fails and its contents are regenerated to a spare disk ID 1024. In this
situation, the SPU 1003’s data partition 7, which previously pointed to data slice 16
on disk 1014, is updated to point to data slice 16 on the new disk 1024 (not
shown).
If a SPU fails, the system moves all its data slices to the remaining active SPUs for
management. The system moves them in pairs (the pair of disks that contain the
primary and mirror data slices of each other). In this situation, some SPUs that
normally had 8 partitions will now own 10 data partitions. You can use the nzds
command to review the data slices on the system and the SPUs that manage them.
The intelligent storage controller contains two redundant RAID controllers that
manage the disks and associated hardware within a storage group. The RAID
controllers are caching devices, which improves the performance of the read and
write operations to the disks. The caches are mirrored between the two RAID
controllers for redundancy; each controller has a flash backup device and a battery
to protect the cache against power loss.
The RAID controllers operate independently of the Netezza software and hosts.
For example, if you stop the Netezza software (such as for an upgrade or other
maintenance tasks), the RAID controllers continue to run and manage the disks
within their storage group. It is common to see the activity LEDS on the storage
groups operating even when the Netezza system is stopped. If a disk fails, the
Chapter 5. Manage the Netezza hardware 5-13
RAID controller initiates the recovery and regeneration process; the regeneration
continues to run even when the Netezza software is stopped. If you use the nzhw
command to activate, fail, or otherwise manage disks manually, the RAID
controllers ensure that the action is allowed at that time; in some cases, commands
return an error when the requested operation, such as a disk failover, is not
allowed.
The RAID controller caches are disabled when any of the following conditions
occur:
v Battery failure
v Cache backup device failure
v Peer RAID controller failure (that is, a loss of the mirrored cache)
When the cache is disabled, the storage group (and the Netezza system)
experiences a performance degradation until the condition is resolved and the
cache is enabled again.
The following figure shows an illustration of the SPU/storage mapping. Each SPU
in a Netezza C1000 system owns nine user data slices by default. Each data slice is
supported by a three-disk RAID 5 storage array. The RAID 5 array can support a
single disk failure within the three-disk array. (More than one disk failure within
the three-disk array results in the loss of the data slice.) Seven disks within the
storage group in a RAID 5 array are used to hold important system information
such as the nzlocal, swap, and log partitions.
A SPU
B Data slice 1
C Data slice 9
D nzlocal, swap, and log partitions
If a SPU fails, the system manager distributes the user data partitions and the
nzlocal and log partitions to the other active SPUs in the same SPU chassis.
Each disk partition is used to store one copy of a data slice. Disks are divided into
groups of four with two disks from each host in such a group. In each group, there
are 16 disk partitions (four on every disk) that are used to store data slices with
four copies of every data slice.
Each of the data slices always uses disk partition 1, 2, 3, 4 from the disks in the
group.
Each host runs one virtual SPU. The data slice is owned by the virtual SPU that
runs on the host where the disk with the first disk partition of that SPU is
physically located.
Data mirroring for these disk partitions is handled at the software level by the
virtual SPU, which maintains the four copies of each data slice.
Remote disks are accessed through iSCSI over the network that connects the two
hosts. In addition, there are four spare disks (two per host) that are used as targets
for the regeneration of failed disks.
One-host mode
When one of the hosts is not available, is manually failed over, or has its SPU
manually failed by using nzhw, the system switches into one-host mode.
In this mode, only one virtual SPU is working and only two disks from each disk
group are used.
Each data slice is now stored on two disk partitions instead of four, and two data
slices must read data from the same disk.
For example, the default disk topology for IBM Netezza 100/1000 or IBM PureData
System for Analytics N1001 systems configures each S-Blade with eight disks that
are evenly distributed across the disk enclosures of its SPA, as shown in the
following figure. If disks failover and regenerate to spares, it is possible to have an
unbalanced topology where the disks are not evenly distributed among the
odd-numbered and even-numbered enclosures. This causes some of the SAS (also
called HBA) paths, which are shown as the dark lines that connect the blade
chassis to the disk enclosures, to carry more traffic than the others.
The System Manager can detect and respond to disk topology issues. For example,
if an S-Blade has more disks in the odd-numbered enclosures of its array, the
System Manager reports the problem as an overloaded SAS bus. You can use the
nzhw rebalance command to reconfigure the topology so that half of the disks are
in the odd-numbered enclosures and half in the even-numbered. The rebalance
process requires the system to transition to the “pausing now” state for the
topology update.
When the Netezza system restarts, the restart process checks for topology issues
such as overloaded SAS buses or SPAs that have S-Blades with uneven shares of
data slices. If the system detects a spare S-Blade for example, it will reconfigure the
data slice topology to distribute the workload equally among the S-Blades.
Related reference:
“Hardware path down” on page 8-20
“Rebalance data slices” on page 5-29
For example, the following command shows two failed disks on the system:
[nz@nzhost ~]$ nzhw show -issues
Description HW ID Location Role State Security
----------- ----- ---------------------- -------- ----------- --------
Disk 1498 spa1.diskEncl11.disk21 Failed Ok Disabled
Disk 1526 spa1.diskEncl9.disk4 Failed Ok Disabled
The disks must be replaced to ensure that the system has spares and an optimal
topology. You can also use the NzAdmin and IBM Netezza Performance Portal
interfaces to obtain visibility to hardware issues and failures.
Manage hosts
In general, there are few management tasks that relate to the IBM Netezza hosts. In
most cases, the tasks are for the optimal operation of the host. For example:
v Do not change or customize the kernel or operating system files unless directed
to do so by Netezza Support or Netezza customer documentation. Changes to
the kernel or operating system files can impact the performance of the host.
v Do not install third-party software on the Netezza host without first testing the
impact on a development or test Netezza system. While management agents or
other applications might be of interest, it is important to test and verify that the
application does not impact the performance or operation of the Netezza system.
v During Netezza software upgrades, host and kernel software revisions are
verified to ensure that the host software is operating with the latest required
levels. The upgrade processes might display messages that inform you to update
the host software to obtain the latest performance and security features.
v On Netezza HA systems, Netezza uses DRBD replication only on the /nz and
/export/home partitions. As new data is written to the Netezza /nz partition and
the /export/home partition on the primary Netezza system, the DRBD software
automatically makes the same changes to the /nz and /export/home partition of
the standby Netezza system.
If the active host fails, the Netezza HA software typically fails over to the standby
host to run the Netezza database and system. Netezza Support works with you to
schedule field service to repair the failed host.
For the N3001-001 appliance, this process is similar. If the active host is
unreachable, the NPS services automatically fail over to the second host. It may
take 15 minutes for NPS to start discovering its SPUs. Next, the discovery process
waits up to 15 minutes for both SPUs to report their status. After that time, if only
the local SPU reports status, the system transitions into one-host mode. If the
second host becomes unreachable for more than 15 minutes, the active host
transitions into one-host mode.
Model N3001-001
For the N3001-001 appliance, both hosts are by default used for running the virtual
SPUs. Resources of both hosts, such as CPU or memory, are in use and none of the
hosts is marked as spare in nzhw.
You can switch to the one-host mode in which the resources of only one host are in
use. To do this, run the following command:
nzhw failover -id XXXX
where XXXX is the hwid of the host that you do not want to use. It is only
possible to fail over a host that is a standby in the cluster.
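For example, if nzhw show reports that the standby host has hardware ID 1002 (an illustrative value; check the actual ID on your system), enter:
nzhw failover -id 1002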
When the system runs in one-host mode, the role of the other host and its virtual
SPU is Failed and disks located on that host that are normally used to store data
(disks 9 - 24) have the role Inactive. The data slices remain mirrored but only with
two disks. In the two-host mode, each data slice is backed up by four disks.
To switch back from one-host mode to two-host mode, run the following
command:
nzhw activate -id XXXX
where XXXX is the hwid of the failed host. This operation activates the host, its
SPU, and all of its disks. Then, a rebalance is requested.
Note: Switching from one-host mode to two-host mode may take a significant
amount of time, for example a few hours. It depends on the amount of data stored
in the system.
Manage SPUs
Snippet Processing Units (SPUs) or S-Blades are hardware components that serve
as the query processing engines of the IBM Netezza appliance.
In model N3001-001, the SPUs are emulated using host resources, such as CPU and
memory. The SPUs are not physical components of the system and there is no
FPGA.
You can use the nzhw command to activate, deactivate, failover, locate, and reset a
SPU, or delete SPU information from the system catalog.
To indicate which SPU you want to control, you can refer to the SPU by using its
hardware ID. You can use the nzhw command to display the IDs, and obtain the
information from management UIs such as NzAdmin or IBM Netezza Performance
Portal.
To obtain the status of one or more SPUs, you can use the nzhw command with the
show options.
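For example, to list only the SPUs, you can use the -type option in the same way as the disk listing shown later in this chapter (output omitted):
[nz@nzhost ~]$ nzhw show -type spu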
Activate a SPU
You can use the nzhw command to activate a SPU that is inactive or failed.
To activate a SPU:
nzhw activate -u admin -pw password -host nzhost -id 1004
For model N3001-001, if you have enabled the one-host mode by failing over a
SPU, you must activate that SPU to switch back to two-host mode. You must then
request a rebalance operation by using nzds. In such a case, switching to two-host
mode can take a significant amount of time, for example a few hours, depending
on the amount of data stored in the system.
You can use the nzhw command to make a spare SPU unavailable to the system. If
the specified SPU is active, the command displays an error.
For model N3001-001, when a SPU is failed over, the system switches into one-host
mode in which the resources of only one host are used. You can only fail over a
SPU of the standby host. In order to fail over a SPU that is running on the active
host, you must first migrate the cluster to the other host. To switch back to
two-host mode, activate the failed SPU.
Locate a SPU
You can use the nzhw command to turn on or off a SPU LED and display the
physical location of the SPU. The default is on.
For model N3001-001, the SPUs are emulated and the output of the locate
command is the following:
Logical Name:’spa1.spu2’ Physical Location:’lower host, virtual SPU’
Reset a SPU
You can use the nzhw command to power cycle a SPU (a hard reset).
You can use the nzhw command to remove a failed, inactive, or incompatible SPU
from the system catalog.
If a SPU hardware component fails and must be replaced, Netezza Support works
with you to schedule service to replace the SPU.
Related reference:
“The nzhw command” on page A-28
Use the nzhw command to manage the hardware of the IBM Netezza system.
Manage disks
The disks on the system store the user databases and tables that are managed and
queried by the IBM Netezza appliance. You can use the nzhw command to activate,
failover, and locate a disk, or delete disk information from the system catalog.
To protect against data loss, never remove a disk from an enclosure or remove a
RAID controller or ESM card from its enclosure unless directed to do so by
Netezza Support or when you are using the hardware replacement procedure
documentation. If you remove an Active or Spare disk drive, you could cause the
system to restart or to transition to the down state. Data loss and system issues can
occur if you remove these components when it is not safe to do so.
Netezza C1000 systems have RAID controllers to manage the disks and hardware
in the storage groups. You cannot deactivate a disk on a C1000 system, and the
commands to activate, fail, or delete a disk return an error if the storage group
cannot support the action at that time.
To indicate which disk you want to control, you can refer to the disk by using its
hardware ID. You can use the nzhw command to display the IDs, and obtain the
information from management UIs such as NzAdmin or IBM Netezza Performance
Portal.
For model IBM PureData System for Analytics N3001-001, the physical disks are
represented as two nzhw objects:
v A disk in an emulated enclosure in SPA (like for other N3001 systems).
v A host disk.
The majority of disk management operations should be performed on the storage
array disks, not on the host disk. The only operation that must be run on the host
disk is activation. This operation is required to assign a newly inserted physical
disk to the virtual SPU.
To obtain the status of one or more disks, you can use the nzhw command with the
show options.
To show the status of all the disks (the sample output is abbreviated for the
documentation), enter:
[nz@nzhost ~]$ nzhw show -type disk
Description HW ID Location Role State Security
----------- ----- ---------------------- ------ ----------- --------
Disk 1076 spa1.diskEncl4.disk2 Active Ok Enabled
Disk 1077 spa1.diskEncl4.disk3 Active Ok Enabled
Disk 1078 spa1.diskEncl4.disk4 Active Ok Enabled
Disk 1079 spa1.diskEncl4.disk5 Active Ok Enabled
Activate a disk
You can use the nzhw command to make an inactive, failed, or mismatched disk
available to the system as a spare.
In some cases, the system might display a message that it cannot activate the disk
yet because the SPU has not finished an existing activation request. Disk activation
usually occurs quickly, unless there are several activations that are taking place at
the same time. In this case, later activations wait until they are processed in turn.
Note: For a Netezza C1000 system, you cannot activate a disk that is being used
by the RAID controller for a regeneration or other task. If the disk cannot be
activated, an error message similar to the following appears:
Error: Can not update role of Disk 1004 to Spare - The disk is still part of a non healthy array. Please wait for the array to become healthy before activating.
You can use the nzhw command to initiate a failover. You cannot fail over a disk
until the system is at least in the initialized state.
Note: For a Netezza C1000 system, the RAID controller still considers a failed disk
to be part of the array until the regeneration is complete. After the regen
completes, the failed disk is logically removed from the array.
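For example, to fail over the disk with hardware ID 1079 from the earlier listing (an illustrative ID; the command follows the same -id pattern as the other nzhw operations in this chapter):
nzhw failover -u admin -pw password -host nzhost -id 1079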
Locate a disk
You can use the nzhw command to turn on or off the LED on a disk in the storage
arrays. (This command does not work for disks in the hosts.) The default is on.
The command also displays the physical location of the disk.
For model N3001-001, you can locate both disks and host disks, including the host
disks managed by the hardware RAID controller.
You can use the nzhw command to remove a disk that is failed, inactive,
mismatched, or incompatible from the system catalog. For Netezza C1000 systems,
do not delete the hardware ID of a failed disk until after you have successfully
replaced it using the instructions in the Replacement Procedures: IBM Netezza C1000
Systems.
If a disk hardware component fails and must be replaced, Netezza Support works
with you to schedule service to replace the disk.
Related reference:
“The nzhw command” on page A-28
Use the nzhw command to manage the hardware of the IBM Netezza system.
You can use the nzhw, nzds, and nzspupart commands to manage data slices. To
indicate which data slice you want to control, you can refer to the data slice by
using its data slice ID. You can use the nzds command to display the IDs, and
obtain the information from management UIs such as NzAdmin or IBM Netezza
Performance Portal.
Related reference:
“The nzds command” on page A-10
Use the nzds command to manage and obtain information about the data slices in
the system.
You can also use the NzAdmin and IBM Netezza Performance Portal interfaces to
obtain visibility to hardware issues and failures.
To show the status of all the data slices (the sample output is abbreviated for the
documentation), enter:
[nz@nzhost ~]$ nzds show
Data Slice Status SPU Partition Size (GiB) % Used Supporting Disks
---------- ------- ---- --------- ---------- ------ ----------------
1 Repairing 1017 2 356 58.54 1021,1029
2 Repairing 1017 3 356 58.54 1021,1029
3 Healthy 1017 5 356 58.53 1022,1030
4 Healthy 1017 4 356 58.53 1022,1030
5 Healthy 1017 0 356 58.53 1023,1031
6 Healthy 1017 1 356 58.53 1023,1031
7 Healthy 1017 7 356 58.53 1024,1032
8 Healthy 1017 6 356 58.53 1024,1032
Data slices 1 and 2 in the sample output are regenerating because of a disk failure.
The command output can differ on different models of appliances.
Note: For a Netezza C1000 system, three disks hold the user data for a data slice;
the fourth disk is the regen target for the failed drive. The RAID controller still
considers a failed disk to be part of the array until the regeneration is complete.
After the regen completes, the failed disk is logically removed from the array.
To show detailed information about the data slices that are being regenerated, you
can use the -regenstatus and -detail options, for example:
[nz@nzhost ~]$ nzds show -regenstatus -detail
Data Slice Status SPU Partition Size (GiB) % Used Supporting Disks
Start Time % Done
---------- --------- ---- --------- ---------- ------ -------------------
------------------- ------
2 Repairing 1255 1 3725 0.00 1012,1028,1031,1056
2011-07-01 10:41:44 23
The status of a data slice shows the health of the data slice. The following table
describes the status values for a data slice. You see these states when you run the
nzds command or display data slices by using the NzAdmin or IBM Netezza
Performance Portal UIs.
Table 5-5. Data slice status
State Description
Healthy The data slice is operating normally and the data is protected in a
redundant configuration; that is, the data is fully mirrored.
Repairing The data slice is in the process of being regenerated to a spare disk
because of a disk failure.
Degraded The data slice is not protected in a redundant configuration. Another
disk failure could result in loss of a data slice, and the degraded
condition impacts system performance.
Note: In the IBM PureData System for Analytics N1001 or IBM Netezza 1000 and
later models, the system does not change states during a regeneration; that is, the
system remains online while the regeneration is in progress. There is no
synchronization state change and no interruption to active jobs during this process.
If the regeneration process fails or stops for any reason, the system transitions to
the Discovering state to establish the topology of the system.
You can use the nzspupart regen command or the NzAdmin interface to
regenerate a disk. If you do not specify any options, the system manager checks
for any degraded partitions and if found, starts a regeneration if there is a spare
disk in the system.
For IBM PureData System for Analytics N2001 and later systems, each disk
contains partitions for the user data and the log and swap partitions. When the
system regenerates a disk to a spare, the system copies all of the partitions to the
spare. If you issue the nzspupart regen command manually, specify:
v The hardware ID of the SPU that has the degraded partitions
v One of the partition IDs
v The hardware ID for the spare disk
The regeneration affects all partitions on that disk. For example:
[nz@nzhost ~]$ nzspupart regen -spu 1099 -part 1 -dest 1066
You can then issue the nzspupart show -regenstatus or the nzds show
-regenstatus command to display the status of the regeneration. For example:
[nz@nzhost ~]$ nzspupart show -regenstatus
SPU Partition Id Partition Type Status Size (GiB) % Used Supporting Disks % Done Repairing Disks Starttime
---- ------------ -------------- --------- ---------- ------ ------------------- ------- --------------- ---------
1099 0 Data Repairing 356 0.00 1065,1066 0.00 1066 0
1099 1 Data Repairing 356 0.00 1065,1066 0.00 1066 0
1099 100 NzLocal Repairing 1920989772 0.00 1065,1066,1076,1087 0.00 1066 0
1099 101 Swap Repairing 32 0.00 1065,1066,1076,1087 0.00 1066 0
1099 110 Log Repairing 1 3.31 1065,1066 0.00 1066 0
For systems earlier than the N200x models, you must specify the data slice IDs
and the spare disk ID. For example, to regenerate data slice IDs 11 and 17, which
are affected by the failing disk, onto spare disk ID 1024, enter:
nzds regen -u admin -pw password -ds "11,17" -dest 1024
If you want to control the regeneration source and target destinations, you can
specify source SPU and partition IDs, and the target or destination disk ID. The
spare disk must reside in the same SPA as the disk that you are regenerating. You
can obtain the IDs for the source partition by issuing the nzspupart show -details
command.
To regenerate a degraded partition and specify the information for the source and
destination, enter the following command:
nzspupart regen -spu 1035 -part 7 -dest 1024
Note: Regeneration can take several hours to complete. If the system is idle and
has no other activity except the regeneration, or if the user data partitions are not
very full, the regeneration takes less time to complete. You can review the status of
the regeneration by issuing the nzspupart show -regenStatus command. During
the regeneration, user query performance can be impacted while the system is
busy processing the regeneration. Likewise, user query activity can increase the
time that is required for the regeneration.
If the system manager is unable to remove the failed disk from the RAID array, or
if it cannot add the spare disk to the RAID array, a regeneration setup failure can
occur. If a regeneration failure occurs, or if a spare disk is not available for the
regeneration, the system continues processing jobs. The data slices that lose their
mirror continue to operate in an unmirrored or degraded state; however, you
should replace your spare disks as soon as possible and ensure that all data slices
are mirrored. If an unmirrored disk fails, the system is brought to a down state.
After the failed SPU is replaced or reactivated, you must rebalance the data slices
to return to optimal performance. The rebalance process checks each SPU in the
SPA; if a SPU has more than two data slices more than another SPU, the System
Manager redistributes the data slices to equalize the workload and return the SPA
to an optimal performance topology. (The System Manager changes the system to
the discovering state to perform the rebalance.)
You can use the nzhw command to rebalance the data slice topology. The system
also runs the rebalance check each time that the system is restarted, or after a SPU
failover or a disk regeneration setup failure.
You can also use the nzhw rebalance -check option to have the system check the
topology and only report whether a rebalance is needed. The command displays
the message Rebalance is needed or There is nothing to rebalance. If a
rebalance is needed, you can run the nzhw rebalance command to perform the
rebalance, or you could wait until the next time the Netezza software is stopped
and restarted to rebalance the system.
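For example, the following check reports the topology status without changing it; the message shown is one of the two possible results described above:
[nz@nzhost ~]$ nzhw rebalance -check
There is nothing to rebalance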
For an N3001-001 system, the rebalance operation is used to switch the system
back to two-host mode after the failed SPU is activated. The system automatically
requests a rebalance when the activate operation for a host requests the transition
to two-host mode.
Related concepts:
“System resource balance recovery” on page 5-17
To display the current storage topology, use the nzds show -topology command:
Switch 1
port[1] 5 disks: [ 3:encl1Slot01 5:encl1Slot03 9:encl1Slot05 13:encl1Slot07
17:encl1Slot12 ] -> encl1
This sample output shows a normal topology for an IBM Netezza 1000-3 system.
The command output is complex and is typically used by Netezza Support to
troubleshoot problems. If there are any issues to investigate in the topology, the
command displays a WARNING section at the bottom, for example:
WARNING: 2 issues detected
spu0101 hba [0] port [2] has 3 disks
SPA 1 SAS switch [sassw01a] port [3] has 7 disks
These warnings indicate problems in the path topology where storage components
are overloaded. These problems can affect query performance and also system
availability if other path failures occur. Contact Support to troubleshoot these
warnings.
To display detailed information about path failure problems, you can use the
following command:
[nz@nzhost ~]$ nzpush -a mpath -issues
spu0109: Encl: 4 Slot: 4 DM: dm-5 HWID: 1093 SN: number PathCnt: 1
PrefPath: yes
spu0107: Encl: 2 Slot: 8 DM: dm-1 HWID: 1055 SN: number PathCnt: 1
PrefPath: yes
spu0111: Encl: 1 Slot: 10 DM: dm-0 HWID: 1036 SN: number PathCnt: 1
PrefPath: no
Note: It is possible to see errors that are reported in the nzpush command output
even if the nzds show -topology command does not report any warnings. In these cases,
the errors are still problems in the topology, but they do not affect the performance
and availability of the current topology. Be sure to report any path failures to
ensure that problems are diagnosed and resolved by Support for optimal system
performance.
Related reference:
“Hardware path down” on page 8-20
If a SPU fails, the system state changes to the pausing -now state (which stops
active jobs), and then transitions to the discovering state to identify the active SPUs
in the SPA. The system also rebalances the data slices to the active SPUs.
The following table describes the system states and the way IBM Netezza handles
transactions during failover.
Table 5-6. System states and transactions
System state Active transactions New transactions
Offline(ing) Now Aborts all transactions. Returns an error.
Offline(ing) Waits for the transaction to finish. Returns an error.
Pause(ing) Now Aborts only those transactions that cannot be restarted. Queues the transaction.
Pause(ing) Waits for the transaction to finish. Queues the transaction.
The following examples provide specific instances of how the system handles
failovers that happen before, during, or after data is returned.
v If the pause -now occurs immediately after a BEGIN command completes, before
data is returned, the transaction is restarted when the system returns to an
online state.
v If a statement such as the following completes and then the system transitions,
the transaction can restart because data has not been modified and the reboot
does not interrupt a transaction.
BEGIN;
SELECT * FROM emp;
Note: There is a retry count for each transaction. If the system transitions to
pause -now more times than the number of retries that are allowed, the transaction
is stopped.
After the system restarts these transactions, the system state returns to online. For
more information, see the IBM Netezza Data Loading Guide.
Power procedures
This section describes how to power on an IBM Netezza system and how to
power off the system. Typically, you power off the system only if you are moving
it physically within the data center, or for maintenance or emergency conditions
within the data center.
The instructions to power on or off an IBM Netezza 100 system are available in the
Site Preparation and Specifications: IBM Netezza 100 Systems.
Note: To power cycle a Netezza system, you must have physical access to the
system to press power switches and to connect or disconnect cables. Netezza
systems have keyboard/video/mouse (KVM) units that you can use to enter
administrative commands on the hosts.
Figure 5-11. IBM Netezza 1000-6 and N1001-005 and larger PDUs and circuit breakers
A OFF setting
B ON setting
C PDU circuit breakers. 3 rows of 3 breaker pins.
v To close the circuit breakers (power up the PDUs), press in each of the nine
breaker pins until they engage. Be sure to close the nine pins on both main
PDUs in each rack of the system.
v To open the circuit breakers (power off the PDUs), pull out each of the nine
breaker pins on the left and the right PDU in the rack. If it becomes difficult to
pull out the breaker pins by using your fingers, you can use a tool such as a
pair of needle-nose pliers to gently pull out the pins.
On the IBM Netezza 1000-3 and IBM PureData System for Analytics N1001-002
models, the main input power distribution units (PDUs) are on the right and left
sides of the rack, as shown in the following figure.
At the top of each PDU is a pair of breaker rocker switches. The labels on the
switches are upside down when you view the PDUs.
v To close the circuit breakers (power up the PDUs), you push the On toggle of
the rocker switch in. Make sure that you push in all four rocker switches, two
on each PDU.
v To open the circuit breakers (power off the PDUs), you must use a tool such as a
small flathead screwdriver; insert the tool into the hole that is labeled OFF and
gently press until the rocker toggle pops out. Make sure that you open all four
of the rocker toggles, two on each PDU.
To power on an IBM Netezza 1000 or IBM PureData System for Analytics N1001
system, complete the following steps:
Procedure
1. Make sure that the two main power cables are connected to the data center
drops; there are two power cables for each rack of the system.
2. Do one of the following steps depending on which system model you have:
To power off an IBM Netezza 1000 or IBM PureData System for Analytics N1001
system, complete the following steps:
Procedure
1. Log in to the host server (ha1) as root.
To power on an IBM PureData System for Analytics N200x system, complete the
following steps:
Procedure
1. Switch on the power to the two PDUs located in the rear of the cabinet at the
bottom. Make sure that you switch on both power controls. Repeat this step
for each rack of a multi-rack system.
2. Press the power button on Host 1. The power button is on the host in the front
of the cabinet. Host 1 is the upper host in the rack, or the host located in rack
one of older multi-rack systems. A series of messages appears as the host
system boots.
3. Wait at least 30 seconds after powering up Host 1, then press the power button
on Host 2. (Host 2 is the lower host in the rack, or the host located in rack two
of older multi-rack systems.) The delay ensures that Host 1 completes its
start-up operations first, and thus is the primary host for the system.
4. Log in as root to Host 1 and run the crm_mon command to monitor the status of
the HA services and cluster operations:
[root@nzhost1 ~]# crm_mon -i5
The output of the command refreshes at the specified interval rate of 5 seconds
(-i5).
5. Review the output and watch for all of the resource groups to have a Started
status. This usually takes about 2 to 3 minutes; then proceed to the next step.
Sample output follows:
[root@nzhost1 ~]# crm_mon -i5
============
Last updated: Tue Jun 2 11:46:43 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
drbd_exphome_device (heartbeat:drbddisk): Started nzhost1
drbd_nz_device (heartbeat:drbddisk): Started nzhost1
exphome_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
nz_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
fabric_ip (heartbeat::ocf:IPaddr): Started nzhost1
wall_ip (heartbeat::ocf:IPaddr): Started nzhost1
nz_dnsmasq (lsb:nz_dnsmasq): Started nzhost1
nzinit (lsb:nzinit): Started nzhost1
fencing_route_to_ha1 (stonith:apcmaster): Started nzhost2
fencing_route_to_ha2 (stonith:apcmaster): Started nzhost1
6. Press Ctrl-C to exit the crm_mon command and return to the command prompt.
7. Log in to the nz account.
[root@nzhost1 ~]# su - nz
8. Verify that the system is online using the following command:
[nz@nzhost1 ~]$ nzstate
System state is ’Online’.
9. If your system runs the Call Home support feature, enable it.
[nz@nzhost1 ~]$ nzOpenPmr --on
To power off an IBM PureData System for Analytics N200x system, complete the
following steps:
Procedure
1. Log in to the host server (ha1) as root.
To power on an IBM Netezza C1000 system, complete the following steps:
Procedure
1. Make sure that the two main power cables are connected to the data center
drops; there are two power cables for each rack of the system. For a North
American power configuration, there are four power cables for the first two
racks of a Netezza C1000 (or two cables for a European Union power
configuration).
2. Switch the breakers to ON on both the left and right PDUs. (Repeat these
steps for each rack of the system.)
3. Press the power button on both host servers and wait for the servers to start.
This process can take a few minutes.
4. Log in to the host server (ha1) as root.
5. Change to the nz user account and run the following command to stop the
Netezza server: nzstop
6. Wait for the Netezza system to stop.
7. Log out of the nz account to return to the root account, then type the
following command to power on the storage groups:
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -on all -j all
8. Wait five minutes and then type the following command to power on all the
S-blade chassis:
[root@nzhost1 ~]# /nzlocal/scripts/rpc/spapwr.sh -on all
9. Run the crm_mon -i5 command to monitor the status of the HA services and
cluster operations. Review the output and watch for all of the resource groups
to have a Started status. This usually takes about 2 to 3 minutes; then proceed
to the next step.
[root@nzhost1 ~]# crm_mon -i5
============
Last updated: Tue Jun 2 11:46:43 2009
Current DC: nzhost1 (key)
2 Nodes configured.
3 Resources configured.
============
Node: nzhost1 (key): online
Node: nzhost2 (key): online
Resource Group: nps
drbd_exphome_device (heartbeat:drbddisk): Started nzhost1
drbd_nz_device (heartbeat:drbddisk): Started nzhost1
exphome_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
nz_filesystem (heartbeat::ocf:Filesystem): Started nzhost1
fabric_ip (heartbeat::ocf:IPaddr): Started nzhost1
wall_ip (heartbeat::ocf:IPaddr): Started nzhost1
nz_dnsmasq (lsb:nz_dnsmasq): Started nzhost1
nzinit (lsb:nzinit): Started nzhost1
fencing_route_to_ha1 (stonith:apcmaster): Started nzhost2
fencing_route_to_ha2 (stonith:apcmaster): Started nzhost1
10. Press Ctrl-C to exit the crm_mon command and return to the command prompt.
11. Log into the nz account.
[root@nzhost1 ~]# su - nz
12. Verify that the system is online using the following command:
To power off an IBM Netezza High Capacity Appliance C1000, complete the
following steps:
CAUTION:
Unless the system shutdown is an emergency situation, do not power down a
Netezza C1000 system when there are any amber (Needs Attention) LEDs
illuminated in the storage groups. It is highly recommended that you resolve the
problems that are causing the Needs Attention LEDs before you power off a
system to ensure that the power-up procedures are not impacted by the
unresolved conditions within the groups.
Procedure
1. Identify the active host in the cluster, which is the host where the nps resource
group is running:
[root@nzhost1 ~]# crm_resource -r nps -W
Procedure
1. Press the power button on Host 1. The power button is located on the host in
the front of the cabinet. Host 1 is the upper host in a single-rack system, or the
host located in rack one of a multi-rack system. A series of messages appears as
the host system boots.
2. Wait for at least 30 seconds after powering up Host 1. Then press the power
button of Host 2. The delay ensures that Host 1 completes its startup
operations first and therefore becomes the primary host for the system. Host 2
is the lower host in a single-rack system, or the host located in rack two of a
multi-rack system.
3. Log in to Host 1 as root and run the crm_mon command to monitor the status of
HA services and cluster operations:
[root@nzhost1 ~]# crm_mon -i5
============
Last updated: Fri Aug 29 02:19:25 2014
Current DC: hostname-1 (3389b15b-5fee-435d-8726-a95120f437dd)
2 Nodes configured.
2 Resources configured.
============
5. Press Ctrl + C to exit the crm_mon command and return to the command
prompt.
6. Log in to the nz account:
[root@nzhost1 ~]# su - nz
Procedure
1. Log in to Host 1 (ha1) as root.
3. On the active host, in this example nzhost1, run the following commands to
stop the Netezza server:
[nz@nzhost1 ~]$ su - nz
[nz@nzhost1 ~]$ nzstop
[nz@nzhost1 ~]$ exit
5. Log in as root to the standby host, in this example nzhost2. Run the following
command to shut down the host:
[root@nzhost2 ~]# shutdown -h now
The system displays a series of messages as it stops processes and other system
activity. When it finishes, a message is displayed that indicates that it is now
safe to power down the server.
6. Press the power button on Host 2 to power down that Netezza host. The
button is located in the front of the cabinet.
7. On Host 1, run the following command to shut down the Linux operating
system:
[root@nzhost1 ~]# shutdown -h now
The system displays a series of messages as it stops processes and other system
activity. When it finishes, a message is displayed that indicates that it is now
safe to power down the server.
8. Press the power button on Host 1 to power down that Netezza host. The
button is located in the front of the cabinet.
Self-encrypting drives (SEDs) encrypt data as it is written to the disk. Each disk
has a disk encryption key (DEK) that is set at the factory and stored on the disk.
The disk uses the DEK to encrypt data as it writes, and then to decrypt the data as
it is read from disk. The operation of the disk, and its encryption and decryption,
is transparent to the users who are reading and writing data. This default
encryption and decryption mode is referred to as secure erase mode. In secure erase
mode, you do not need an authentication key or password to decrypt and read
data. SEDs offer improved capabilities for an easy and speedy secure erase for
situations when disks must be repurposed or returned for support or warranty
reasons.
For the optimal security of the data stored on the disks, SEDs have a mode
referred to as auto-lock mode. In auto-lock mode, the disk uses an authentication
encryption key (AEK) to protect its DEK. When a disk is powered off, it is
automatically locked. When the disk is powered on, the SED requires a valid AEK
to read the DEK and unlock the disk to proceed with read and write operations. If
the SED does not receive a valid authentication key, the data on the disk cannot be
read. The auto-lock mode helps to protect the data when disks are accidentally or
intentionally removed from the system.
In many environments, the secure erase mode may be sufficient for normal
operations and provides you with easy access to commands that can quickly and
securely erase the contents of the disk before a maintenance or repurposing task.
For environments where protection against data theft is paramount, the auto-lock
mode adds an extra layer of access protection for the data stored on your disks.
By default, the SEDs on the IBM PureData System for Analytics N3001 appliances
operate in secure erase mode. The IBM installation team can configure the disks to
run in auto-lock mode by creating a keystore and defining an authentication key
for your host and storage disks when the system is installed in your data center. If
you choose not to auto-lock the disks during system installation, you can lock
them later. Contact IBM Support to enable the auto-lock mode. The process to
auto-lock the disks requires a short NPS service downtime window.
The NPS system requires an AEK for the host drives and an AEK for the drives in
the storage arrays that are managed by the SPUs. You have two options for storing
the keys. The AEKs can be stored in a password protected keystore repository on
the NPS host, or if you have implemented an IBM Security Key Lifecycle Manager
(ISKLM) server, you can store the AEKs in your ISKLM server for use with the
appliance. The commands to create the keys are the same whether the keys are
stored locally or on an ISKLM server.
For locally stored keys, the key repository is stored in the /nz/var/keystore
directory on the NPS host. The repository is locked and protected.
For ISKLM configurations, there is no local keystore on the NPS hosts. The ISKLM
support requires some additional configuration for your NPS hosts to become a
client of the ISKLM server. The configuration steps are described in the section
“IBM Security Key Lifecycle Manager configuration steps” on page 6-4.
You should use the nzkeybackup command to create a backup copy of the AEKs
after you change the keys. If the keystore on the NPS host or the ISKLM server is
lost, the disks cannot be read. Make sure that you carefully protect the keystore
backups for the appliance in a secure area, typically in a location that is not on the
NPS hosts.
Note: When auto-lock mode is enabled, and a disk is failed over either
automatically or manually using the nzhw failover -id <diskHwId> command, the
system automatically securely erases the disk contents. Contact IBM Support for
assistance with the process to securely erase one or more disks on the system. If a
disk is physically removed from the system before it is failed over, the system
detects the missing drive and fails over to an available spare disk, but the removed
disk is not securely erased because it is no longer in the system. In auto-lock
mode, the disk is locked when it is powered down, so the contents are not
readable.
Starting in NPS release 7.2.1, you can configure your IBM PureData System for
Analytics N3001 models to send the AEKs to an IBM Security Key Lifecycle
Manager (ISKLM) server in your environment. The NPS support requires ISKLM
version 2.5.0.5 or later.
In this configuration, the ISKLM server only stores and sends the AEKs that are
manually generated on the NPS host. The ISKLM server cannot be used to
automatically create and rotate the AEKs on a scheduled basis. You must have an
ISKLM server already set up and running in your environment, and you need
assistance from the ISKLM administrator to add the NPS host as a client of the ISKLM server.
Important: If you configure your N3001 system to use ISKLM as the key
repository, note that you cannot downgrade from NPS release 7.2.1 to an earlier 7.2
release unless you convert from ISKLM to a local keystore for your SEDs. The IBM
Netezza Software Upgrade Guide has instructions for disabling ISKLM support and
returning to a local keystore before downgrading.
Typically, after you configure SEDs to use auto-lock mode, you would never
change them back to the default secure erase mode. If for some reason you must
reconfigure the SEDs, it is possible to do so, but this process is very complex and
requires a lengthy service window and possible service charges. There is also a risk
of data loss especially if your backups for the system are stale or incomplete. Make
sure that reconfiguring your SEDs to secure erase mode is appropriate for your
environment.
CAUTION:
The process to reconfigure SEDs to secure erase mode from the auto-lock mode
is not a process that you can run on your own. You must work with IBM
Support to reset the system correctly.
There are two options for reconfiguring your host SEDs to secure erase mode:
v The first option is to have IBM Support replace your host drives with a set of
new drives that are custom-built with the correct releases of software for your
system. The host motherboards/planars must also replaced (or the host disks
securely erased) to clear the RAID controller NVRAM that holds the AEK.
Reconfiguring the host SEDs requires system downtime, charges for the
replacement disks and planars, and approximately a day of downtime to replace
the disks and restore your NPS host backups and metadata.
v The second option is to completely reinitialize your system to a factory default
level, then reload all your data from the most recent full backup. This option
could require a service window of several days for the reinitialization and
complete reload.
To change the storage array SEDs from auto-lock mode to standard secure erase
mode, there is an IBM Support process to disable the authentication key. This
process requires you to securely erase the storage drives and reload the full
database backups from your most recent NPS backup. If it is an option, such as for
a non-production test system, a full system reinitialization would also reset the
drives from auto-lock mode. You would then need to restore your NPS data from
your backups, or start creating new data from new load sources.
SED keystore
The keystore holds the AEKs for unlocking the host and SPU drives that are
configured to run in auto-lock mode.
Important: If you use the IBM Security Key Lifecycle Manager (ISKLM) to store
and retrieve the AEKs for your NPS appliance, you can lock the drives using a
local keystore and then migrate to ISKLM management of the keys, or you can
configure the system to use ISKLM to create the keys and lock the drives. See the
“IBM Security Key Lifecycle Manager configuration steps” section for the
instructions to configure ISKLM support. After you configure ISKLM, the keys are
sent to the ISKLM server for storage and are not stored locally on the system.
If you lose the keystore, either because the local keystore is corrupted or deleted,
or because connectivity to the ISKLM server is lost, you lose the ability to unlock
your SED drives when they power on. As a best practice, make sure that you have
a recent backup of the current keys. You use the nzkeybackup command to create a
compressed tar file backup of the current keystore. You should always back up the
keystore after any key changes. Make sure that you save the keystore backups in a
safe location away from the NPS appliance.
Note: The nzhostbackup command also captures the local keystore in the host backup, but
nzkeybackup is better because it does not require you to pause the NPS system and
stop query activity, and nzkeybackup -sklm can capture the keys that are stored in
an ISKLM server.
You can use the nzkeyrestore command to restore a keystore from a keystore
backup file.
The following list summarizes the steps needed for the ISKLM server setup. It is
important to work with your IBM Security Key Lifecycle Manager (ISKLM) system
administrator to configure the ISKLM server to communicate with the NPS
appliance.
After the ISKLM administrator has added the NPS appliance to the ISKLM server,
make sure that you have the following information:
v The CA certificate and the client certificate in .pem format from the ISKLM
server
v The device group name created on the ISKLM server
v The device serial number created on the ISKLM server
v The ISKLM IP address and KMIP port value
To configure the ISKLM information on the NPS appliance, the NPS administrator
must do the following steps:
1. Log in to the active NPS host as the root user.
2. Save a copy of the CA certificate and client certificate files (must be in .pem
format) in the /nz/data/security directory.
3. Log in to the active NPS host as the nz user.
4. Using any text editor, edit the /nz/data/config/system.cfg file (or create the
file if it does not exist).
5. Define the following settings in the system.cfg file:
startup.kmipDevGrpSrNum = Device_serial_number
startup.kmipDevGrp = Device_group_name
startup.kmipClientCert = /nz/data/security/client.pem
startup.kmipClientKey = /nz/data/security/privkey.pem
startup.kmipCaCert = /nz/data/security/ca.pem
startup.keyMgmtServer = tls://ISKLM_IP_ADDRESS:KMIP_PORT
startup.keyMgmtProtocol = local
The keyMgmtProtocol = local setting indicates that the system uses a locally
managed keystore and keys. Keep the local setting until you verify that the
connections to the ISKLM server are correctly configured and working. After
that verification, and after uploading the AEKs to the ISKLM server, you can
change the setting to use the ISKLM keystore.
6. Save the system.cfg file.
7. Log out of the nz account and return to the root account.
As root, use the nzkmip test command on the NPS host to test ISKLM
connectivity. This command requires you to specify a label and key (either directly
or in a file) to test the ISKLM server operations:
[root@nzhost ~]# /nz/kit/bin/adm/nzkmip test -label spuaek
-file /tmp/new_spukey.pem
Connecting to SKLM server at tls://1.2.3.4:5696
Success: Connection to SKLM store succeeded
After you confirm that the ISKLM connection is working, follow these steps to
prepare for switching over to the ISKLM server.
1. As root, run the following command to populate the keys from the local
keystore to the ISKLM keystore:
[root@nzhost ~]# /nz/kit/bin/adm/nzkmip populate
2. To confirm that the keys were populated correctly, query the _t_kmip_mapping
table:
SYSTEM.ADMIN(ADMIN)=> select * from _t_kmip_mapping;
DISKLABEL | UID
-------------+-----------------------------------------
spuaek | KEY-56e36030-3a9c-4313-8ce6-4c6d5d898211
spuaekOld | KEY-56e36030-3a9c-4313-8ce6-4c6d5d898312
hostkey1 | KEY-56e36030-3a9c-4313-8ce6-4c6d5d898432
hostkey1Old | KEY-56e36030-3a9c-4313-8ce6-4c6d5d898541
hostkey2 | KEY-56e36030-3a9c-4313-8ce6-4c6d5d898865
hostkey2Old | KEY-56e36030-3a9c-4313-8ce6-4c6d5d898901
(6 rows)
3. For each UUID listed in the table, run the following command to display the
value of the key:
[root@nzhost ~]# /nz/kit/bin/adm/nzkmip get
-uuid KEY-56e36030-3a9c-4313-8ce6-4c6d5d898211
Key Value : t7Nº×nq¦CÃ<"*"ºìýGse»¤;|%
4. Create a backup of the local keystore with nzkeybackup. As a best practice, save
the backup to a secure location away from the NPS host.
After you have completed and tested the ISKLM connection, and you have created
a local keystore backup file, follow these steps to switch to the ISKLM server:
1. Log in to the NPS host as the nz user.
2. Stop the system using the nzstop command.
3. Rename the local GSKit keystore files keydb.p12 and keydb.sth.
4. Log in as root and edit the /nz/data/config/system.cfg file.
5. Change the setting for the keyMgmtProtocol to kmipv1.1 to switch to the
ISKLM server support:
startup.keyMgmtProtocol = kmipv1.1
6. Save and close the system.cfg file.
7. Log out of the root account to return to the nz account.
8. Start the system using the nzstart command. After the system starts, AEKs that
you create with the nzkey command are stored in and retrieved from the
ISKLM server.
9. Remove the renamed GSKit keystore files keydb.p12 and keydb.sth.
If you need to change the NPS host to disable ISKLM support and return to a local
GSKit keystore for managing the keys, follow these steps:
1. Log in as root to the NPS host.
2. Dump the keys from the ISKLM server to a local GSKit keystore:
[root@nzhost ~]# /nz/kit/bin/adm/nzkey dump
DB creation successful
After you have dumped the AEKs from the ISKLM server, follow these steps to
switch to a local keystore for the AEKs:
1. Log in to the NPS host as the nz user.
2. Stop the system using the nzstop command.
3. Log in as root and edit the /nz/data/config/system.cfg file.
4. Change the setting for the keyMgmtProtocol to local to switch to the local
GSKit keystore support:
startup.keyMgmtProtocol = local
5. Save and close the system.cfg file.
6. Run the following command to verify that the keys were dumped correctly:
[root@nzhost ~]# /nz/kit/bin/adm/nzkey list
7. Log out of the root account to return to the nz account.
8. Start the system using the nzstart command.
9. After the system starts, use the nzsql command to connect to the SYSTEM
database and delete entries from the _t_kmip_mapping table because the
system is now using a local GSKit keystore.
SYSTEM.ADMIN(ADMIN)=> truncate table _t_kmip_mapping;
TRUNCATE TABLE
After the system starts, AEKs that you create with the nzkey command are stored
and retrieved from the local keystore.
You can create and apply an authentication key to auto-lock the host drives and
the drives in the storage arrays. An authentication key must be 32 bytes. The keys
are managed using the IBM GSKit software. No other key management software or
server is required.
CAUTION:
Always protect and back up the authentication keys that you create and apply to
the disks. If you lose the keys, the disks cannot be unlocked when they are
powered on. You will be unable to read data from the disks, and you could
prevent the NPS system from starting.
You can create a conforming key manually for the host and SPU AEKs, but as a best
practice, use the nzkey generate command to automatically create a
random, conformant AEK for the host or SPU drives and store it in your local
keystore or in the IBM Security Key Lifecycle Manager if you have configured that
support for your appliance.
Each of the hosts in the appliance uses an AEK to auto-lock the SEDs. The keys are
referred to as hostkey1 and hostkey2. The host RAID controllers have specific
requirements for the host authentication keys:
v The key value must be 32 bytes in length.
v The key is case-sensitive.
v The key must contain at least one number, one lowercase letter, one uppercase
letter, and one non-alphanumeric character (for example, < > @ +). You cannot
specify a blank space, single quotation character, double quotation character,
exclamation point, or equals sign in the key value.
v The key can use only the printable characters in the range ASCII 0x21 to 0x7E.
The SEDs in the storage arrays use the SPU AEK to auto-lock the drives. The
storage array SPU keys must meet the following requirements:
v The key value must be 32 bytes in length.
v The key can use characters in the range ASCII from 0x00 to 0xFF.
If you want to change the host or SPU key that is used to lock your SEDs, you can
create a key manually, or you can use the nzkey generate command to create a
conforming key. Run separate commands to create the host key and the SPU key.
Procedure
1. Log in to the active NPS host as the root user.
2. Use the following command to create a host key:
[root@nzhost1 nz]# /nz/kit/bin/adm/nzkey generate -hostkey
-file /export/home/nz/hostkey.txt
Host key written to file
3. Use the following command to create a SPU key:
[root@nzhost1 nz]# /nz/kit/bin/adm/nzkey generate -spukey
-file /export/home/nz/spukey.txt
SPU key written to file
Results
The command saves the key in the specified file in plaintext. You can then
specify the host or SPU key file as part of an nzkey change operation.
Important: The key files are in plain text and unencrypted. After you use the files
to change the key for the hosts or SPUs, make sure that you delete the generated
key files to protect the keys from being read by users who log in to the NPS
system.
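For example, a minimal sketch of a host key change that combines the -hostkey option with the -file and -backupdir options shown in the SPU key change example later in this section; the system must be stopped first, and the file and backup directory paths are placeholders:
[root@nzhost1 nz]# /nz/kit/bin/adm/nzkey change -hostkey
-file /export/home/nz/hostkey.txt -backupdir /tmp/backups/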
You can use the nzkey list command to display information about the keys that
are currently defined in the keystore without displaying the key text.
Procedure
1. Log in to the active NPS host as the root user.
2. Use the following command to list the key labels:
[root@nzhost1 nz]# /nz/kit/bin/adm/nzkey list
Results
The command shows the labels for the keys that are currently in the keystore. If
no AEKs have been set, the command displays the message No keys found in key
store. You can use the -hostkey or -spukey option to list only the AEK labels for
the hosts or SPUs.
You can use the nzkey check command to display information about the auto-lock
state for the SEDs on the hosts and SPUs.
Procedure
1. Log in to the active NPS host as the root user.
2. Use the following command to check the AEK status. You must specify the
-spukey or the -hostkey option.
[root@nzhost1 nz]# /nz/kit/bin/adm/nzkey check {-spukey | -hostkey}
The command output indicates whether the AEK feature is enabled or
disabled, and whether keys have been applied to auto-lock the SEDs in the hosts
and storage arrays. The command also provides information to alert you when
there may be issues with the drives that need further investigation and possible
troubleshooting from IBM Support.
You can use the nzkey list command to list the available key labels that are defined
in the keystore, and then use the nzkey extract command to write the key for a
label to a file. You can extract only one key per file. If the file already exists, the
command displays an error.
Procedure
1. Log in to the active NPS host as the root user.
2. Use the following command to extract the key for a specified label. For
example:
[root@nzhost1 nz]# /nz/kit/bin/adm/nzkey extract -label hostkey1
-file /nz/var/hostkey1.txt
Key written to file
Results
The command creates a file with the extracted AEK. This file can be helpful in
cases where you need the current key to reapply a key to SEDs for
troubleshooting, or if you want to preserve the key in a third-party key tracking
system. As a best practice, make sure that the output file is safe from unauthorized
access. Consider deleting the file or moving it to a secure location to protect the
key.
Before you begin, make sure that you have your new AEK for the hosts. You
should use the nzkey generate command to generate a new AEK for the host key.
To change the host AEK, the NPS system must be in the Stopped state. The new
AEK takes effect on both hosts when the nzkey command finishes running
successfully. The command creates a backup copy of the current keystore before it
changes the key. After the change is finished, you should create a backup of the
new keystore using the nzkeybackup command.
Procedure
1. Log in to the active host of the NPS system as the nz user.
2. Transition the system to the Stopped state, for example:
[nz@nzhost1 ~]$ nzstop
What to do next
You would typically use the nzkey resume command to resume a host AEK change
operation that was interrupted and did not complete. This command can also be
used to resume a host AEK create operation, but typically the IBM installers or
IBM support perform the tasks to create and enable the AEKs to auto-lock drives.
To resume the host AEK operation, you must have the backup file pathname for
the interrupted operation.
Procedure
1. Log in to the active NPS host as the root user.
2. Use the following command to resume a host AEK change operation. For
example:
[root@nzhost1 nz]# /nz/kit/bin/adm/nzkey resume
-backupDir /nz/var/hostbup_01
Results
The command resumes the host key operation. If the command displays an error,
contact IBM Support for assistance.
Before you begin, make sure that you have your new AEK for the SPU. You should
use the nzkey generate command to generate a new AEK for the SPU key.
If you are changing the SPU key for the storage array drives, the system must be in
the Paused or Offline state because the system manager must be running to
propagate the new key, but no queries or I/O activity should be active. The new
AEK is immediately communicated from the system manager to the SPUs. Note
that if you attempt to transition the system to the Online state, the state transition
waits until all the SPUs and disks are updated with the new AEK. The command
creates a backup copy of the current keystore before it changes the key. After the
change is finished, you should create a backup of the new keystore using the
nzkeybackup command.
Procedure
1. Log in to the active host of the NPS system as the nz user.
2. Transition the system to the Paused or Offline state, for example:
[nz@nzhost1 ~]$ nzsystem pause
Are you sure you want to pause the system (y|n)? [n] y
3. Log in as the root user:
[nz@nzhost1 ~]$ su - root
4. Use the nzkey change command to change the SPU key:
[root@nzhost-h1 ~] /nz/kit/bin/adm/nzkey change -spukey
-file /tmp/spukey_change -backupdir /tmp/backups/
# Keystore archive /tmp/backups/keydb_20140711054140.tar.gz written
==========================================================
AEK Summary
==========================================================
What to do next
The AekSecurityEvent monitors the SED drives and sends an email to the
configured event contacts when any of the following conditions occur:
v The system has transitioned to the Down state because of a SPU AEK operation
failure.
v A SPU AEK operation has occurred, such as successful completion of key create
or change for the SPU key.
v A labelError has been detected on a disk for the SPU key. A labelError typically
occurs when the new SPU key is not applied to a disk and the disk still uses the
old/former key to authenticate.
v A fatal error is detected on a disk for the SPU key. A fatal error occurs when
neither the current SPU key nor the previous SPU key can be used to
authenticate the drive.
v A key repair state is detected on a disk during a SPU key create or change. A
key repair state issue occurs when the key operation is deferred on a SED
because of a key fatal error on the drive's RAID partner disk.
v The system manager has started a key repair operation. This usually occurs just
before applying the key on the deferred disk after the regen on the disk has
finished.
To create and enable an event rule for the AekSecurityEvent, you use the nzevent
command to add an event rule as in the following example. Make sure that you
run the command on the active host.
[nz@nzhost1 ~]$ nzevent copy -useTemplate
-name AekSecurityEvent -newName SedAekEvent -eventType AekSecurityEvent
-on 1 -dst user@mycompany.com
This section also describes log files and where to find operational and error
messages for troubleshooting activities. Although the system is configured for
typical use in most customer environments, you can also tailor software operations
to meet the special needs of your environment and users by using configuration
settings.
The revision level typically includes a major version number, a release number, a
maintenance release number, and a fix pack number. Some releases also include a
patch designation such as P1 or P2.
When you enter the nzrev -rev command, Netezza returns the entire revision
number string, including all fields (such as variant and patch level, which in this
example are both zero).
nzrev -rev
7.1.0.0-P0-F1-Bld34879
From a client system, you can use the following command to display the revision
information:
nzsystem showRev -host host -u user -pw password
Related reference:
“The nzrev command” on page A-47
Use the nzrev command to display the IBM Netezza software revision level.
The following table describes the components of the Revision Stamp fields.
Table 7-1. Netezza software revision numbering
Version Release Maintenance Fixpack -Pn -Fn -Bldn
Numeric Numeric Numeric Numeric Alphanumeric Alphanumeric Alphanumeric
System states
The IBM Netezza system state is the current operational state of the appliance.
In most cases, the system is online and operating normally. There might be times
when you must stop the system for maintenance tasks or as part of a larger
procedure.
You can manage the Netezza system state by using the nzstate command. It can
display and wait for a specific state to occur.
Related reference:
“The nzstate command” on page A-58
Use the nzstate command to display the current system state or to wait for a
particular system state to occur.
The following table lists the common system states and how they are invoked and
exited.
Note: When you stop and start the Netezza system operations on a Netezza C1000
system, the storage groups continue to run and perform tasks such as media
checks and health checks for the disks in the array, as well as disk regenerations
for disks that fail. The RAID controllers are not affected by the Netezza system
state.
You can use the nzstart command to start system operation if the system is in the
stopped state. The nzstart command is a script that initiates a system start by
setting up the environment and invoking the startup server. The nzstart command
does not complete until the system is online. The nzstart command also verifies
the host configuration to ensure that the environment is configured correctly and
completely; it displays messages to direct you to files or settings that are missing
or misconfigured.
Restriction: You must run nzstart on the host and be logged on as the user nz.
You cannot run it remotely from Netezza client systems.
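For example, a typical start sequence, run as the nz user on the host, followed by a state check; the output line is illustrative:
[nz@nzhost ~]$ nzstart
[nz@nzhost ~]$ nzstate
System state is ’Online’.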
For IBM Netezza 1000 and IBM PureData System for Analytics N1001 systems, a
message is written to the sysmgr.log file if there are any storage path issues that
are detected when the system starts. The log displays a message similar to mpath
-issues detected: degraded disk path(s) or SPU communication error, which
helps to identify problems within storage arrays.
Related reference:
“The nzstart command” on page A-56
Use the nzstart command to start system operation after you stop the system. The
nzstart command is a script that initiates a system start by setting up the
environment and starting the startup server.
Restriction: You must run nzstop on the host and be logged on as the user nz.
You cannot run it remotely.
To stop the system or exit after waiting for 5 minutes (300 seconds), enter nzstop
-timeout 300.
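For example, run the command as the nz user on the host:
[nz@nzhost ~]$ nzstop -timeout 300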
Related reference:
“The nzstop command” on page A-63
Use the nzstop command to stop the IBM Netezza software operations. Stopping a
system stops all the IBM Netezza processes that were started with the nzstart
command.
Enter y to continue. The transition completes quickly on an idle system, but it can
take much longer if the system is busy processing active queries and transactions.
When the transition completes, the system enters the paused state, which you can
confirm with the nzstate command as follows:
[nz@nzhost ~]$ nzstate
System state is ’Paused’.
You can use the -now option to force a transition to the paused state, which causes
the system to abort any active queries and transactions. As a best practice, use the
nzsession show -activeTxn command to display a list of the current active
transactions before you force the system to terminate them.
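For example, a sketch of the recommended sequence, assuming the nzsystem pause command with the -now option:
[nz@nzhost ~]$ nzsession show -activeTxn
[nz@nzhost ~]$ nzsystem pause -now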
The command usually completes quickly; you can confirm that the system has
returned to the online state by using the following command:
[nz@nzhost ~]$ nzstate
System state is ’Online’.
Enter y to continue. The transition completes quickly on an idle system, but it can
take much longer if the system is busy processing active queries and transactions.
When the transition completes, the system enters the offline state, which you can
confirm with the nzstate command as follows:
[nz@nzhost ~]$ nzstate
System state is ’Offline’.
You can use the -now option to force a transition to the offline state, which causes
the system to abort any active queries and transactions. As a best practice, use the
nzsession show -activeTxn command to display a list of the current active
transactions before you force the system to terminate them.
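A similar sketch for forcing the offline transition, assuming an nzsystem offline command that accepts the same -now option:
[nz@nzhost ~]$ nzsession show -activeTxn
[nz@nzhost ~]$ nzsystem offline -now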
Related reference:
“System logs” on page 7-12
When you power up (or reset) the hardware, each SPU loads an image from its
flash memory and runs it. This image is then responsible for running diagnostic
tests on the SPU, registering the SPU with the host, and downloading runtime
images for the SPU CPU and the FPGA disk controller. The system downloads
these images from the host through TFTP.
The IBM Netezza system can take the following actions when an error occurs:
Display an error message
Presents an error message string to the users that describes the error. This
is the common system response whenever a user request is not fulfilled.
Try again
During intermittent or temporary failures, keep trying until the error
condition disappears. The retries are often needed when resources are
limited, congested, or locked.
Fail over
Switches to an alternate or spare component because an active component
has failed. Failover is a system-level recovery mechanism and can be
triggered by a system monitor or an error that is detected by software that
is trying to use the component.
Log the error
Adds an entry to a component log. A log entry contains a date and time, a
severity level, and an error/event description.
Send an event notification
Sends notification through email or by running a command. The decision
whether to send an event notification is based on a set of user-configurable
event rules.
Abort the program
Terminates the program because it cannot continue, either because of an
irreparably damaged internal state or because continuing would corrupt
user data. Software asserts that detect internal programming mistakes often
fall into this category because it is difficult to determine that it is safe to
continue.
Clean up resources
Frees or releases resources that are no longer needed. Software components
are responsible for their own resource cleanup. In many cases, resources
System logs
All major software components that run on the host have an associated log. Log
files have the following characteristics:
v Each log consists of a set of files that are stored in a component-specific
directory. For managers, there is one log per manager. For servers, there is one
log per session, and their log files have pid identifiers, date identifiers, or both
(<pid>.<yyyy-mm-dd>).
v Each file contains one day of entries, for a default maximum of seven days.
v Each file contains entries that have a timestamp (date and time), an entry
severity type, and a message.
The system rotates log files, that is, for all the major components there are the
current log files and the archived log files.
v For all IBM Netezza components (except postgres), the system creates a new log
file at midnight if there is constant activity for that component. If, however, you
load data on Monday and then do not load again until Friday, the system creates
a new log file dated the previous day from the new activity, in this case,
Thursday. Although the size of the log files is unlimited, every 30 days the
system removes all log files that were not accessed.
v For postgres logs, by default, the system checks the size of the log file daily and
rotates it to an archive file if it is greater than 1 GB in size. The system keeps 28
days (four weeks) of archived log files. (Netezza Support can help you to
customize these settings if needed.)
To view the logs, log on to the host as user nz. When you view an active log file,
use a file viewer command such as more, less, cat, or tail. If
you use a text editor such as emacs or vi, you could cause an interruption and
possible information loss to log files that are actively capturing log messages while
the system is running.
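For example, to follow new messages in the system manager log, assuming the default log location listed later in this section:
[nz@nzhost ~]$ tail -f /nz/kit/log/sysmgr/sysmgr.log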
Related concepts:
“Logging Netezza SQL information” on page 11-39
You can log information about all user or application activity on the server, and
you can log information that is generated by individual Windows clients.
Related tasks:
“Logging Netezza SQL information on the server” on page 11-39
Related reference:
“Overview of the Netezza system processing” on page 7-8
Log file
/nz/kit/log/bnrmgr/bnrmgr.log
Current backup and restore manager log
Sample messages
2012-12-12 18:12:05.645586 EST Info: NZ-00022: --- program ’bnrmgr’ (26082)
starting on host ’nzhost’ ... ---
2012-12-12 18:17:09.315244 EST Info: system is online - enabling backup and
restore sessions
Bootserver manager
The bootsvr.log file records the initiation of all SPUs on the system, usually when
the system is restarted by the nzstart command and also all stopping and
restarting of the bootsvr process.
Log files
/nz/kit/log/bootsvr/bootsvr.log
Current log
/nz/kit/log/bootsvr/bootsvr.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:12:07.399506 EST Info: NZ-00022: --- program ’bootsvr’ (26094)
starting on host ’nzhost’ ... ---
2012-12-12 18:15:25.242471 EST Info: Responded to boot request from device
[ip=10.0.14.28 SPA=1 Slot=1] Run Level = 3
Client manager
The clientmgr.log file records all connection requests to the database server and
also all stopping and starting of the clientmgr process.
Log files
/nz/kit/log/clientmgr/clientmgr.log
Current log
/nz/kit/log/clientmgr/clientmgr.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:12:05.874413 EST Info: NZ-00022: --- program ’clientmgr’ (26080)
starting on host ’nzhost’ ... ---
2012-12-12 18:12:05.874714 EST Info: Set timeout for receiving from the socket
300 sec.
2012-12-12 18:17:21.642075 EST Info: admin: login successful
Log files
/nz/kit/log/dbos/dbos.log
Current log
/nz/kit/log/dbos/dbos.YYYY-MM-DD.log
Archived log
Event manager
The eventmgr.log file records system events and the stopping and starting of the
eventmgr process.
Log files
/nz/kit/log/eventmgr/eventmgr.log
Current log
/nz/kit/log/eventmgr/eventmgr.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:12:05.926667 EST Info: NZ-00022: --- program ’eventmgr’ (26081)
starting on host ’nzhost’ ... ---
2012-12-12 18:13:25.064891 EST Info: received & processing event type =
hwNeedsAttention, event args = ’hwId=1037, hwType=host, location=upper host,
devSerial=06LTY66, eventSource=system, errString=Eth RX Errors exceeded threshold,
reasonCode=1052’ event source = ’System initiated event’
2012-12-12 18:16:45.987066 EST Info: received & processing event type =
sysStateChanged, event args = ’previousState=discovering, currentState=initializing,
eventSource=user’ event source = ’User initiated event’
Event type
The event that triggered the notification.
Event args
The argument that is being processed.
ErrString
The event message, which can include hardware identifications and other
details.
Log files
/nz/kit/log/fcommrtx/fcommrtx.log
Current log
/nz/kit/log/fcommrtx/fcommrtx.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:12:03.055247 EST Info: NZ-00022: --- program ’fcommrtx’ (25990) starting on host ’nzhost’ ... ---
2012-12-12 18:12:03.055481 EST Info: FComm : g_defenv_spu2port=0,6,1,7,2,8,3,9,4,10,5,11,6,0,7,0,8,1,9,2,10,3,11,4,12,5,13,0
2012-12-12 18:12:03.055497 EST Info: FComm : g_defenv_port2hostthread=0,1,2,3,4,5,6,7,8,9,10,11,12,13
Log files
/nz/kit/log/hostStatsGen/hostStatsGen.log
Current log
/nz/kit/log/hostStatsGen/hostStatsGen.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:12:04.969116 EST Info: NZ-00022: --- program ’hostStatsGen’ (26077)
starting on host ’nzhost’ ... ---
Load manager
The loadmgr.log file records details of load requests, and the stopping and starting
of the loadmgr process.
Log file
/nz/kit/log/loadmgr/loadmgr.log
Current log
/nz/kit/log/loadmgr/loadmgr.YYYY-MM-DD.log
Archived log
Sample messages
2004-05-13 14:45:07.454286 EDT Info: NZ-00022:
--- log file ’loadmgr’ (12225) starting on host ’nzhost’ ...
Postgres
The postgres.log file is the main database log file. It contains information about
database activities.
Sample messages
2012-12-31 04:02:10.229470 EST [19122] DEBUG: connection: host=1.2.3.4 user=MYUSR database=SYSTEM remotepid=6792 fetype=1
2012-12-31 04:02:10.229485 EST [19122] DEBUG: Session id is 325340
2012-12-31 04:02:10.231134 EST [19122] DEBUG: QUERY: set min_quotient_scale to
default
2012-12-31 04:02:10.231443 EST [19122] DEBUG: QUERY: set timezone = ’gmt’
2012-12-31 09:02:10.231683 gmt [19122] DEBUG: QUERY: select current_timestamp,
avg(sds_size*1.05)::integer as avg_ds_total, avg(sds_used/(1024*1024))::integer as
avg_ds_used from _v_spudevicestate
Session manager
The sessionmgr.log file records details about the starting and stopping of the
sessionmgr process, and any errors that are associated with this process.
Log files
/nz/kit/log/sessionmgr/sessionmgr.log
Current log
/nz/kit/log/sessionmgr/sessionmgr.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:11:50.868454 EST Info: NZ-00022: --- program ’sessionmgr’ (25843)
starting on host ’nzhost’ ... ---
Startup server
The startupsvr.log file records the start of the IBM Netezza processes and any
errors that are encountered with this process.
Log files
/nz/kit/log/startupsvr/startupsvr.log
Current log
/nz/kit/log/startupsvr/startupsvr.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:11:43.951689 EST Info: NZ-00022: --- program ’startupsvr’ (25173)
starting on host ’nzhost’ ... ---
2012-12-12 18:11:43.952733 EST Info: NZ-00307: starting the system, restart = no
2012-12-12 18:11:43.952778 EST Info: NZ-00313: running onStart: ’prepareForStart’
2012-12-12 18:11:43 EST: Rebooting SPUs via RICMP ...
Log files
/nz/kit/log/statsSvr/statsSvr.log
Current log
/nz/kit/log/statsSvr/statsSvr.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:12:05.794050 EST Info: NZ-00022: --- program ’statsSvr’ (26079)
starting on host ’nzhost’ ... ---
System Manager
The sysmgr log file records details of stopping and starting the sysmgr process,
and details of system initialization and system state status.
Log file
/nz/kit/log/sysmgr/sysmgr.log
Current log
/nz/kit/log/sysmgr/sysmgr.YYYY-MM-DD.log
Archived log
Sample messages
2012-12-12 18:12:05.578573 EST Info: NZ-00022: --- program ’sysmgr’ (26078) starting
on host ’nzhost’ ... ---
2012-12-12 18:12:05.579716 EST Info: Starting sysmgr with existing topology
2012-12-12 18:12:05.882697 EST Info: Number of chassis level switches for each
chassis in this system: 1
The file on the Linux host for this disk work area is $NZ_TMP_DIR/nzDbosSpill.
Within DBOS, there is a database that tracks segments of the file presently in use.
To avoid having a runaway query use up all the host computer disk space, there is
a limit on the DbosEvent database, and hence the size of the Linux file. This limit
is in the Netezza Registry file. The tag for the value is
startup.hostSwapSpaceLimit.
For example:
v To display all system registry settings, enter:
nzsystem showRegistry
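v To display only the host swap space limit described above, you can filter the
registry listing. This is a minimal sketch; the exact output format can vary by
release:
nzsystem showRegistry | grep hostSwapSpaceLimit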
Temporarily
To change a configuration setting temporarily, use the nzsystem set command.
A change made in this way remains effective only until the system is
restarted; at system startup, all configuration settings are read from the
system configuration file and loaded into the registry.
Permanently
To change a configuration setting permanently, edit the corresponding line
in the configuration file, system.cfg. Configuration settings are loaded
from this file to the registry during system startup.
The following tables describe the configuration settings that you can change
yourself, without involving your IBM Netezza support representative.
Table 7-6. Configuration settings for short query bias (SQB)
Setting Type Default Description
host.schedSQBEnabled bool true Whether SQB is enabled (true) or disabled (false).
host.schedSQBNominalSecs int 2 The threshold, in seconds, below which a query is to be
regarded as being short.
host.schedSQBReservedGraSlots int 10 The number of GRA scheduler slots that are to be reserved
for short queries.
host.schedSQBReservedSnSlots int 6 The number of snippet scheduler slots that are to be reserved
for short queries.
host.schedSQBReservedSnMb int 50 The amount of memory, in MB, that each SPU is to reserve
for short query execution.
host.schedSQBReservedHostMb int 64 The amount of memory, in MB, that the host is to reserve for
short query execution.
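For example, the following sketch changes the short-query threshold until the next
restart by using the same pause, set, and resume sequence that is shown for event
aggregation later in this guide. The credentials and the value 3 are illustrative
only, and some settings might not require a system pause:
nzsystem pause -u admin -pw password
nzsystem set -arg host.schedSQBNominalSecs=3
nzsystem resume -u admin -pw password
To make the change permanent, edit the corresponding line in the system.cfg file
instead, as described above.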
Related reference:
“The nzsystem command” on page A-65
Use the nzsystem command to change the system state, and show and set
configuration information.
You can configure the event manager to continually watch for specific conditions
such as system state changes, hardware restarts, faults, or failures. In addition, the
event manager can watch for conditions such as reaching a certain percentage of
full disk space, queries that have run for longer than expected, and other Netezza
system behaviors.
This section describes how to administer the Netezza system by using event rules
that you create and manage.
To help ease the process of creating event rules, IBM Netezza supplies template
event rules that you can copy and tailor for your system. The template events
define a set of common conditions to monitor with actions that are based on the
type or effect of the condition. The template event rules are not enabled by default,
and you cannot change or delete the template events. You can copy them as starter
rules for more customized rules in your environment.
As a best practice, you can begin by copying and by using the template rules. If
you are familiar with event management and the operational characteristics of
your Netezza appliance, you can also create your own rules to monitor conditions
that are important to you. You can display the template event rules by using the
nzevent show -template command.
Note: Release 5.0.x introduced new template events for the IBM Netezza 100, IBM
Netezza 1000, and later systems. Previous event template rules specific to the
z-series platform do not apply to the new models and were replaced by similar,
new events.
Netezza might add new event types to monitor conditions on the system. These
event types might not be available as templates, which means you must manually
add a rule to enable them. For a description of more event types that can assist
you with monitoring and managing the system, see “Event types reference” on
page 8-36.
The action to take for an event often depends on the type of event (its effect on the
system operations or performance). The following table lists some of the
predefined template events and their corresponding effects and actions.
Table 8-2. Netezza template event rules
Template name: Disk80PercentFull, Disk90PercentFull
Type: hwDiskFull (Notice)
Notify: Admins, DBAs
Severity: Moderate to Serious
Effect: Full disk prevents some operations.
Action: Reclaim space or remove unwanted databases or older data. For more information, see “Disk space threshold notification” on page 8-22.

Template name: HardwareNeedsAttention
Type: hwNeedsAttention
Notify: Admins, NPS
Severity: Moderate
Effect: Possible change or issue that can start to affect performance.
Action: Investigate and identify whether more assistance is required from Support. For more information, see “Hardware needs attention” on page 8-20.

Template name: HardwareRestarted
Type: hwRestarted (Notice)
Notify: Admins, NPS
Severity: Moderate
Effect: Any query or data load in progress is lost.
Action: Investigate whether the cause is hardware or software. Check for SPU cores. For more information, see “Hardware restarted” on page 8-22.

Template name: HardwareServiceRequested
Type: hwServiceRequested (Warning)
Notify: Admins, NPS
Severity: Moderate to Serious
Effect: Any query or work in progress is lost. Disk failures initiate a regeneration.
Action: Contact Netezza. For more information, see “Hardware service requested” on page 8-18.
You can copy, modify, and add events by using the nzevent command or the
NzAdmin interface. You can also generate events to test the conditions and event
notifications that you are configuring. The following sections describe how to
manage events by using the nzevent command. The NzAdmin interface has an
intuitive interface for managing events, including a wizard tool for creating events.
For information about accessing the NzAdmin interface, see “NzAdmin overview”
on page 3-12.
When you copy a template event rule, which is disabled by default, your new rule
is likewise disabled by default. You must enable it by using the -on yes argument.
In addition, if the template rule sends email notifications, you must specify a
destination email address.
The following example copies, renames, and modifies an existing event rule:
nzevent copy -u admin -pw password -name NPSNoLongerOnline -newName
MyModNPSNoLongerOnline -on yes -dst jdoe@company.com -ccDst
tsmith@company.com -callhome yes
When you copy an existing user-defined event rule, your new rule is enabled
automatically if the existing rule is enabled. If the existing rule is disabled, your
new rule is disabled by default. You must enable it by using the -on yes argument.
You must specify a unique name for your new rule; it cannot match the name of
the existing user-defined rule.
Generate an event
You can use the nzevent generate command to trigger an event for the event
manager. If the event matches a current event rule, the system takes the action that
is defined by the event rule.
If the event that you want to generate has a restriction, specify the arguments that
would trigger the restriction by using the -eventArgs option. For example, if a
runaway query event has a restriction that the duration of the query must be
greater than 30 seconds, use a command similar to the following to ensure that a
generated event is triggered:
nzevent generate -eventtype runawayquery -eventArgs ’duration=50’
In this example, the duration meets the event criteria (greater than 30) and the
event is triggered. If you do not specify a value for a restriction argument in the
-eventArgs string, the command uses default values for the arguments. In this
example, duration has a default of 0, so the event would not be triggered since it
did not meet the event criteria.
Adding an event rule consists of two tasks: specifying the event match criteria and
specifying the notification method. These tasks are described in more detail after
the examples.
Note: Although the z-series events are not templates on IBM Netezza 1000 or
N1001 systems, you can add them by using nzevent if you have the syntax that is
documented in the previous releases. However, these events are not supported on
IBM Netezza 1000 or later systems.
To add an event rule that sends an email when the system transitions from the
online state to any other state, enter:
nzevent add -name TheSystemGoingOnline -u admin -pw password
-on yes -eventType sysStateChanged -eventArgsExpr ’$previousState
== online && $currentState != online’ -notifyType email -dst
jdoe@company.com -msg ’NPS system $HOST went from $previousState to
$currentState at $eventTimestamp.’ -bodyText
’$notifyMsg\n\nEvent:\n$eventDetail\nEvent
Rule:\n$eventRuleDetail’
Note: If you are creating event rules on a Windows client system, use double
quotation marks instead of single quotation marks to specify strings.
Related concepts:
“Callhome file” on page 5-19
The event manager generates notifications for all rules that match the criteria, not
just for the first event rule that matches. The following table lists the event types
that you can specify and the arguments and values that are passed with each
event. You can list the defined event types by using the nzevent listEventTypes
command. Some event types are used only on z-series systems such as the
10000-series, 8000z-series, and 5200-series systems.
Table 8-3. Event types
Event type Tag name Possible values
sysStateChanged previousState, currentState, <any system state>, <Event
eventSource Source>
For example, to receive an email when the system is not online, it is not enough to
create an event rule for a sysStateChanged event. Because the sysStateChanged
event recognizes every state transition, you would be notified whenever the state
changes at all, such as from online to paused.
You can add an event args expression to further qualify the event for notification.
If you specify an expression, the system substitutes the event arguments into the
expression before evaluating it. The system uses the result combined with the
event type to determine a match. So, to send an email message when the system is
no longer online, you would use the expression: $previousState == online &&
$currentState != online
Event notifications
When an event occurs, you can have the system send an email or run an external
command. Email can be aggregated whereas commands cannot.
v To specify an email, you must specify a notification type (-notifyType email), a
destination (-dst), a message (-msg), and optionally, a body text (-bodyText), and
the callhome file (-callHome).
You can specify multiple email addresses that are separated by a comma and no
space. For example,
jdoe@company.com,jsmith@company.com,sbrown@company.com
v To specify that you want to run a command, you must specify a notification
type (-notifyType runCmd), a destination (-dst), a message (-msg), and
optionally, a body text (-bodyText), and the callhome file (-callHome).
When you are defining notification fields that are strings (-dst, -ccDst, -msg,
-bodyText), you can use $tag syntax to substitute known system or event values.
Table 8-5 on page 8-13 lists the system-defined tags that are available.
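For example, a runCmd notification can call a site-specific script and pass event
details through the $tag substitution. The following sketch is illustrative only: the
script path is a placeholder for your own handler, and the command to run is
supplied as the destination (-dst) as described above:
nzevent add -name MyStateChangeCmd -u admin -pw password -on yes
-eventType sysStateChanged -notifyType runCmd
-dst '/export/home/nz/event_handler.sh'
-msg 'NPS system $HOST went from $previousState to $currentState at
$eventTimestamp.'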
Related concepts:
“Event email aggregation” on page 8-14
The sendmail.cfg file also contains options that you can use to specify a user
name and password for authentication on the mail server. You can find a copy of
this file in the /nz/data/config directory on the IBM Netezza host.
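For example, the authentication options appear as commented lines in the default
sendmail.cfg file. A sketch of enabling them follows; the user name and password
are placeholders, and the supported login.method values depend on your mail
server:
login.username = mailuser
login.password = mailpassword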
If you specify the email or runCmd arguments, you must enter the destination and
the subject header. You can use all the following arguments with either command,
except the -ccDst argument, which you cannot use with the runCmd. The
following table lists the syntax of the message.
Table 8-6. Notification syntax
Argument: -dst
Description: Your email address
Example: -dst 'jdoe@company.com,bsmith@company.com'
If you set email aggregation and events-per-rule reach the threshold value for the
event rule or the time interval expires, the system aggregates the events and sends
a single email per event rule.
Note: You specify aggregation only for event rules that send email, not for event
rules that run commands.
Related concepts:
“Event notifications” on page 8-12
Related reference:
“Hardware restarted” on page 8-22
If you enable the event rule HardwareRestarted, you receive notifications when
each SPU successfully restarts (after the initial startup). Restarts are usually related
to a software fault, whereas hardware causes can include uncorrectable memory
faults or a failed disk driver interaction.
“Disk space threshold notification” on page 8-22
You can enable event aggregation system-wide and specify the time interval. You
can specify 0 - 86400 seconds. If you specify 0 seconds, there is no aggregation,
even if aggregation is specified on individual events.
Procedure
1. Pause the system: nzsystem pause -u bob -pw 1234 -host nzhost
2. Specify an aggregation interval of 2 minutes (120 seconds): nzsystem set -arg
sysmgr.maxAggregateEventInterval=120
3. Resume the system: nzsystem resume -u bob -pw 1234 -host nzhost
4. Display the aggregation setting: nzsystem showRegistry | grep
maxAggregateEventInterval
The body of the message lists the messages by time, with the earliest events first.
The Reporting Interval indicates whether the notification trigger was the count or
time interval. The Activity Duration indicates the time interval between the first
and last event so that you can determine the granularity of the events.
For example, the following aggregation is for the Memory ECC event:
Subject: NPS nzdev1 : 2 occurrences of Memory ECC Error from 11-Jun-07
18:41:59 PDT over 2 minutes.
You can use the Custom1 and Custom2 event rules to define and generate events
of your own design for conditions that are not already defined as events by the
NPS software. An example of a custom event might be to track the user login
information, but these events can also be used to construct complex events.
If you define a custom event, you must also define a process to trigger the event
using the nzevent generate command. Typically, these events are generated by a
customer-created script which is invoked in response to either existing NPS events
or other conditions that you want to monitor.
Procedure
1. Use the nzevent add command to define a new event type. Custom events are
never based on any existing event types. This example creates three different
custom events. NewRule4 and NewRule5 use the variable eventType to
distinguish between the event types. The NewRule6 event type uses a custom
variable and compares it with the standard event type.
[nz@nzhost ~]$ nzevent add -eventType custom1 -name NewRule4
-notifyType email -dst myemail@company.com -msg "NewRule4 message"
-eventArgsExpr ’$eventType==RandomCustomEvent’
What to do next
Consider creating a script that runs the nzevent generate command as needed
when your custom events occur.
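For example, a minimal wrapper script might raise the custom event whenever
your own monitoring condition occurs. This sketch assumes the NewRule4 rule
shown above; the script name and the condition check are placeholders, and the
eventType argument is passed so that it matches the -eventArgsExpr of the rule:
#!/bin/bash
# my_custom_event.sh - hypothetical trigger for the NewRule4 custom event.
# Call this script from your own monitoring job when the condition occurs.
/nz/kit/bin/nzevent generate -eventType custom1 -eventArgs 'eventType=RandomCustomEvent'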
These events occur when the system is running. The typical states are
v Online
v Pausing Now
v Going Pre-Online
v Resuming
v Going OffLine Now
v Offline (now)
v Initializing
v Stopped
The Failing Back and Synchronizing states apply only to z-series systems.
The following is the syntax for the template event rule NPSNoLongerOnline:
-name NPSNoLongerOnline -on no -eventType sysStateChanged
-eventArgsExpr ’$previousState == online && $currentState != online’
-notifyType email -dst ’you@company.com’ -ccDst ’’ -msg ’NPS system
$HOST went from $previousState to $currentState at $eventTimestamp
$eventSource.’ -bodyText ’$notifyMsg\n\nEvent:\n$eventDetail\n’
-callHome yes -eventAggrCount 1
The valid values for the previousState and currentState arguments are:
initializing pausedNow syncingNow
initialized preOnline syncedNow
offlining preOnlining failingBack
offliningNow resuming failedBack
offline restrictedResuming maintaining
offlineNow stopping maintain
online stoppingNow recovering
restrictedOnline stopped recovered
pausing stoppedNow down
pausingNow syncing unreachable
paused synced badState
For more information about states, see Table 5-4 on page 5-10.
In other cases, such as SPU failures, the system reroutes the work of the failed SPU
to the other available SPUs. The system performance is affected because the
healthy resources take on extra workload. Again, it is critical to obtain service to
replace the faulty component and restore the system to its normal performance.
The errString value contains more information about the sector that had a read
error:
v The md value specifies the RAID device on the SPU that encountered the issue.
v The sector value specifies which sector in the device has the read error.
v The partition type specifies whether the partition is a user data (DATA) or
SYSTEM partition.
v The table value specifies the table ID of the user table that is affected by the bad
sector.
If the system notifies you of a read sector error, contact IBM Netezza Support for
assistance with troubleshooting and resolving the problems.
If you enable the HardwareNeedsAttention event rule, the system generates a
notification when it detects conditions that can lead to problems or that serve as
symptoms of possible hardware failure or performance impacts.
The following table lists the arguments to the HardwareNeedsAttention event rule.
Table 8-9. HardwareNeedsAttention event rule
hwType: The type of hardware affected. Example: spu
hwId: The hardware ID of the component that has a condition to investigate. Example: 1013
location: A string that describes the physical location of the component.
errString: If the failed component is not inventoried, it is specified in this string.
devSerial: The serial number of the component, or Unknown if the component has no serial number. Example: 601S496A2012
The following table lists the arguments to the HardwarePathDown event rule.
Table 8-10. HardwarePathDown event rule
hwType: For a path down event, the SPU that reported the problem. Example: SPU
hwId: The hardware ID of the SPU that loses path connections to disks. Example: 1013
location: A string that describes the physical location of the SPU. Example: First Rack, First SPA, SPU in third slot
errString: If the failed component is not inventoried, it is specified in this string. Example: Disk path event:Spu[1st Rack, 1st SPA, SPU in 5th slot] to Disk [disk hwid=1034 sn="9WK4WX9D00009150ECWM" SPA=1 Parent=1014 Position=12 Address=0x8e92728 ParentEnclPosition=1 Spu=1013] (es=encl1Slot12 dev=sdl major=8 minor=176 status=DOWN)
If you are notified of hardware path down events, contact IBM Netezza Support
and alert them to the path failure or failures. It is important to identify and resolve
the issues that are causing path failures to return the system to optimal
performance as soon as possible.
Message Details
If you receive a path down event, you can obtain more information about the
problems. This information might be helpful when you contact Netezza Support.
To see whether there are current topology issues, use the nzds show -topology
command. The command displays the current topology, and if there are issues, a
WARNING section at the end of the output.
Related concepts:
“System resource balance recovery” on page 5-17
Hardware restarted
If you enable the event rule HardwareRestarted, you receive notifications when
each SPU successfully restarts (after the initial startup). Restarts are usually related
to a software fault, whereas hardware causes can include uncorrectable memory
faults or a failed disk driver interaction.
You can modify the event rule to specify that the system include the device serial
number, its hardware revision, and firmware revision as part of the message,
subject, or both.
The following table describes the arguments to the HardwareRestarted event rule.
Table 8-11. HardwareRestarted event rule
hwType: The type of hardware affected. Example: spu
hwId: The hardware ID of the regen source SPU having the problem. Example: 1013
spaId: The ID of the SPA. Example: A number 1 - 32
spaSlot: The SPA slot number. Example: Usually a slot number from 1 to 13
devSerial: The serial number of the SPU. Example: 601S496A2012
devHwRev: The hardware revision. Example: 7.21496rA2.21091rB1
devFwRev: The firmware revision. Example: 1.36
Related concepts:
“Event email aggregation” on page 8-14
The following table lists the arguments to the DiskSpace event rules.
Table 8-12. DiskSpace event rules
hwType: The type of hardware affected. Example: spu, disk
hwId: The hardware ID of the disk that has the disk space issue. Example: 1013
spaId: The ID of the SPA.
spaSlot: The SPA slot number.
partition: The data slice number. Example: 0,1,2,3
threshold: The threshold value. Example: 75, 80, 85, 90, 95
value: The actual percentage full value. Example: 84
After you enable the event rule, the event manager sends you an email when the
system disk space percentage exceeds the first threshold and is below the next
threshold value. The event manager sends only one event per sampled value.
For example, if you enable the event rule Disk80PercentFull, which specifies
thresholds 80 and 85 percent, the event manager sends you an email when the disk
space is at least 80, but less than 85 percent full. When you receive the email, your
actual disk space might be 84 percent full.
The event manager maintains thresholds for the values 75, 80, 85, 90, and 95. Each
of these values (except for 75) can be in the following states:
Armed
The system has not reached this value.
Disarmed
The system has exceeded this value.
Fired The system has reached this value.
Rearmed
The system has fallen below this value.
Note: If you enable an event rule after the system reached a threshold, you are not
notified that it reached this threshold until you restart the system.
After the IBM Netezza System Manager sends an event for a particular threshold,
it disarms all thresholds at or below that value. (So if 90 is triggered, it does not
trigger again until it is rearmed). The Netezza System Manager rearms all
disarmed higher thresholds when the disk space percentage full value falls below
the previous threshold, which can occur when you delete tables or databases. The
Netezza System Manager arms all thresholds (except 75) when the system starts
up.
Tip: To ensure maximum coverage, enable both event rules Disk80PercentFull and
Disk90PercentFull.
To send an email when the disk is more than 80 percent full, enable the predefined
event rule Disk80PercentFull:
nzevent modify -u admin -pw password -name Disk80PercentFull -on
yes -dst jdoe@company.com
If you receive a diskFull notification from one or two disks, your data might be
unevenly distributed across the data slices (data skew). Data skew can adversely
affect performance for the tables that are involved and for combined workloads.
Tip: Consider aggregating the email messages for this event. Set the aggregation
count to the number of SPUs.
Related concepts:
“Data skew” on page 12-10
“Event email aggregation” on page 8-14
The runaway query timeout is a limit that you can specify system-wide (for all
users), or for specific groups or users. The default query timeout is unlimited for
users and groups, but you can establish query timeout limits by using a system
default setting, or when you create or alter users or groups. The runaway query
timeout limit does not apply to the admin database user.
The following table lists the arguments to the RunAwayQuery event rule. The
arguments are case-sensitive.
Note: Typically you do not aggregate this event because you should consider the
performance impact of each individual runaway query.
When you specify the duration argument in the -eventArgsExpr string, you can
specify an operator such as: ‘==’, ‘!=’, ‘>’, ‘>=’, ‘<’, or ‘<=’ to specify when to send
the event notification. Use the greater-than (or less-than) versions of the operators
to ensure that the expression triggers with a match. For example, to ensure that a
notification event is triggered when the duration of a query exceeds 100 seconds,
specify the -eventArgsExpr as follows:
-eventArgsExpr ’$duration > 100’
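For example, a complete rule that emails you when any query runs longer than
100 seconds might look like the following sketch. The rule name and address are
placeholders, and the event type spelling follows the nzevent generate example
earlier in this chapter:
nzevent add -name MyRunawayQuery -u admin -pw password -on yes
-eventType runawayquery -eventArgsExpr '$duration > 100'
-notifyType email -dst jdoe@company.com
-msg 'NPS system $HOST - runaway query detected, duration: $duration seconds.'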
If a query exceeds its timeout threshold and you added a runaway query rule, the
system sends you an email that informs you how long the query ran. For example:
NPS system alpha - long-running query detected at 07-Nov-03, 15:43:49
EST.
sessionId: 10056
planId: 27
duration: 105 seconds
Related concepts:
“Query timeout limits” on page 11-37
You can place a limit on the amount of time a query is allowed to run before the
system notifies you by using the runaway query event. The event email shows
how long the query has been running, and you can decide whether to terminate
the query.
System state
You can also monitor for events when a system is “stuck” in the Pausing Now
state. The following is the syntax for event rule SystemStuckInState:
-name ’SystemStuckInState’ -on no -eventType systemStuckInState
-eventArgsExpr ’’ -notifyType email -dst ’<your email here>’ -ccDst ’’
-msg ’NPS system $HOST - System Stuck in state $currentState for
$duration seconds’ -bodyText ’The system is stuck in state change.
Contact Netezza support team\nduration: $duration seconds\nCurrent
State: $currentState\nExpected State: $expectedState’ -callHome yes
-eventAggrCount 0
It is important to monitor the transition to or from the Online state because that
transition affects system availability.
IBM Netezza sets the thresholds that are based on analysis of disk drives and their
performance characteristics. If you receive any of these events, contact Netezza
Support and have them determine the state of your disk. Do not aggregate these
events. The templates do not aggregate these events by default.
The following is the syntax for the event rule SCSIPredictiveFailure event:
-name ’SCSIPredictiveFailure’ -on no -eventType scsiPredictiveFailure
-eventArgsExpr ’’ -notifyType email -dst ’you@company.com’ -ccDst ’’
-msg ’NPS system $HOST - SCSI Predictive Failure value exceeded for
disk $diskHwId at $eventTimestamp’ -bodyText
’$notifyMsg\n\nspuHwId:$spuHwId\ndisk
location:$location\nscsiAsc:$scsiAsc\nscsiAscq:$scsiAscq\nfru:$fru\nde
vSerial:$devSerial\ndiskSerial:$diskSerial\ndiskModel:$diskModel\ndisk
Mfg:$diskMfg\nevent source:$eventSource\n’ -callHome no
-eventAggrCount 0
The following table lists the output from the SCSIPredictiveFailure event rule.
Table 8-15. SCSIPredictiveFailure event rule
spuHwId: The hardware ID of the SPU that owns or manages the disk that reported the event.
diskHwId: The hardware ID of the disk. Example: 1013
scsiAsc: The attribute sense code, which is an identifier of the SMART attribute. Example: Vendor specific
scsiAscq: The attribute sense code qualifier of the SMART attribute. Example: Vendor specific
fru: The FRU ID for the disk.
location: The location of the disk.
devSerial: The serial number of the SPU to which the disk is assigned. Example: 601S496A2012
diskSerial: The disk serial number. Example: 7.21496rA2.21091rB1
diskModel: The disk model number.
diskMfg: The disk manufacturer.
Regeneration errors
If the system encounters hardware problems while it attempts to set up or perform
a regeneration, the system triggers a RegenFault event rule.
The following table lists the output from the event rule RegenFault.
Table 8-16. RegenFault event rule
hwIdSpu: The hardware ID of the SPU that owns or manages the problem disk. Example: 1013
hwIdSrc: The hardware ID of the source disk.
locationSrc: The location string of the source disk.
hwIdTgt: The hardware ID of the target spare disk.
locationTgt: The location string of the target disk.
errString: The error string for the regeneration issue.
devSerial: The serial number of the owning or reporting SPU.
Note: If you receive a significant number of disk error messages, contact IBM
Netezza Support to investigate the state of your disks.
If you enable the event rule SCSIDiskError, the system sends you an email message
when it fails a disk.
The following table lists the output from the SCSIDiskError event rule.
Table 8-17. SCSIDiskError event rule
spuHwId: The hardware ID of the SPU that owns or manages the disk or FPGA.
diskHwId: The hardware ID of the disk where the error occurred. Example: 1013
location: The location string for the disk.
errType: The type of error, that is, whether the error is the type failure, failure possible, or failure imminent. Examples: 1 (Failure), 2 (Failure imminent), 3 (Failure possible), 4 (Failure unknown)
errCode: The error code that specifies the cause of the error. Example: 110
In some cases, you might need to replace components such as cooling units (fans,
blowers, or both), or perhaps a SPU.
The following table lists the output from the ThermalFault event rule.
Table 8-18. ThermalFault event rule
hwType: The hardware type where the error occurred. Examples: SPU* or disk enclosure
hwId: The hardware ID of the component where the fault occurred. Example: 1013
label: The label for the temperature sensor. For the IBM Netezza Database Accelerator card, this label is the BIE temperature. For a disk enclosure, it is temp-1-1 for the first temperature sensor on the first enclosure.
location: A string that describes the physical location of the component.
curVal: The current temperature reading for the hardware component.
errString: The error message. Example: The board temperature for the SPU exceeded 45 degrees centigrade.
Attention: Before you power on the machine, check the SPA that reported this
event. You might need to replace one or more SPUs or SFIs.
After you confirm that the temperature within the environment returns to normal,
you can power on the RPCs by using the following command. Make sure that you
are logged in as root or that your account has sudo permissions to run the
following command: /nzlocal/scripts/rpc/spapwr.sh -on all.
History-data events
There are two event notifications that alert you to issues with history-data
monitoring:
histCaptureEvent
A problem prevented the history-data collection process (alcapp) from
writing history-data files to the staging area.
histLoadEvent
A problem prevented the history-data loader process (alcloader) from
loading history data into the history database.
The following table describes the output from the histCaptureEvent rule.
Table 8-20. The histCaptureEvent rule
host: The name of the IBM Netezza system that had the history event. Example: nps1
configName: The name of the active history configuration. Example: fullhist
storageLimit: The storage limit size of the staging area in MB.
loadMinThreshold: The minimum load threshold value in MB.
loadMaxThreshold: The maximum load threshold value in MB.
diskFullThreshold: Reserved for future use.
loadInterval: The load interval timer value in minutes.
nps: The Netezza location of the history database. Example: localhost
The following table describes the output from the histLoadEvent rule.
Table 8-21. The histLoadEvent rule
host: The name of the Netezza system that had the history event. Example: nps1
configName: The name of the active history configuration.
storageLimit: The storage limit size of the staging area in MB.
loadMinThreshold: The minimum load threshold value in MB.
loadMaxThreshold: The maximum load threshold value in MB.
diskFullThreshold: Reserved for future use.
Related concepts:
“History event notifications” on page 14-4
The following table lists the output from the spuCore event rule.
The following table lists the output from the VoltageFault event rule.
Table 8-23. VoltageFault event rule
hwType: The hardware type where the error occurred. Examples: SPU* or disk enclosure
hwId: The hardware ID of the component where the fault occurred. Example: 1013
label: The label for the nominal voltage sensor. For example, voltage-1-1 represents the first voltage sensor in the first disk enclosure. For the Netezza Database Accelerator card, BIE 0.9V is an example for the 0.9V nominal voltage.
location: A string that describes the physical location of the component.
curVolt: The current voltage of the component. This value is a string that also includes the sensor that exceeded the voltage threshold.
errString: More information about the voltage fault; if the problem component is the Netezza Database Accelerator card, it is specified in the string.
txid: 0x4eeba
Session id: 101963
PID: 19760
Database: system
User: admin
Client IP: 127.0.0.1
Client PID: 19759
Transaction start date: 2011-08-30 10:55:08
The number of transaction objects in use can drop by the completion of active
transactions, but if the problem relates to older transactions that have not been
cleaned up, you can abort the oldest session. In addition, you can use the
nzsession -activeTxn command to identify the active transactions. You can
identify and abort the older transactions as necessary to free the transaction
objects. (You can also stop and restart the IBM Netezza software to clean up the
transactions.)
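For example, the following sketch lists the active transactions and then aborts one
of the older sessions; the session ID is illustrative only, and the abort option
syntax might differ slightly in your release (see the nzsession command reference):
nzsession -activeTxn
nzsession abort -u admin -pw password -id 101963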
Note: The notification repeats every three hours if the object count remains above
90 percent, or when the object count drops below 85 percent but later reaches
59,000 again.
The following table lists the output of the transaction limit event.
Table 8-24. TransactionLimitEvent rule
Argument: curNumTX
Description: The current number of transaction objects that are in use.
Availability event
If a device is unavailable, the most common reasons are that the device is no
longer operating normally and has transitioned to the Down state, it was powered
off, or it was removed from the system. Investigate to determine the cause of the
availability issue and take steps to replace the device or correct the problem.
Reachability event
A device is unreachable when it does not respond to a status request from its
device manager. A device can be unreachable for a short time because it is busy
and cannot respond in time to the status request, or there might be congestion on
the internal network of the system that delays the status response. The system
manager now detects extended periods when a device is unreachable and logs an
event to notify you of the problem. The sysmgr.reachabilityAlertTime setting
specifies how long the device manager waits for status before it declares a device
to be unreachable. The default value is 300 seconds. When the timeout expires, the
system manager raises a HW_NEEDS_ATTENTION event to notify you of the
problem.
If a device is unreachable, the most common reasons are that the device is busy
and cannot respond to status requests, or there might be a problem with the
device. If the device is temporarily busy, the problem usually clears when the
device can respond to a status request.
NPS system nzhost - Topology imbalance event has been recorded at 15-
Jul-12, 08:36:07 EDT System initiated.
For systems that use an older topology configuration, you could encounter
situations where the event is triggered frequently but for a known situation. In that
event, you can disable the event by setting the following registry value. You must
pause the system, set the variable, and then resume the system.
[nz@nzhost ~]$ nzsystem set -arg
sysmgr.enableTopologyImbalanceEvent=false
The Network Interface State Change event sends an email notification when the
state of a network interface on a SPU has changed.
The new event is not available as an event template in Release 5.0.x. You must add
the event by using the following command:
[nz@nzhost ~]$ nzevent add -name SpuNetIfChanged -eventType
nwIfChanged -notifyType email -msg ’A network interface on a SPU has
changed states.’ -dst <your email here>
The numCpuCoreChanged event notifies you when a SPU CPU core goes offline
and the SPU is operating at a reduced performance. You can add the event by
using a command similar to the following:
nzevent add -name SpuCpuCoreChangeEvent -eventType numCpuCoreChanged
-notifyType email -msg "Num Core Changed" -dst <email_id> -bodyText
’\n Hardware id = $hwId\n Location = $location\n Current number of
cores = $currNumCore\n Changed number of cores = $changedNumCore’
If a SPU has a core failure, the system manager also fails over that SPU.
Display alerts
If the NzAdmin tool detects an alert, it displays the Alert entry in the navigation
list. The NzAdmin tool displays each error in the list and indicates the associated
component. The Component, Status, and other columns provide more information.
For the hardware alerts, the alert color indicator takes on the color of the related
component. If, however, the component is green, the NzAdmin tool sets the alert
color to yellow.
v To view the alerts list, click the Alerts entry in the left pane.
v To get more information about an alert, double-click an entry or right-click and
select Status to display the corresponding component status window.
v To refresh alerts, select View > Refresh or click the refresh icon on the toolbar.
The callhome service is an optional feature that improves and simplifies the work
to report the most common types of problems that could occur on the appliance. If
you enable callhome, the service watches your system for issues such as failed
SPUs/S-Blades or disks, host failovers, unexpected system state changes, or SPU
core files. When it detects a problem, the callhome service automatically gathers
the basic information for the problem, and contacts IBM Support with information
about the problem.
The callhome service contacts the automated IBM Support servers, which open a
PMR for your system that is then assigned to a Support engineer for investigation.
The PMR is identical to a PMR that you create when you report problems for your
Netezza appliance. The callhome service ensures that the PMR contains the
required information about the system and problem. You can review and update
the PMR with more information and follow its progress as you would for any
PMR that you create. The customer contacts that you designate also receive an
email with the PMR number.
The log files and diagnostics are the same files that IBM Support engineers request
when first addressing an issue. Since the callhome service includes these files in
the automated notification, the PMR moves more quickly to analysis and
resolution. If necessary, the Support engineer could contact your site administrators
for more information as needed, but the initial information is already part of the
callhome problem report.
Note: Callhome does not support the IBM Netezza 100 (Skimmer®) models, the
IBM Netezza High Capacity Appliance C1000 appliances, or the IBM Netezza
Platform Development Software installations.
The callhome service requires the IBM PureData System for Analytics appliance to
have Hypertext Transfer Protocol Secure (HTTPS)/Simple Object Access Protocol
(SOAP) access and Simple Mail Transfer Protocol (SMTP) access in the data center.
If you do not allow HTTPS/SOAP connections from the data center, you can
configure callhome to use SMTP notifications to report events to the IBM Support
servers. If neither HTTPS/SOAP nor SMTP access is available or supported in
your data center, you cannot use the callhome service.
Important: The callhome service is not required. If you do not enable callhome,
you can still open PMRs by using the standard IBM Support processes. If you
choose to enable callhome, you can configure and enable the service yourself, or
you can obtain assistance from IBM Support to configure the callhome service on
your system.
Set up callHome.txt
You use the callHome.txt file to specify important contact and service information
about your IBM Netezza system.
Make sure that you gathered the required information for your system including
model information, administrative contacts, and system location information.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Change to the /nz/data/config directory. Use caution when working in the
/nz/data directory or its subdirectories. Any unintentional changes or deletions
in this directory area can impact the performance and data on your system.
3. Make a backup copy of the callHome.txt file using the following command:
cp callHome.txt callHome.txt.bk
4. Using a text editor such as vi, edit the callHome.txt file.
A sample of a callHome.txt file follows. Your file will also have some content
at the bottom to show update history.
# callHome.txt
#
# Call home attachment file containing installation-specific attributes.
customer.company =
customer.address1 =
customer.address2 =
customer.address3 =
customer.ICN =
contact2.name =
contact2.phone =
contact2.email =
contact2.cell =
contact2.events =
Important: To enable callhome support, you must specify values for the
customer.company, customer.address1, customer.address2, and customer.ICN
fields; at least one complete customer contact block with name, phone, email,
and an event option; and the system block including model, MTM, serial, and
Example
contact2.name =
contact2.phone =
contact2.email =
contact2.cell =
contact2.events = ALL
What to do next
As a best practice, you can confirm that the changes to the callHome.txt file are
correct and complete by using the nzcallhome -validateConfig command. The
command checks the callHome.txt file and logs any problems, such as fields that
are incorrectly specified or required fields that are missing. You can then correct
any problems in the file, save the file, and run the validation command again. For
example, if the configuration file is complete, the command logs the following
message:
[nz@nz10065 ~]$ nzcallhome -validateConfig -debug
CallHome configuration is good.
If the validation process finds an issue in the callHome.txt file, it logs an error
message. For example:
[nz@nz10065 ~]$ nzcallhome -validateConfig -debug
Tue Aug 19 10:41:58 EDT 2014 INFO: NzCallHome called with args: -validateConfig
Tue Aug 19 10:41:58 EDT 2014 SEVERE: The required field "system.MTM" is missing
or not configured with a
valid machine type and model. Valid format should be "MACH/MOD".
Within the callHome.txt file, you can define optional custom.field1 and
custom.field2 values to create unique strings within the callhome notification
emails. These fields can help users to filter the notification emails using email rules
that they have defined, or to include more easily identifiable information in the
email subject, such as a hostname for the source IBM Netezza system that reported
the problem.
You can specify up to 10 characters in the custom fields. Insert the fields after the
system.* fields and before the comment history at the bottom of the file. An
example of a callHome.txt file with the custom fields follows:
system.description = Acme Analytics System
system.location = 1st Floor Data Center, row 2, rack 5
system.model = N2001-010
system.MTM = 3565/EG1
system.serial = NZ30123
system.CC = US
system.modemNumber =
custom.field1 = nz80533-h1
custom.field2 = Shared
If you do not define custom fields, the email subject includes the first 10 characters
of the hostname and system.description field, if defined, from the callHome.txt
file.
Make sure that you have the required information for your email server and the
source email address. You may need to contact your data center IT team to obtain
the email server information for your environment. Most data centers have strict
policies about email access and use. If your environment has policies that prevent
email from the data center, you may not be able to use the callhome service.
Contact IBM Netezza Support for more information about the email policies and
options.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Change to the /nz/data/config directory. Use caution when working in the
/nz/data directory or its subdirectories. Any unintentional changes or deletions
in this directory area can impact the performance and data on your system.
3. Using a text editor such as vi, edit the sendMail.cfg file.
A sample of a default, empty sendMail.cfg file follows. Your file could have
some content already if it was previously updated by IBM installers or Support,
or an administrator at your site.
# sendMail.cfg
#
# Configuration parameters for sendMail program.
mailServer.name = mail1.netezza.com
mailServer.port = 25
#login.username =
#login.password =
#login.method =
# Sender information
# Other
# Note: Valid separators between multiple mail addresses are ’,’ or ’;’
cc =
4. Type the information for your mailServer.name and mailServer.port, and the
sender.name and sender.address. For the sender.name, specify a unique client
or system name to clearly identify the email that is sent from your system. You
could include the system hostname or other unique client name. Do not include
any punctuation marks or special characters in the sender.name because they
could be interpreted incorrectly by the mail handler. The sender.address value
must point to a valid internet domain. The sender might not exist, but the
internet domain in the address (for example, mycomp.com) must be reachable
through SMTP on the internet. The address cannot be an internal (private)
domain within the customer intranet. Internal addresses that are not reachable
by SMTP could be rejected by spam filters.
5. Save your changes to the sendMail.cfg file and close the file.
Results
The callhome service now has a configured email server and information for
sending the notification emails.
Example
mailServer.name = mymailsrv.corp.com
mailServer.port = 25
# Sender information
The HTTPS/SOAP method for callhome also sends email to your customer
contacts when a PMR has been created. Make sure that you consult with your IT
security and networking team about the corporate firewall support for the
outbound SMTP and port 443 HTTPS traffic. If there are questions about the
callhome support, or if the callhome test messages do not appear to be working,
contact IBM Support for assistance.
Before you enable the callhome service, make sure that the callHome.txt file and
sendMail.cfg file are configured with your system, contact, and email information.
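You then enable the service by using the nzcallhome -on command, as the later
troubleshooting output in this chapter also indicates. A minimal sketch, run as the
nz user on the active host:
[nz@nzhost ~]$ nzcallhome -on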
Results
The command checks the callHome.txt file to verify that the required information
is complete and then enables the callhome service. If there are errors or missing
information in the callHome.txt file, the command logs error messages to the
/nz/kit/log/nzCallHome/nzCallHome.0.log that you can use to fix the file and then
re-try the -on command. For example, if the callHome.txt file does not include a
complete customer contact block, the command logs the following error:
A completely configured customer contact block was not found. At least one contact
needs to be configured with phone, email and one of the event levels to notify that
contact on - ( Hardware | DBA | ALL ).
The next step is to configure the Netezza event rules used for the callhome
monitoring. If the callhome event rules are already present on the system and
enabled, the monitoring processes are activated.
The topics in this section frequently use the -debug option to show more verbose
output messages.
After callhome is running and operating as expected, you can disable the extra
debug logging information if you no longer require the details. To disable the
debug mode, use the nzcallhome -disableDebug command.
The callhome service contacts the automated IBM Support servers, which open a
PMR for your system that is then assigned to a Support engineer for investigation.
To activate the PMR notifications, you must enable the service using the
nzcallhome -enablePmr command.
By default, callhome uses the HTTPS/SOAP protocol to contact the servers and
open the PMR. If you cannot or do not want to use the HTTPS/SOAP protocol to
open PMRs, you must run an additional nzcallhome -enableEmailOnly command
to configure callhome to open the PMR via email notifications. For more
information, see “Enable email-only PMR notification” on page 9-11.
Note: When you use the nzcallhome -enablePmr command, the callhome service
configures and enables the callhome event rules and then enables the PMR
notification feature. When you use the -disablePmr option, the callhome service
removes the callhome event rules.
The callhome service can collect and report health and status of the IBM Netezza
system and send it to IBM Support. The information consists of non-confidential
details such as the following:
v Performance status
v Capacity status
v Disk duty cycles
v I/O rates and capacities
v Client success metrics such as the number of queries completed
To activate the status reporting feature, you must enable the service using the
nzcallhome -enableStatus command. You can disable the service using nzcallhome
-disableStatus.
Note: If you have enabled status reporting and your IBM Netezza system is
running the System Health Check v2.3 or later software, the health check services
automatically collect and report status on a daily basis.
To activate the inventory reporting feature, you must enable the service using the
nzcallhome -enableInventory command. You can disable the service using
nzcallhome -disableInventory.
Note: If you have enabled inventory reporting and your IBM Netezza system is
running the System Health Check v2.3 or later software, the health check services
automatically collect and report the inventory on a weekly basis.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Run the following command to display the callhome status.
[nz@nzhost nz]$ nzcallhome -status -debug
NzCallHome: operational state is ’enabled’
NzCallHome and in debug mode
NzCallHome: PMR generation enabled
NzCallHome: Status generation enabled
NzCallHome: Inventory generation enabled
Results
When you configure the callhome monitoring rules, the command creates the
following rules in the event manager. When a problem condition triggers the rule,
the rule calls the nzcallhome command to send the automatic notifications to IBM
Support.
Table 9-2. Callhome event rules
Event rule name Description
diskMonitorPredictive The Disk Monitor Predictive event is a custom event that
reports issues found in disks based on the predictive
monitoring rules in System Health Check. These errors
usually indicate a disk that is failing or requires investigation
or replacement.
hwNeedsAttentionAuto The Hardware Needs Attention event reports problems that
include replacement disks with invalid firmware, storage
configuration changes, unavailable/unreachable components,
disks that reach a grown defects early warning threshold,
Ethernet switch ports that are down, and other conditions
that are early warnings of problems that affect system
behavior or the ability to manage devices within the system.
hwPathDownAuto The Path Down event reports situations where the path
between an S-Blade/SPU and its disks has failed. Failed
paths adversely affect system and query performance
because the storage processing workloads are not balanced
within the system. It is important to identify and resolve the
issues that are causing path failures to return the system to
optimal performance as soon as possible.
hwServiceRequestedAuto The Service Requested event triggers when a hardware
component fails so that IBM Support can notify service
technicians that can replace or repair the component. Many
components are redundant, so a failure typically activates a
spare component or redistributes work to the remaining
healthy components. It is important to replace a failed
component quickly so that you restore normal operation and
performance, and so that the system has its full complement
of spares and component redundancy.
hwVoltageFaultAuto The Voltage Fault event monitors the voltages and power for
the SPUs and disk enclosures, and reports cases when the
voltage is outside the normal operating range.
regenFaultAuto The Regen Fault event reports hardware problems that occur
when the system is setting up or performing a disk
regeneration. This event is an important warning because the
system could have a situation where user data is stored on
only one disk and there is no redundant copy of the data. If
the one primary disk is lost to failure, the user data stored
on that disk could be lost.
The severity is the expected problem severity level for this type of condition. If the
Distributed Replicated Block Device (DRBD) cluster status indicates that the
Netezza system is not in a healthy cluster mode, the severity level increases to 2
because the Netezza appliance could go offline if it encounters any problem
condition that requires a host failover. In addition, if the current system state is
Down at the time of the failure, the severity increases to 1 because the system is
offline.
You can filter the events that trigger a callhome notification by defining rules in
the /nz/data/config/eventBlacklist.txt file. The filter rules define event
conditions that should not be processed by callhome, which can help to reduce
undesired notifications to IBM Support and to your configured customer contacts
list.
When a callhome event rule is triggered, the callhome service checks the
eventBlacklist.txt file to verify that the event is a problem condition that should
be processed and sent as a notification to IBM Support. You can create filter rules
that use any of the following formats:
v rule1.eventType = spuCore
v rule1.hwType = SPU
v rule1.errString = my error string
For the eventType filter, specify one of the nzEvent types which are configured for
Call Home such as hwNeedsAttention, hwServiceRequested, hwPathDown,
hwVoltageFault, regenFault, scsiDiskError, scsiPredictiveFailure, spuCore,
sysStateChanged, or topologyImbalance.
For the hwType filter, you can specify any of the hardware type values. To list the
hardware type values, use the nzhw listTypes command.
For the errString value, you can specify a unique string as it would appear in the
errString output of the event rule.
To create the blocking list of filters, define the rules in a text file and save the file
as /nz/data/config/eventBlacklist.txt. After you create the file, callhome checks
the filter rules before it processes notifications for any of the events that trigger on
the appliance.
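For example, a small eventBlacklist.txt file might combine two of the rule formats
shown above. This is a sketch only; the rule1 and rule2 numbering follows the
format of the examples above, and the values are illustrative:
# /nz/data/config/eventBlacklist.txt
rule1.eventType = spuCore
rule2.hwType = SPU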
If callhome ignores an event because of a matching filter, the callhome service logs
a message in the /nz/kit/log/nzCallHome/nzCallHome.0.log file to capture
information about the event that was skipped. You can edit the filter rules at any
time, and you can rename or delete the eventBlacklist.txt file to stop applying
all of the defined filter rules during callhome processing.
When one of the problem conditions that callhome monitors is triggered, the
callhome service gathers the following types of information from the appliance.
The information is compressed into a .tgz file that has the name convention
The callhome service must be enabled to run the command, and PMR service must
also be enabled. The event rules must be configured and enabled to test the event
notification process. If the event rules are not configured or enabled, the command
runs and displays event information, but notifications are not sent for the tests.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Run the nzcallhome -generatePmr <options> command.
If you specify -generatePmr without any options, the command triggers a test
of all the monitored event rules. You can specify one or more of the following
options in a space-separated list:
diskMon
Generate disk_monitorPredictive events.
clusterSsc
Generate Cluster Fail Over event from a sysStateChanged event.
scsiDISKA
Generate scsiDiskError for DISK 'A'
spuCore
Generate spuCore (will evaluate last core present, if any.)
scsiPredDISKB
Generate scsiPredictiveFailure for DISK 'B'
hwVoltSPU
Generate hwVoltageFault for SPU.
Results
The command triggers the requested event or events as a test. The command
requires a minute or two to complete, especially if you supply more than one
event rule to generate as tests.
If callhome is not enabled when you run the -generatePmr command, the
command displays the following message:
NzCallHome: operational state is disabled.
Run nzcallhome -on to enable.
If the PMR service is not enabled when you run the -generatePmr command, the
command displays the following message:
PMR generation is disabled.
Example
The following example command triggers a test SPU hardware voltage event.
[nz@nzhost nz]$ nzcallhome -generatePmr hwVoltSPU
invoked [HwVoltageSpu: /nz/kit/bin/nzevent generate -eventType hwVoltageFault
-eventArgs label=AC/IN,hwType=spu,hwId=1017,location=spa1.spu1,devSerial=8273649817,
curVolt=104,errString=’THIS_is_ONLY_a_TEST.’]
The following example command triggers a test SCSI disk and hardware disk
event.
[nz@nzhost nz]$ nzcallhome -generatePmr scsiDISKA hwDISKB
invoked [ScsiDiskError: /nz/kit/bin/nzevent generate -eventType scsiDiskError
-eventArgs spuHwId=1017,diskHwId=1022,location=,errType=5,errCode=69,oper=4,
dataPartition=0,lba=1234124,tableId=102456,dataSliceId=23,block=12345,fpgaEngineId=4,
fpgaBoardSerial=98234fwe133,devSerial=ThisIsOnlyATEST,diskModel=someModel,
diskMfg=IBM-EXSX,errString=’THIS_is_ONLY_a_TEST.’]
invoked [HwDISKB: /nz/kit/bin/nzevent generate -eventType hwServiceRequested
-eventArgs hwType=disk,hwId=1023,location=spa1.diskEncl1.disk3,devSerial=
ThisIsOnlyATEST,errString=THIS_is_ONLY_a_TEST]
See “Sample callhome email” on page 9-17 for an example of the email that is
created when you generate events for an enabled event rule.
See “Callhome processing verification (email only)” on page 9-17 for more
information about confirming that the callhome event completed successfully and
how to troubleshoot problems.
To test the call home processing, generate a test PMR to verify that the services are
working:
[nz@nzhost nz]$ nzcallhome -generatePmr hwSPU
If call home is working and the connections to the IBM servers are open, you
should see a message similar to the following in the log file:
Tue Feb 24 09:57:17 EST 2015 INFO: srid = xxxxxyyyzzz
Tue Feb 24 09:57:49 EST 2015 INFO: The NzCallHome forensics and report generation
has completed
The xxxxxyyyzzz value is the number of the PMR (service request) created by the
command.
When the callhome service detects a condition that triggers a monitored event, the
callhome feature sends an email to IBM Support and to the contacts listed in callHome.txt.
After the PMR is opened, the IBM Support server sends an automatic email to
your designated administrators in callHome.txt. You can use the emails for the
problem report and the resulting PMR number as indicators that the callhome
processes are reporting problems detected on your system.
If you are enabling the callhome feature, but you are not a member of that
callHome.txt notification list, it might be difficult to confirm that the email
notifications are working as expected. Follow these steps to review the common
error logs for indications of possible problems. If you do not see any issues in the
log files, the callhome notification process completed successfully. Otherwise, use
the messages to identify the problems and the troubleshooting steps.
v Review the sendMail logs in the /nz/kit/log/sendMail directory that are time
stamped with the time of the generate command. Confirm that the log file
messages indicate that the email was sent.
v Review the nzOpenPmr log in the /nz/kit/log/nzOpenPmr directory. Verify that
there are no errors in the log file messages.
v As the root user, confirm that there are no related errors or issues posted to
/var/log/messages file.
A sample email follows. The sample was edited to be generic for the
documentation.
Appliance Location
----------------------------------------------------------------------
ICN: 1234567 CC US
Company: Acme Corporation
Address: 26 Main St
Marlborough, MA
Appliance Identification
----------------------------------------------------------------------
MTM: 3563/GDO N1001-020 IBM PureData System for Analytics N1001-020
Serial#: NZ8012345
NzId: NZ8012345
HostName: nzhost.acme.com
Use: Acme Analytics System
Location: 1st Floor Data Center, last row rack
Appliance Status
----------------------------------------------------------------------
Nz State: online
Up Time: 22:38:1 sec
Host DRBD: 0 issues (of 2)
Hardware 0 issues:
Compliment SPAs 4, SPUs 24, Disks 192
DataSlices: 0 issues (of 184)
Topology: ok
Regens: 0 issues
Incident:
09:44:16 est\neventargs: hwid = 1017
coreData = HASH(0x1a310a0)
crash-version = 4.1.2-8.el5 WARNING!
detailtext = eventType:_spuCore
eventTimestamp:_04-Dec-13
errstring = THIS_is_ONLY_a_TEST.
eventts = 04-Dec-13
Context:
eventmgrlines = 3
regenStatus = 0
sysmgrlines = 13
The callhome service must be enabled to run these commands, and the inventory
or status reporting services must also be enabled. The following example
commands generate test reports:
[nz@nzhost nz]$ nzcallhome -generateInventory
[nz@nzhost nz]$ nzcallhome -generateStatus
If you want to generate sample reports for your review, but do not want to send
the reports to IBM Support, you can use the additional -dataInspection option to
skip the transmission of the report. For example:
[nz@nzhost nz]$ nzcallhome -generateInventory -dataInspection -debug
The request is sent to IBM Support and directed to the team that schedules
software upgrades. The request includes basic information about your appliance
that Support typically requests for upgrade services.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Run the nzcallhome command with the -requestUpgrade option.
[nz@nzhost nz]$ nzcallhome -requestUpgrade
Typically, you disable the callhome service only during software upgrades, service
operations, or troubleshooting tasks, when it is possible that system state changes
could trigger unnecessary event notifications.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Run the nzcallhome command with the -off option.
[nz@nzhost nz]$ nzcallhome -off -debug
NzCallHome: disabled operation
Results
The callhome service is stopped, which means that the monitored problem
conditions no longer trigger automatic notifications to IBM Support. When you
complete the upgrades or service steps, re-enable the callhome service by using
the nzcallhome -on command to resume normal monitoring and automatic
notifications for problems.
When you disable an event rule, the system does not raise or trigger that event
when the associated condition occurs. You typically disable a rule when you are
debugging system conditions that might cause events to trigger as false alarms and
you want to temporarily stop any notifications. The -disable option turns off all of
the callhome-related events. You could also use the nzevent command to turn off a
specific event rule that you are testing or for which you want to stop notifications.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Run the nzcallhome command with the -disable option.
[nz@nzhost nz]$ nzcallhome -disable -debug
Disable nzevent rule: /nz/kit/bin/nzevent add -on no -name disk_monitorPredictiveAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name hwNeedsAttentionAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name hwServiceRequestedAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name sysStateChangedAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name regenFaultAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name scsiDiskErrorAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name scsiPredictiveFailureAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name spuCoreAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name hwVoltageFaultAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name topologyImbalanceAuto
Disable nzevent rule: /nz/kit/bin/nzevent modify -on no -name hwPathDownAuto
disabled 11 nzevents.
The command disables or turns off all of the callhome event rules, but the rules are
still defined in the event rule tables. If any of the event conditions occur or if you
generate an event for testing using nzcallhome -generate, the callhome service
does not take any action to collect information about the problem and open a PMR
with the IBM Netezza Support server. After your testing is complete, you can turn
on the event rules using the nzcallhome -enable command.
The callhome service must be enabled before you can remove the event rules. See
“Enable the callhome service” on page 9-8.
Procedure
1. Log in to the active host of the IBM Netezza system as the nz user.
2. Run the nzcallhome -remove command.
[nz@nzhost nz]$ nzcallhome -remove -debug
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name disk_monitorPredictiveAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name hwNeedsAttentionAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name hwServiceRequestedAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name sysStateChangedAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name regenFaultAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name scsiDiskErrorAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name scsiPredictiveFailureAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name spuCoreAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name hwVoltageFaultAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name topologyImbalanceAuto
Remove nzevent rule: /nz/kit/bin/nzevent delete -force -name hwPathDownAuto
remove 11 nzevents.
Results
The command deletes the event rules that were added to monitor the callhome
problem conditions. After the events are removed, the callhome service is not
monitoring any event conditions.
Even if there are no callhome event rules defined when you run the -remove
command, the command displays the same messages in debug mode. To verify
that there are no callhome event rules configured on your system, you can use a
command such as nzevent | grep nzcall to list any callhome-related events. The
command should not return any results after the -remove operation.
If callhome is not enabled when you run the -remove command, the command
displays the following message:
NzCallHome called with args: -remove -debug
NzCallHome: operational state is disabled.
Run nzcallhome -on to enable.
Your IBM Netezza appliance must have the correct revision of the Red Hat
operating system with the additional required libraries and RPMs, and the system
must be running NPS Release 7.1 or later.
v For an IBM PureData System for Analytics N3001 rack appliance (models
N3001-002 through N3001-080), the system must be running NPS release
7.2.0.0-P1 or later, Red Hat 6.5 and Host Management 5.4 or later, which enables
support for the cryptographic libraries and runtimes.
v For an IBM PureData System for Analytics N3001-001 appliance, the system
must be running NPS release 7.2.0.1 or later, Red Hat 6.5 and Host Management
5.4.1 or later, which enables support for the cryptographic libraries and
runtimes.
v For an IBM PureData System for Analytics N200x appliance, the system must be
running Red Hat 6.4 and Host Management 5.2 or later, which enables support
for the cryptographic libraries and runtimes.
v For an IBM PureData System for Analytics N100x appliance (or the earlier
models that are called IBM Netezza 1000 or Netezza TwinFin), the system must
be running Red Hat 5.9 and Host Management 5.2 or later, which enables
support for the cryptographic libraries and runtimes.
Note: The SP 800-131a cryptographic support is not offered on other IBM Netezza
appliance models.
If your system is not running the minimum levels of the NPS, Red Hat, or Host
Management software, contact IBM Netezza Support for more information about
upgrading your system.
Client Prerequisites
If you install and use the NPS client packages to connect to IBM Netezza
appliances that are configured to support the SP 800-131a cryptography standards,
note the following requirements:
v If you use JDBC to access the IBM Netezza appliance, your JDBC client must
have Java Runtime Environment v1.7 or later, which includes SP 800-131a
cryptography support.
As a best practice, upgrade your client systems to the latest NPS client packages.
Older clients can connect to the IBM Netezza SP 800-131a compliant host using any
security level setting except onlySecured. The preferredUnSecured,
preferredSecured, or onlyUnSecured security levels are supported. If the IBM
Netezza appliance is not using SP 800-131a support, all four levels are supported.
The nzconfigcrypto -enable command checks the system to ensure that it meets
the software and operating system prerequisites to support the SP 800-131a
changes, as described in Chapter 10, “Enhanced cryptography support,” on page
10-1. If the prerequisites are complete, the command does the following tasks:
v Sets the enable_crypto_std_v1 postgresql.conf registry setting to true to enable
support for enhanced cryptography.
After you run the nzconfigcrypto command, you must stop and restart the NPS
software using the nzstop and nzstart commands to activate the SP 800-131a
compliant operation.
Your system must be running the required levels of the NPS, Red Hat, and Host
Management releases, and the NPS software must be started. See Chapter 10,
“Enhanced cryptography support,” on page 10-1 for more information. The
nzconfigcrypto script fails if these prerequisites are not met. When you run the
command, you must specify an existing host key of type AES-256 as input to the
command. If the system default host key is already an AES-256 key, you can
specify that key name. For more information about creating and setting a host key,
see the IBM Netezza Advanced Security Administrator's Guide.
Procedure
1. Log in to the active host of the Netezza system as the nz user. In these
examples, the active host name is nzhost1.
2. Run the nzconfigcrypto -enable command and specify the host key name. The
host key must already be defined in your NPS system and must be of type
AES-256. An example command follows:
[nz@nzhost1 ~]$ nzconfigcrypto -HK ks1.key1 -enable
Checking support for crypto standard in NPS
Checking support for crypto standard in OS
Checking for required library
All required libraries found installed
Checking NPS system state
Checking and updating Host Key
Host Key already set
Checking and updating LDAP connection
No LDAP configuration found
Checking and updating Kerberos connection
No Kerberos configuration found
Checking and updating Authentication type
Checking and updating Audit History Configuration
No audit history configuration found
Checking and updating postgresql.conf file
WARNING:
Kerberos conformance with SP800-131a cannot be controlled by the NPS.
Verify that the Kerberos netezza principal will use only the des3-cbc-sha1,
aes128-cts-hmac-sha1-96, or aes256-cts-hmac-sha1-96 encryption types.
This must be configured on your Kerberos KDC.
3. Stop and restart the NPS software by using the nzstop and nzstart
commands.
4. After the NPS software starts, type the nzsql command to log in to the system
database as the admin user:
[nz@nzhost1 ~]$ nzsql
Welcome to nzsql, the IBM Netezza SQL interactive terminal.
SYSTEM.ADMIN(ADMIN)=>
5. Confirm that the host key is now set to the stronger key that you specified in
the nzconfigcrypto command:
SYSTEM.ADMIN(ADMIN)=> SHOW SYSTEM DEFAULT HOSTKEY;
NOTICE: ’HOST KEY’ = ’KS1.KEY1’
SHOW VARIABLE
6. If you use LDAP authentication for your database user accounts, type the
following command to restore the LDAP configuration with the enhanced
cryptographic support:
SYSTEM.ADMIN(ADMIN)=> SET AUTHENTICATION ldap ssl 'on' attrname 'cn'
base 'dc=netezza,dc=com' namecase 'lowercase' server 'yourldapsvr.company.com'
version '3'
7. If you use Kerberos authentication for your database user accounts, type the
command that was displayed in the message output from the nzconfigcrypto
-enable command earlier in this procedure to enable the Kerberos configuration:
SYSTEM.ADMIN(ADMIN)=> SET AUTHENTICATION kerberos kdc 'mykdc.com' realm 'MYREALM.COM';
Updating /nz/data.1.0/config/krb5.conf and other files.
Re-log-in or open a new shell for changes to take effect.
SET VARIABLE
8. If you had an audit history configuration that was disabled by the script, you
can update the history configuration to digitally sign it using a
DSA_KEYPAIR_2048 key as in the following sample configuration named
audit1:
SYSTEM.ADMIN(ADMIN)=> ALTER HISTORY CONFIG audit1 KEY ks1.seckey;
After you alter the audit configuration, set it to be the current configuration to
enable that history collection, and then stop and restart the NPS software by
using the nzstop and nzstart commands to fully enable the audit configuration.
Results
The nzconfigcrypto -enable command verifies that the system can support
enhanced cryptography and enables the SP 800-131a support on the Netezza
appliance. The command creates a log file named /tmp/crypto_date_time.log to
capture the messages and information for later review and troubleshooting.
IBM Netezza appliances do not allow connections that use the MD5 or CRYPT
authentication types. You must drop these non-compliant connections and redefine
them to use SHA256 authentication.
Results
After you update your connections, you can enable the crypto support with the
nzconfigcrypto command and start the NPS software by using the nzstart
command. If you have any MD5 or CRYPT connections defined, the nzstart
command fails. You must disable crypto support, use the procedure in this topic
to update your connections, re-enable crypto support, and then restart the NPS
software.
Typically, you do not disable cryptography support unless you are evaluating and
testing the enhanced cryptography support and decide not to use it, or you are
troubleshooting a configuration problem that is preventing you from starting the
NPS software. You can disable cryptography support temporarily to start the NPS
software, investigate and debug the problem, and then re-enable the cryptography
support and restart the NPS software.
Procedure
1. Log in to the Netezza appliance as the nz user.
2. Run the following command:
[nz@nzhost1 ~]$ nzconfigcrypto -disable
Checking support for crypto standard in NPS
Checking and updating postgresql.conf file
Successfully updated parameter enable_crypto_std_v1
Results
Optionally, you can verify that the support is disabled by running the following
command to confirm that the variable has a value of false:
[nz@nzhost1 ~]$ grep crypto /nz/data/postgresql.conf
# enable (crypto) keys
enable_crypto_std_v1 = false
If the value is true, run the nzconfigcrypto -disable command again to disable
the support.
You should verify that the host key value for your system is still correct. The
enhanced crypto support is disabled, but the command does not change the host
key from its current value, which is the AES_256 key that you used for the
enhanced crypto support. To display the current key, connect to a database and run
the following command:
NEWDB.MYUSR(MYUSR)=> SHOW SYSTEM DEFAULT HOSTKEY;
NOTICE: ’HOST KEY’ = ’KS1.KEY1’
SHOW VARIABLE
If you want to change the host key to another value, use the SET SYSTEM
DEFAULT HOSTKEY TO name command.
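For example, assuming that a key named ks1.key2 is already defined in your keystore, the command might look like the following:
SYSTEM.ADMIN(ADMIN)=> SET SYSTEM DEFAULT HOSTKEY TO ks1.key2;
SET VARIABLE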
Downgrades are typically performed by IBM Netezza Support in cases where there
is a need to return to a release that ran previously on your IBM Netezza appliance.
If you are downgrading to a release before 7.1, those releases do not support the
enhanced cryptography for SP 800-131a compliance. The nzupgrade command,
which upgrades and downgrades the current release on the Netezza system,
checks for enhanced crypto support and returns an error if the command detects
any crypto-related objects that are not supported on the release to which you are
downgrading.
To clear your system of the crypto-related objects and settings before a downgrade,
do the following:
v Run the nzconfigcrypto -disable command to disable the crypto support. Stop
and restart the NPS software using the nzstop and nzstart commands to start
the system in a non-SP 800-131a mode.
v Identify and drop any keys that are defined with DSA_KEYPAIR_2048
authentication type. You can use the SHOW KEYSTORE ALL VERBOSE
command to list all the keys. Identify the keys that are of key type
DSA_KEYPAIR_2048 and use the DROP CRYPTO KEY <keystore>.<keyname>
command to drop each of those keys.
v Identify and drop any audit history configurations that are digitally signed with
a DSA_KEYPAIR_2048 key. You can use the SHOW HISTORY
CONFIGURATION ALL command to list all the configurations. Look for audit
history configurations that are digitally signed with a DSA_KEYPAIR_2048 key,
and use the DROP HISTORY CONFIGURATION <histname> command to
remove those configurations. If you loaded audit history data using this stronger
DSA_KEYPAIR_2048 key, that audit database is not viewable after the
downgrade to a release that does not support SP 800-131a cryptography.
v If your system uses LDAP authentication for the Netezza database user
accounts, use the SET AUTHENTICATION LOCAL command to drop the
current LDAP configuration, which is encrypted using the SP 800-131a enhanced
support. You can then re-enable LDAP authentication, which will use the current
host key and authentication levels, and the downgrade will proceed.
v If your system uses Kerberos authentication for the Netezza database user
accounts, use the SET AUTHENTICATION LOCAL command to disable
Kerberos support. NPS releases before 7.1 do not support Kerberos
authentication.
After you make these changes on your system, you can retry the nzupgrade
command to proceed with the downgrade to the previous release.
You can control access to the Netezza system itself by placing the appliance in a
secured location such as a data center. You can control access through the network
to your Netezza appliance by managing the Linux user accounts that can log in to
the operating system. You control access to the Netezza database, objects, and tasks
on the system by managing the Netezza database user accounts that can establish
SQL connections to the system.
Note: Linux users can log in to the Netezza server at the operating system level,
but they cannot access the Netezza database by SQL. If some of your users require
Linux accounts to manage the Netezza system and database accounts for SQL
access, you can use identical names and passwords for the two accounts to ease
management. Throughout this section, any references to users and groups imply
Netezza database user accounts, unless otherwise specified.
Related concepts:
“Managing access to a history database” on page 14-9
“Resource minimums and maximums” on page 15-10
Related reference:
Appendix B, “Linux host administration reference,” on page B-1
The IBM Netezza appliance has a host server that runs the Linux operating system.
You can assign privileges to a specific database user account as needed. If you
have several users who require similar privileges, you can create user groups to
organize those users and thus simplify access management.
A group can be both a user group and a resource group, but its user group and
resource group aspects, including user group membership and resource group
assignment, are completely separate:
v A user might be assigned to a resource group but not be a member of that
group. That user is unaffected by any privileges or settings of that group, except
for the resource settings.
v A user might be a member of a user group but be assigned to a different
resource group. That user is unaffected by the user group's resource settings.
If a user is a member of more than one group, the user inherits the union of all
privileges from those groups, plus any privileges that were assigned to the user
account specifically. If you remove a user from a user group, the privileges that
were provided by that group are removed from the user. For example, if you
remove a user from a group that has the Create Table privilege, the user loses that
privilege unless the user is a member of another group that grants that privilege or
the user account was granted that privilege directly.
As a best practice, use groups to manage the privileges of your database users
rather than managing user accounts individually. Groups are an efficient and a
time-saving way to manage privileges, even if a group has only one member. Over
time, you typically add new users, drop existing users, and change user privileges
as roles evolve. New Netezza software releases often add new privileges that you
might need to apply to your users. Rather than manage these changes on an
account-by-account basis, manage the privileges with groups and group
membership.
You can create and manage Netezza database accounts and groups by using any
combination of the following methods:
v Netezza SQL commands, which are the most commonly used methods
v Netezza Performance Portal, which provides a web browser interface for
managing users, groups, and privileges
v NzAdmin tool, which provides a Windows interface for managing users, groups,
and privileges
This section describes how to manage users and groups by using the SQL
commands. The online help for the Netezza Performance Portal and NzAdmin
interfaces provides more information about how to manage users and groups
through those interfaces.
Related concepts:
Chapter 15, “Workload management,” on page 15-1
The workload of an IBM Netezza system consists of user-initiated jobs such as SQL
queries, administration tasks, backups, and data loads, and system-initiated jobs
such as regenerations and rollbacks. Workload management (WLM) is the process
of assessing a system's workload and allocating the resources used to process that
workload.
Related reference:
Access model
Develop an access model for your IBM Netezza appliance. An access model is a
profile of the users who require access to the Netezza system and the permissions
or tasks that they need.
Typically, an access model begins modestly, with a few users or groups, but it often
grows and evolves as new users are added to the system. The model defines the
users, their roles, and the types of tasks that they perform, or the databases to
which they require access.
Access models can vary widely for each company and environment. As a basic
example, you can develop an access model that defines three initial groups of
database users:
Administrators
Users who are responsible for managing various tasks and services. They
might manage specific databases, manage user access, create databases,
load data, or back up and restore databases.
General database users
Users who are allowed access to one or more databases for querying, and
who might or might not have access to manage objects in the database.
These users might also have lower priority for their work.
Power database users
Users who require access to critical databases and who might use more
complex SQL queries than the general users. These users might require
higher priority for their work. They might also have permissions for tasks
such as creating database objects, running user-defined objects (UDXs), or
loading data.
The access model serves as a template for the users and groups that you create,
and it also provides a map of access permission needs. By creating Netezza
database groups to represent these roles or permission sets, you can assign users
to the groups so that they inherit the appropriate permissions, change all of the
users in a role by changing only the group permissions, and move users from one
role to another by changing their group memberships or by adding them to
groups that control those permissions.
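A minimal SQL sketch of how you might begin to implement such a model follows. The group names and user name are illustrative, and the privileges that you grant depend on your own access model:
SYSTEM.ADMIN(ADMIN)=> CREATE GROUP gen_users;
CREATE GROUP
SYSTEM.ADMIN(ADMIN)=> CREATE GROUP power_users;
CREATE GROUP
SYSTEM.ADMIN(ADMIN)=> GRANT CREATE TABLE TO power_users;
GRANT
SYSTEM.ADMIN(ADMIN)=> ALTER GROUP gen_users WITH USER jsmith;
ALTER GROUP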
The admin database user is the database super-user account. The admin user has
all privileges and access to all database objects. Therefore, use that account
sparingly and only for the most critical of tasks. For example, you might use the
admin account to start creating a few Netezza users and groups; afterward, you
can use another administrative-level account for tasks such as user management,
database maintenance, and object creation and management.
Note: The admin user also has special workload management priority. Because of
the presumed critical nature of the work, it automatically takes half of the system
resources, which can impact other concurrent users and work.
The public group is the default user group for all Netezza database users. All users
are automatically added as members of this group and cannot be removed from
this group. The admin user is the owner of the public group. You can use the
public group to set the default set of permissions for all Netezza user accounts.
You cannot change the name or the ownership of the group.
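For example, to give every database user a small default set of privileges, you might connect to a database and grant class-level object privileges to the public group; the database name and privileges shown are illustrative:
MYDB.SCH1(ADMIN)=> GRANT LIST, SELECT ON TABLE TO PUBLIC;
GRANT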
Related concepts:
“Resource allocations for the admin user” on page 15-15
The admin user account is treated as if it belongs to a resource group that receives
50% of net system resources. Consequently, the admin user has a unique and
powerful impact on resource availability.
“Security model” on page 11-9
The IBM Netezza security model is a combination of administrator privileges that
are granted to users and groups, plus object privileges that are associated with
specific objects (for example, table xyz) and classes of objects (for example, all
tables).
Netezza also supports the option to authenticate database users (except admin) by
using one of the following trusted authentication sources:
v You can use LDAP authentication to authenticate database users, manage
passwords, and manage account activations and deactivations. The Netezza
system then uses a Pluggable Authentication Module (PAM) to authenticate
users on the LDAP name server. Microsoft Active Directory conforms to the
LDAP protocol, so it can be treated like an LDAP server for the purposes of
LDAP authentication.
v You can use Kerberos authentication to authenticate database users, manage
passwords, and manage account activations and deactivations. The Netezza
system uses Kerberos configuration files to connect with the Kerberos key
distribution center (KDC) to authenticate database users before they are allowed
to connect to a database.
The Netezza host supports LDAP or Kerberos authentication for database user
logins only, not for operating system logins on the host. You cannot use both
LDAP and Kerberos to authenticate database users on the Netezza appliance.
You can configure system-wide policies for the minimum requirements for
password length and content. These system-wide controls do not apply to the
default admin database user, only to the other database user accounts that you
create. You can also tailor the pam_cracklib dictionary to establish policies within
your Netezza environment. The pam_cracklib dictionary does not allow common
words, passwords that are based on user names, password reversal, and other
shortcuts that can make passwords more vulnerable to hacking.
You can also configure the Netezza system to check for and enforce new
passwords that do not match a specified number of previous passwords for that
account.
To set the content requirements for passwords, use the SET SYSTEM DEFAULT
SQL command as follows:
SYSTEM.ADMIN(ADMIN)=> SET SYSTEM DEFAULT PASSWORDPOLICY TO conf;
SET VARIABLE
The conf value is a string of parameters that specify the content requirements and
restrictions:
minlen
Specifies the minimum length in characters (after it deducts any credits) for
a password. The default is the minimum value of 6; that is, even with
credits, you cannot specify a password that is less than six characters. If
you specify 10, for example, the user must specify at least nine lowercase
characters (with the lowercase letter default credit of 1) to meet the
minimum length criteria.
For example, the following command specifies that the minimum length of a weak
password is 10, and it must contain at least one uppercase letter. The presence of at
least one symbol or digit allows for a credit of 1 each to reduce the minimum
length of the password:
SYSTEM.ADMIN(ADMIN)=> SET SYSTEM DEFAULT PASSWORDPOLICY TO 'minlen=10,
lcredit=0 ucredit=-1 dcredit=-1 ocredit=1';
SET VARIABLE
As another example, the following command specifies that the minimum length of
a weak password is 8, that it must contain at least two digits and one symbol, and
that the presence of lowercase characters offers no credit to reduce the minimum
password length:
SYSTEM.ADMIN(ADMIN)=> SET SYSTEM DEFAULT PASSWORDPOLICY TO 'minlen=8,
lcredit=0 dcredit=-2 ocredit=-1';
SET VARIABLE
To do this:
1. Log in to the Netezza system as the nz user.
2. Open the /nz/data/postgresql.conf file in any text editor.
3. Search for an entry similar to password_history=n in the file.
v If the file already contains such an entry, ensure that the entry is not
commented out (that is, that # is not the first character in the line) and that
the specified value is a positive integer.
v If the file does not already contain such an entry, create one. The value n
must be greater than or equal to 0.
The specified integer determines the number of the most recent passwords that
cannot be reused.
Note: Be careful not to change other entries, because doing so can have a
negative impact on database operation.
4. Save and exit the postgresql.conf file.
5. Issue the nzstop command to stop the system.
6. Issue the nzstart command to restart the system. This usually requires several
minutes to complete.
After the system restarts and is online, any request to change an account password
will be checked to ensure that the new password has not recently been used for
that account.
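For example, an entry that prevents the reuse of the four most recent passwords for each account might look like the following line in /nz/data/postgresql.conf; the value 4 is illustrative:
password_history=4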
If you are using LDAP or Kerberos authentication, you do not have to specify a
password for the account. The CREATE USER command has a number of options
that you can use to specify timeout options, account expirations, rowset limits (the
maximum number of rows a query can return), and priority for the user session
and queries. The resulting user account is owned by the user who created the
account.
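A minimal sketch of creating a locally authenticated account follows. The user name and password are illustrative, and you can add the timeout, expiration, rowset limit, and priority options as needed:
SYSTEM.ADMIN(ADMIN)=> CREATE USER dlee WITH PASSWORD 'temp1234';
CREATE USER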
You can also unlock an account if it was configured to lock after a specified
number of failed login attempts. To change the account, log in to the IBM Netezza
database by using an account that has Alter administrative privilege. For example,
the following command assigns a user to the group named silver:
SYSTEM.ADMIN(ADMIN)=> ALTER USER dlee WITH IN RESOURCEGROUP silver;
ALTER USER
To drop the account, log in to the IBM Netezza database by using an account that
has Drop administrative privilege. For example, the following command drops the
dlee user account:
SYSTEM.ADMIN(ADMIN)=> DROP USER dlee;
DROP USER
The command displays an error if the account that you want to drop owns objects;
you must change the ownership of those objects or drop those objects before you
can drop the user.
To change the group, log in to the IBM Netezza database by using an account that
has Alter administrative privilege. For example, the following command removes
the member dlee from the group named qa:
SYSTEM.ADMIN(ADMIN)=> ALTER GROUP qa DROP USER dlee;
ALTER GROUP
To drop the group, log in to the IBM Netezza database by using an account that
has Drop administrative privilege. For example, the following command drops the
qa group:
SYSTEM.ADMIN(ADMIN)=> DROP GROUP qa;
DROP GROUP
Security model
The IBM Netezza security model is a combination of administrator privileges that
are granted to users and groups, plus object privileges that are associated with
specific objects (for example, table xyz) and classes of objects (for example, all
tables).
Each object has an owner. Individual owners automatically have full access to their
objects and do not require individual object privileges to manage them. The
database owner, in addition, has full access to all objects within the database. The
admin user owns all predefined objects and has full access to all administrative
permissions and objects. For systems that support multiple schemas in a database,
the schema owner has full access to all objects within the schema.
Related concepts:
“Default Netezza groups and users” on page 11-3
The IBM Netezza system has a default Netezza database user named admin and a
group named public.
When you grant a privilege, the user you grant the privilege to cannot pass that
privilege onto another user by default. If you want to allow the user to grant the
privilege to another user, include the WITH GRANT OPTION when you grant the
privilege.
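For example, the following illustrative command grants the Create Table administrative privilege to a group named qa and allows its members to grant that privilege to other users:
SYSTEM.ADMIN(ADMIN)=> GRANT CREATE TABLE TO qa WITH GRANT OPTION;
GRANT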
The following table describes the administrator privileges. The words in brackets
are optional.
Table 11-1. Administrator privileges
Privilege Description
Backup Allows the user to perform backups. The user can run the nzbackup command.
[Create] Aggregate Allows the user to create user-defined aggregates (UDAs) and to operate on existing UDAs.
[Create] Database Allows the user to create databases. Permission to operate on existing databases is controlled by object privileges.
[Create] External Table Allows the user to create external tables. Permission to operate on existing tables is controlled by object privileges.
[Create] Function Allows the user to create user-defined functions (UDFs) and to operate on existing UDFs.
[Create] Group Allows the user to create groups. Permission to operate on existing groups is controlled by object privileges.
[Create] Index For system use only. Users cannot create indexes.
[Create] Library Allows the user to create user-defined shared libraries. Permission to operate on existing shared libraries is controlled by object privileges.
[Create] Materialized View Allows the user to create materialized views.
[Create] Procedure Allows the user to create stored procedures.
[Create] Scheduler Rule Allows the user to create scheduler rules and to show, drop, alter, or set (deactivate or reactivate) any rule, regardless of who created or owns it.
[Create] Sequence Allows the user to create database sequences.
[Create] Synonym Allows the user to create synonyms.
[Create] Table Allows the user to create tables. Permission to operate on existing tables is controlled by object privileges.
[Create] Temp Table Allows the user to create temporary tables. Permission to operate on existing tables is controlled by object privileges.
[Create] User Allows the user to create users. Permission to operate on existing users is controlled by object privileges.
[Create] View Allows the user to create views. Permission to operate on existing views is controlled by object privileges.
[Manage] Hardware Allows the user to do the following hardware-related operations: view hardware status, manage SPUs, manage topology and mirroring, and run diagnostic tests. The user can run the nzds and nzhw commands.
Object privileges
Object privileges apply to individual object instances (a specific user or a single
database).
Because object privileges take effect after an object is created, you can only change
privileges on existing objects. Like administrator privileges, object privileges are
granted to users and groups. But where administrator privileges apply to the
system as a whole and are far reaching, object privileges are more narrow in scope.
When an object is created, there are no object privileges that are associated with it.
Instead, the user who creates the object becomes the object owner. Initially, only
the object creator, the schema owner (if the object is defined in a schema), the
database owner, and the admin user can view and manipulate the object. For other
users to gain access to the object, either the owner, database owner, schema owner,
or the admin user must grant privileges to it.
The following table describes the list of available object privileges. As with
administrator privileges, specifying the with grant option allows a user to grant
the privilege to others.
Table 11-2. Object privileges
Privilege Description
Abort Allows the user to abort sessions. Applies to groups and users. For more information, see “Abort sessions or transactions” on page 12-24.
All Allows the user to have all the object privileges.
Alter Allows the user to modify object attributes. Applies to all objects.
Delete Allows the user to delete table rows. Applies only to tables.
Drop Allows the user to drop all objects.
Execute Allows the user to execute UDFs and UDAs in SQL queries.
Execute As Allows the user to change the name of the current user of their session.
The following example starts as a local definition and moves to a more global
definition.
If you use SQL commands to manage account permissions, the database to which
you are connected has meaning when you issue a GRANT command. If you are
connected to the system database (this database has the name SYSTEM), the
privilege applies to all databases. If you are connected to a specific database, the
privilege applies only within that database.
Starting in release 7.0.3, you can use a fully-qualified object notation to set the
scope of object privileges from any database. The fully-qualified object notation has
the format:
database.schema.object
For example, you could sign in to any database to which you have access and
grant a privilege on a specific object by using a fully qualified name:
DEV.TEST(USER2)=> GRANT LIST ON mydb.sch1.testdb TO user1;
Similarly, you could grant a privilege on all of the tables within a specific database:
DEV.TEST(USER2)=> GRANT SELECT ON MYDB.ALL.TABLE TO user1;
You could also grant a privilege on all of the tables in all databases:
DEV.TEST(USER2)=> GRANT SELECT ON ALL.ALL.TABLE TO user1;
Note: When you use the GRANT command to grant privileges at a database level
and at the system level, the grant issued for the database overrides the grant
issued at the system level.
Privilege Precedence: IBM Netezza uses the following order of precedence for
permissions:
1. Privileges granted on a particular object within a particular database and a
particular schema, for systems that support multiple schemas
2. Privileges granted on an object class within a particular database and a
particular schema, for systems that support multiple schemas
3. Privileges granted on a particular object within all schemas of a particular
database
You can assign multiple privileges for the same object for the same user. The
Netezza system uses the rules of precedence to determine which privileges to use.
For example, you can grant users privileges on a global level, but user privileges
on a specific object or database level override the global permissions. For example,
assume the following three GRANT commands:
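The commands, in one possible form that matches the descriptions in the list that follows (the object names and privilege lists are reconstructed for illustration and might differ from your own statements), are:
SYSTEM.ADMIN(ADMIN)=> GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE ON TABLE TO user1;
GRANT
SYSTEM.ADMIN(ADMIN)=> GRANT SELECT, INSERT, UPDATE ON DEV.ALL.TABLE TO user1;
GRANT
SYSTEM.ADMIN(ADMIN)=> GRANT SELECT, LOAD ON DEV.ALL.CUSTOMER TO user1;
GRANT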
By using these grant statements and assuming that customer is a user table, user1
has the following permissions:
v With the first GRANT command, user1 has global permissions to SELECT, INSERT,
UPDATE, DELETE, or TRUNCATE any table in any database.
v The second GRANT command restricts user1 permissions specifically on the dev
database. When user1 connects to dev, user1 can run only SELECT, INSERT, or
UPDATE operations on tables within that database.
v The third GRANT command overrides privileges for user1 on the customer table
within the dev database. As a result of this command, the only actions that user1
can perform on the customer table in the dev database are SELECT and LOAD.
The following table lists the slash commands that display the privileges for users
and groups:
Table 11-3. Slash commands to display privileges
Command Description
\dg List groups (both user and resource groups) except _ADMIN_.
\dG List user groups and their members.
\dp <user> List the privileges that were granted to a user either directly or
by membership in a user group.
\dpg <group> List the privileges that were granted to a user group.
\dpu <user> List the privileges that were granted to a user directly and not
by membership in a user group.
\du List users.
\dU List users who are members of at least one user group and the
groups of which each is a member.
When you revoke privileges, make sure you sign on to the same database (and, for
a multiple schema system, the same schema) where you granted the privileges, or
use the fully qualified name forms that match the locations in which you granted
the privileges. Then, you can use these slash commands to verify the results.
When you revoke a privilege from a group, all the members of that group lose the
privilege unless they have the privilege from membership in another group or
through their user account.
For example, to revoke the Insert privilege for the group public on the table films,
enter:
SYSTEM.ADMIN(ADMIN)=> REVOKE INSERT ON films FROM PUBLIC;
REVOKE
Privileges by object
A privilege applies only to the object for which it was granted. For example, when
you grant a user the Drop privilege for a database, that privilege does not apply to
any of the objects within that database, but only to the database itself.
If you change the default schema for a database, the user automatically inherits
access to the new default schema and loses access to the previous default. If a user
requires access to the previous default schema, that user must be explicitly granted
access to that schema.
The following table describes the rules for each of these objects.
Table 11-5. Indirect object privileges
Object type Access rule
Client session Users can see a session's user name and query if that user object is
viewable. Users can see the connected database name if that
database object is viewable. Users must have the Abort privilege on
another user or be the system administrator to abort another user's
session or transaction.
Database statistic The system displays operational statistics for database-related objects
if the corresponding object is viewable. For example, you can see the
disk space statistics for a table if you can see the table.
Related tasks:
“Viewing record distribution” on page 12-9
As described in “Default Netezza groups and users” on page 11-3, the default
admin user account is a powerful database super-user account. Use that account
rarely, such as for documented maintenance or administrative tasks, or when you
first set up an IBM Netezza system.
Procedure
1. Connect to the System database as the admin user. For example:
[nz@nzhost ~]$ nzsql -d system -u admin -pw password
Welcome to nzsql, the Netezza SQL interactive terminal.
2. Create a group for your administrative users. For example:
SYSTEM.ADMIN(ADMIN)=> CREATE GROUP administrators;
CREATE GROUP
3. Grant the group all administrative permissions. For example:
SYSTEM.ADMIN(ADMIN)=> GRANT ALL ADMIN TO administrators WITH GRANT
OPTION;
GRANT
4. Grant the group all object permissions. For example:
SYSTEM.ADMIN(ADMIN)=> GRANT ALL ON DATABASE, GROUP, SCHEMA, SEQUENCE,
SYNONYM, TABLE, EXTERNAL TABLE, FUNCTION, AGGREGATE, USER, VIEW, PROCEDURE,
LIBRARY TO administrators WITH GRANT OPTION;
GRANT
5. Add users to the group to grant them the permissions of the group. For
example:
SYSTEM.ADMIN(ADMIN)=> ALTER USER jlee WITH IN GROUP administrators;
ALTER USER
or
SYSTEM.ADMIN(ADMIN)=> ALTER GROUP administrators WITH USER jlee, bob;
ALTER GROUP
Related concepts:
“Resource allocations for the admin user” on page 15-15
The admin user account is treated as if it belongs to a resource group that receives
50% of net system resources. Consequently, the admin user has a unique and
powerful impact on resource availability.
Logon authentication
The IBM Netezza system offers several authentication methods for Netezza
database users:
Local authentication
Netezza administrators define the database users and their passwords by
using the CREATE USER command or through the Netezza administrative
interfaces. In local authentication, you use the Netezza system to manage the
database user accounts and passwords.
Authentication is a system-wide setting; that is, your users must be either locally
authenticated or authenticated by using the LDAP or Kerberos method. If you
choose LDAP or Kerberos authentication, you can create users with local
authentication on a per-user basis. You cannot use LDAP and Kerberos at the same
time to authenticate users. The Netezza host supports LDAP or Kerberos
authentication for database user logins only, not for operating system logins on the host.
Related concepts:
“Encrypted passwords” on page 2-13
“User authentication method” on page 11-4
“Configure the Netezza host authentication for clients” on page 11-31
Local authentication
Local authentication validates that the user name and password that are entered
with the logon match the ones that are stored in the IBM Netezza system catalog.
The manager process that accepts the initial client connection is responsible for
initiating the authentication checks and disallowing any future requests if the
check fails. Because users can make connections across the network, the system
sends passwords from clients in an opaque form.
The Netezza system manages user names and passwords. It does not rely on the
underlying (Linux) operating system user name and password mechanism, other
than on user nz, which runs the Netezza software.
Note: When you create a user for local authentication, you must specify a
password for that account. You can explicitly create a user with a NULL password,
but the user is not allowed to log on if you use local authentication.
LDAP authentication
The LDAP authentication method differs from the local authentication method in
that the IBM Netezza system uses the user name and password that is stored on
the LDAP server to authenticate the user.
Following successful LDAP authentication, the Netezza system also confirms that
the user account is defined on the Netezza system. The LDAP administrator is
responsible for adding and managing the user accounts and passwords and
deactivating accounts on the LDAP server.
After you use the SET AUTHENTICATION command or make any manual
changes to the ldap.conf file, restart the Netezza system by using the nzstop and
nzstart commands. This ensures that the Netezza system uses the latest settings
from the ldap.conf file.
The command does not use any of the settings from previous command instances;
make sure that you specify all the arguments that you require when you use the
command. The command updates the ldap.conf file for the configuration settings
that are specified in the latest SET AUTHENTICATION command.
Note: After you change to LDAP authentication, if you later decide to return to
local authentication, you can use the SET AUTHENTICATION LOCAL command
to restore the default behavior. When you return to local authentication, the
command overwrites the ldap.conf file with the ldap.conf.orig file (that is, the
ldap.conf file that resulted after the first SET AUTHENTICATION LDAP
command was issued). The Netezza system then starts to use local authentication,
which requires user accounts with passwords on the Netezza system. If you have
Netezza user accounts with no passwords or accounts that were created with a
NULL password, use the ALTER USER command to update each user account
with a password.
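For example, assuming a user account named jsmith that was created without a password:
SYSTEM.ADMIN(ADMIN)=> ALTER USER jsmith WITH PASSWORD 'newpass123';
ALTER USER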
With SSL, the Netezza system and LDAP server use additional protocols to
confirm the identity of both servers by using digital certificates. You must obtain
certificate authority (CA) files from the LDAP server and save them in a directory
on the Netezza system. You need three files: a root certificate, the CA client
certificate, and the CA client keys file. These files typically have the extension .pem.
Important: During this procedure, you must manually edit the ldap.conf file to
specify the locations of the CA cert files. When you edit the file, do not delete any
existing lines, even those lines that display as comments, as they are often used by
the LDAP configuration commands. Add the new configuration settings for the
LDAP CA certificates.
To configure SSL security for your LDAP server communications, complete the
following steps:
Procedure
1. Obtain the three CA certificate files from the LDAP server, and save them on a
location on the Netezza system. For Netezza high availability (HA) systems,
save the files in a location on the shared drive, such as a new directory under
/nz. Both HA hosts must be able to access the certificate files by using the same
path name.
2. Use the SET AUTHENTICATION LOCAL command to temporarily restore
local authentication. The command overwrites the ldap.conf file with the
ldap.conf.orig backup file.
3. With any text editor, append the following three lines to the /etc/ldap.conf
file and save the file:
tls_cacertfile pathname_to_cacert.pem_file
tls_cert pathname_to_clientcrt.pem_file
tls_key pathname_to_clientkey.pem_file
For example:
tls_cacertfile /nz/certs/cacert.pem
tls_cert /nz/certs/clientcrt.pem
tls_key /nz/certs/clientkey.pem
4. Use the SET AUTHENTICATION LDAP SSL ON command and any additional
configuration arguments (based on your LDAP server configuration) to restore
the LDAP authentication. Since the server transitions from local to LDAP
authentication, it copies the ldap.conf file with your new certificate path names
to ldap.conf.orig, and enables LDAP authentication.
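A minimal form of the step 4 command, patterned on the LDAP example earlier in this chapter, follows. The attribute, base, and server values are placeholders for your own LDAP configuration:
SYSTEM.ADMIN(ADMIN)=> SET AUTHENTICATION LDAP ssl 'on' attrname 'cn'
base 'dc=netezza,dc=com' namecase 'lowercase' server 'yourldapsvr.company.com'
version '3'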
After you use the SET AUTHENTICATION command or make any manual
changes to the ldap.conf file, restart the Netezza system by using the nzstop and
nzstart commands. This ensures that the Netezza system uses the latest settings
from the ldap.conf file.
Kerberos authentication
If your environment uses Kerberos authentication to validate users, you can use
Kerberos instead of local or LDAP authentication to validate your Netezza
database user accounts.
With Kerberos authentication, users are first validated against the user name and
password that is stored on the Kerberos server. After successful Kerberos
authentication, the IBM Netezza system then confirms that the user account is
defined as a Netezza database user.
The Kerberos administrator is responsible for adding and managing the user
accounts and passwords and deactivating accounts on the Kerberos server. The
Netezza administrator must ensure that each Netezza database user is also defined
within the Netezza system catalog.
Important: The Kerberos and Netezza user names must match. When you
configure the system to use Kerberos authentication, you can specify
USERCASE=MATCHDB to convert unescaped Kerberos names to the Netezza
system letter case, which is uppercase by default. If you specify
USERNAME=KEEP, the Kerberos names are not converted, and the Kerberos and
Netezza names must match exactly, including letter casing. If the user names do
not match, the Netezza administrator can use the ALTER USER command to
change the Netezza user name to match the Kerberos user name, or contact the
Kerberos administrator to change the Kerberos name to match the Netezza user
name.
If you choose to use Kerberos authentication, then all database user accounts
except admin are authenticated by Kerberos. You can configure database user
accounts to be locally authenticated as an exception. This implementation does not
support mixed Kerberos and LDAP authentication modes; that is, you cannot
authenticate some users by LDAP authentication and some by Kerberos.
The following table lists the supported operating systems and revisions for the
Netezza CLI clients.
Table 11-6. Netezza supported platforms for Kerberos authentication
Operating system 32-bit 64-bit
Windows
Windows 2008, Vista, 7 Intel / AMD Intel / AMD
Windows Server 2012 N/A Intel / AMD
Linux
Red Hat Enterprise Linux 5.3, 5.5, 5.7, 5.9, 6.1, 6.2, 6.4, 6.5 (see note below table) Intel / AMD Intel / AMD
Red Hat Enterprise Linux 6.2+ N/A PowerPC
SUSE Linux Enterprise Server 11 Intel / AMD Intel / AMD
SUSE Linux Enterprise Server 10 and 11, and Red Hat Enterprise Linux 5.x IBM System z IBM System z
UNIX
IBM AIX 6.1 with 5.0.2.1 C++ runtime libraries, 7.1 N/A PowerPC
HP-UX 11i versions 1.6 and 2 (B.11.22 and B.11.23) Itanium Itanium
Oracle Solaris 9, 10, 11 SPARC SPARC
Oracle Solaris 10 x86 x86
Note: For many client platforms, Kerberos 1.12 support might not be available
from the operating system vendor. In these cases, you must download the Kerberos
source code from the MIT Kerberos website and build it on your local systems. A
minimum of release 1.10 is required for full support of features, but version 1.12 is
recommended.
On Windows platforms, you must use MIT Kerberos for Windows 4.0.1 to enable
multiple-user support.
In many Kerberos environments, the krb5.conf file is already available for the
Kerberos client support. Consult with your Kerberos administrator to see if a copy
of the krb5.conf file is available that you can store on the Netezza appliance.
Optionally, a Netezza administrator can generate a minimal version of the file, or
update the file, for a simple configuration setup using the SET
AUTHENTICATION command.
If your Kerberos administrator supplies a krb5.conf file for each client that is
added into the Kerberos authentication environment, follow these steps to add that
configuration file to the Netezza appliance and enable Kerberos authentication.
1. Log in to the Netezza active host as the nz user.
2. Change to the $NZ_DATA/config directory (usually /nz/data/config).
3. Save your Kerberos configuration file as krb5.conf in the config directory.
4. Connect to the NPS database as the admin user or any database user who has
Manage System privileges.
5. Type the following command to enable system-wide Kerberos authentication:
SYSTEM.ADMIN(ADMIN)=> SET AUTHENTICATION KERBEROS;
NOTICE: Updating /nz/data.1.0/config/krb5.conf and other files.
NOTICE: Re-log-in or open a new shell for changes to take effect.
SET VARIABLE
If your environment does not have a specific Kerberos configuration file, you can
create one for the Netezza system with the basic required information. Before you
begin, make sure that you obtain the name of the Kerberos realm and KDC from
your Kerberos administrator.
1. Log in to the Netezza system as the nz user.
2. Connect to the NPS database as the admin user or any database user who has
Manage System privileges.
3. Type the following command:
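Based on the form that is shown earlier in this chapter, supply the realm and KDC values that you obtained from your Kerberos administrator; the values shown here are placeholders:
SYSTEM.ADMIN(ADMIN)=> SET AUTHENTICATION KERBEROS kdc 'mykdc.com' realm 'MYREALM.COM';
NOTICE: Updating /nz/data.1.0/config/krb5.conf and other files.
NOTICE: Re-log-in or open a new shell for changes to take effect.
SET VARIABLE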
The Kerberos administrator adds the Netezza host as three service principals to the
Kerberos database. The three definitions represent the two host names for the
Netezza HA hosts and the Netezza floating host name.
The Kerberos administrator can run these commands on the Kerberos server or any
client in the Kerberos realm where the Netezza appliance will be a member. The
Kerberos administrator could also run these commands from the Netezza host 1
after the Kerberos configuration file (krb5.conf) has been added to the Netezza
host.
Note: To use kadmin on the Netezza host, the nz user must have the
KRB5_CONFIG variable set in the .bashrc file. The variable is added by the SET
AUTHENTICATION KERBEROS command, but if you have not yet run that
command, you might need to set the variable manually to point to the
/nz/data/config/krb5.conf file.
The Netezza appliance uses the "netezza" service name. The following sample
commands show how to configure the service principals for Netezza host 1
(mynpshost-ha1), Netezza host 2 (mynpshost-ha2), and the floating host name
(mynpshost-pri).
% kadmin -p KerberosAdmin/admin
kadmin: ktadd -k /nz/data/config/krb5.keytab
netezza/mynpshost-ha1.mycompany.com
netezza/mynpshost-ha2.mycompany.com
netezza/mynpshost-pri.mycompany.com
kadmin: quit
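The ktadd command extracts keys for principals that already exist in the Kerberos
database. If the service principals have not been created yet, the Kerberos
administrator can add them first; the following sketch uses the standard MIT
Kerberos addprinc command with randomized keys and the same host names as above:
% kadmin -p KerberosAdmin/admin
kadmin: addprinc -randkey netezza/mynpshost-ha1.mycompany.com
kadmin: addprinc -randkey netezza/mynpshost-ha2.mycompany.com
kadmin: addprinc -randkey netezza/mynpshost-pri.mycompany.com
kadmin: quit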
If the krb5.keytab file was created on another system and must be copied to the
Netezza appliance, do the following steps:
1. Log in to the Netezza active host as the nz user.
2. Copy the krb5.keytab file to the /nz/data/config directory on the Netezza host.
Note: If you place your keytab file in a location other than /nz/data/config, set
the KRB5_KTNAME environment variable to define the custom location. You could
set it in the .bashrc file for the nz user account. After you set the variable, stop
and restart NPS so that the software can find the keytab file.
If the Kerberos realm or KDC information changes, you can run the SET
AUTHENTICATION command and specify the new values to make those values
take effect.
After you change the Kerberos configuration, you should create a new host backup
using the nzhostbackup command to capture the latest configuration.
For a Netezza appliance, the ticket cache location must be on the shared mount
points (either /nz or /export/home) so that tickets can be accessed after a host
failover from the active Netezza host to the standby host.
For configurations where single-user tickets are stored in a cache file, Kerberos
caches the tickets in the /tmp directory by default. The /tmp directory is not
a shared mount area for the HA hosts. You can use the KRB5CCNAME
environment variable to specify the location for the tickets. Set the variable in
the .bashrc file for the nz user account. After you set the variable, export it so
that Kerberos uses the new location for the ticket cache. A sample setting follows:
KRB5CCNAME=FILE:/nz/krb5cc_500
If you use Kerberos authentication, note that detailed messages and error
conditions are written to the /nz/kit/log/postgres/pg.log file.
In addition, make sure that the name of the NPS service machine was added
correctly to the KDC server by the Kerberos administrator.
After you enable Kerberos authentication, the database users that you create
should use Kerberos authentication as well. (You can create locally authenticated
users as an exception.) If a Netezza administrator creates a local authentication
user account without the AUTH "local" exception syntax, and the user attempts to
start a database connection, the connection fails with the error Password
authentication failed for user 'user_name'. Make sure that your Kerberos users
are configured to use DEFAULT authentication when you enable Kerberos as your
authentication method for the database system. You must have a matching
Kerberos user for every database user except the admin user.
The following information about passwords and logons applies regardless of the
authentication method.
Procedure
1. Use a standard editor and open the configuration file /nz/data/
postgresql.conf.
2. Locate the line that contains invalid_attempts.
3. Copy the line, paste the copy after the current line, remove the comment
character (#), and change the value for invalid_attempts.
4. Save your changes.
5. Restart the IBM Netezza system for your changes to take effect.
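For illustration, after step 3 the edited lines might look like the following
sketch; the values shown are placeholders, and the commented default can differ
on your system:
#invalid_attempts = 3
invalid_attempts = 5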
Results
If the admin user account is locked, you can unlock it by using a user account
that has administrative privileges. If you do not have any users who are granted
Alter privileges on user objects or the admin account, contact Netezza Support to
unlock the admin account.
Procedure
1. Use a standard editor and open the configuration file /nz/data/
postgresql.conf.
2. Search for an existing definition for the auth_timeout variable.
3. If the auth_timeout variable is defined in the file, change the value to the
number of seconds that you want to use for the timeout. Otherwise, you can
define the variable by adding the following line to the file. Add the line to the
Security Settings section of the file.
auth_timeout = number_of_seconds
4. Save your changes.
5. Restart the IBM Netezza system for your changes to take effect.
The Netezza client users must specify security arguments when they connect to
Netezza systems. The nzsql command arguments are described in the IBM Netezza
Database User’s Guide. For a description of the changes that are needed for the
ODBC and JDBC clients, see the IBM Netezza ODBC, JDBC, OLE DB, and .NET
Installation and Configuration Guide.
By default, the IBM Netezza system and clients do not use peer authentication to
verify each other's identity. If you want to authenticate connection peers, you must
create or obtain from a CA vendor the server certificate and keys file and the CA
root certificate for the client users. The Netezza system has a default set of server
certificates and keys files (server-cert.pem and server-keys.pem) in the
/nz/data/security directory. Netezza supports files that use the .pem format.
If you use your own CA certificate files, make sure that you save the server CA
files in a location under the /nz directory. If you have an HA Netezza system, save
the certificates on the shared drive under /nz so that either host can access the files
by using the same path name. You must also edit the /nz/data/postgresql.conf
file to specify your server certificate files.
To edit the postgresql.conf file to add your own CA server certificate and keys
files, complete the following steps:
Procedure
1. Log in to the Netezza system as the nz user account.
2. With any text editor, open the /nz/data/postgresql.conf file.
3. Locate the following lines in the file:
# Uncomment the lines below and mention appropriate path for the
# server certificate and key files. By default the files present
# in the data directory will be used.
#server_cert_file='/nz/data/security/server-cert.pem'
#server_key_file='/nz/data/security/server-key.pem'
4. Delete the number sign (#) character at the beginning of the server_cert_file
and server_key_file parameters and specify the path name of your CA server
certificate and keys files where they are saved on the Netezza host.
Important: Make sure that the keys file is not password protected; by default,
it is not.
5. Save and close the postgresql.conf file.
Results
Any changes that you make to the postgresql.conf file take effect the next time
that the Netezza system is stopped and restarted.
If your users are located within the secure firewall of your network or they use a
protocol such as ssh to connect securely to the Netezza system, you might allow
them to use unsecured communications, which avoids the performance overhead
of secured communications. If you have one or more clients who are outside that
firewall, you might require them to use secured connections. The Netezza system
provides a flexible way to configure access security and encryption for your client
users.
To configure and manage the client access connections, you use the SET
CONNECTION, DROP CONNECTION, and SHOW CONNECTION commands.
These commands manage updates to the /nz/data/pg_hba.conf file for you, and
provide mechanisms for remote updates, concurrent changes from multiple
administrators, and protection from accidental errors when you edit the file.
Important: Never edit the /nz/data/pg_hba.conf file manually. Use the Netezza
SQL commands to specify the connection records for your Netezza system.
In the sample output, the connection requests define the following capabilities:
v Connection ID 1 specifies that the Netezza host accepts connection requests from
any local user (someone who is logged in directly to the Netezza host) to all
databases.
Important: The host might accept a connection request, but the user must still pass
account authentication (user name and password verification), and have
permissions to access the requested database.
The first record that matches the client connection information is used for
authentication. If the first chosen record does not work, the system does not look
for a second record. If no record matches, access is denied. With the default records
previously shown, any client user who accesses the Netezza system and has correct
user account and password credentials is allowed a connection; they can request
either secured or unsecured connections, as the Netezza host accepts either type.
For example, if you have one user who connects from outside the network firewall
from an IP address 1.2.3.4, you might want to require that client to use secured SSL
connections. You can create a connection record for that user by using the
following sample command:
SYSTEM.ADMIN(ADMIN)=> SET CONNECTION HOSTSSL DATABASE 'ALL' IPADDR '1.2.3.4'
IPMASK '255.255.255.255' AUTH SHA256;
SET CONNECTION
This example shows the importance of record precedence. Record ID 2 is the first
match for all of the users who remotely connect to the IBM Netezza system.
Because it is set to host, this record allows either secured or unsecured connections
that are based on the connection request from the client. To ensure that the user at
1.2.3.4 is authenticated for a secure connection, drop connection record 2 and add
it again by using a new SET CONNECTION record to place the more general
record after the more specific record for 1.2.3.4.
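A sketch of that reordering follows; the record ID and the any-address values are
placeholders for the general host record on your system:
SYSTEM.ADMIN(ADMIN)=> DROP CONNECTION 2;
DROP CONNECTION
SYSTEM.ADMIN(ADMIN)=> SET CONNECTION HOST DATABASE 'ALL' IPADDR '0.0.0.0'
IPMASK '0.0.0.0';
SET CONNECTION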
The IBM Netezza system calculates the limit for each user based on the following
rules:
v If the attribute is set for the user account, use that value.
v If the attribute is not set for the USER, use the MOST RESTRICTIVE value set
for any of the groups of which that user is a member.
v If the attribute is not set for the user or any of the user’s groups, use the system
default value.
When you change these values, the system sets them at session startup and they
remain in effect for the duration of the session.
You specify the system defaults with the SET SYSTEM DEFAULT command. To
display the system values, use the SHOW SYSTEM DEFAULT command.
v To set a system default, use a command similar to the following, which sets the
default session timeout to 300 minutes:
SYSTEM.ADMIN(ADMIN)=> SET SYSTEM DEFAULT SESSIONTIMEOUT TO 300;
SET VARIABLE
v To show the system default for the session timeout, use the following syntax:
SYSTEM.ADMIN(ADMIN)=> SHOW SYSTEM DEFAULT sessiontimeout;
NOTICE: 'session timeout' = '300'
SHOW VARIABLE
Password expiration
You can specify the number of days that an IBM Netezza database user account
password is valid as a system-wide setting. You can also specify the password
expiration rate on a per-user and per-group basis. You can also expire an account
password immediately.
To set a system-wide control for expiring database user account passwords, use the
SET SYSTEM DEFAULT SQL command:
SYSTEM.ADMIN(ADMIN)=> SET SYSTEM DEFAULT PASSWORDEXPIRY TO days;
SET VARIABLE
The days value specifies the number of days that the password remains valid after
the date on which it was last changed. If you do not want passwords to expire,
specify a value of 0. The default system setting is 0.
You can specify the account password expiration by using the PASSWORDEXPIRY
option of the [CREATE|ALTER] USER and [CREATE|ALTER] GROUP SQL
commands. Some example commands follow.
v To create a group that has a password expiration rate of 45 days:
MYDB.SCH1(USER)=> CREATE GROUP staff WITH PASSWORDEXPIRY 45;
v To change the expiration setting for the user sales_user to 30 days:
MYDB.SCH1(USER)=> ALTER USER sales_user WITH PASSWORDEXPIRY 30;
The admin user, the owner of the user, or a user who has Alter privilege for the
user can immediately expire the user account password by using the following
command:
SYSTEM.ADMIN(ADMIN)=> ALTER USER myuseracct EXPIRE PASSWORD;
ALTER USER
If the user is connected to a database, the expiration does not affect the current
session. The next time that the user connects to a database, the user has a
restricted-access session and must change the password by using the ALTER USER
command.
Note: Rowset limits apply only to user table and view queries, not to system
tables and view queries.
You can also impose rowset limits on both individual users and groups. In
addition, users can set their own rowset limits. The admin user does not have a
limit on the number of rows a query can return.
If rowset limits were applied to the results of a query, the nzsql command displays
a message and the system writes a notice to the /nz/kit/log/postgres/pg.log file.
An example follows, but note that the 100 records are not shown in the example:
MYDB.ADMIN(MYUSR)=> select * from ne_orders;
NOTICE: Rowset limit of 100 applied
O_ORDERKEY | O_CUSTKEY | O_ORDERSTATUS
------------+-----------+---------------
96 | 32333333 | F
128 | 22186669 | F
...
Client users who access the NPS host can use APIs such as the ODBC
SQLGetDiagRec() function or the JDBC getWarnings() method to capture the rowset
limit message.
For commands that perform INSERT TO ... SELECT FROM or CREATE TABLE AS
... SELECT operations, the rowset limit can affect the results by limiting the
number of rows that are inserted to the resulting table. If you are using these
commands to create user tables, you can override the rowset limit within your user
session to ensure that those queries complete with all the matching rows. This
override does not change the limit for other SELECT queries, or for INSERT TO ...
SELECT FROM or CTAS queries that write to external table destinations.
To override the rowset limit for INSERT and CTAS operations in a session,
complete the following steps:
Procedure
1. Open a session with the IBM Netezza database and log in by using your
database user account.
2. Use the following command to set the session variable.
MY_DB.MYSCHEMA(NZUSER)=> SET ROWSETLIMIT_LEVEL=0;
SET VARIABLE
3. Use the following command to show the status of the rowset limit for the
session:
MY_DB.MYSCHEMA(NZUSER)=> SHOW ROWSETLIMIT_LEVEL;
NOTICE: ROWSETLIMIT_LEVEL is off
Results
To disable the override and restore the limit to all queries, set the value of the
rowsetlimit_level session variable to 1 (on).
Note: To receive a message, you must enable the runawayQuery event rule.
Changes to the query timeout for the public group do not affect the admin user's
settings.
v To create a user with a query timeout, use the following syntax:
CREATE USER username WITH QUERYTIMEOUT [number | UNLIMITED]
v To create a group with a query timeout, use the following syntax:
CREATE GROUP name WITH QUERYTIMEOUT [number | UNLIMITED]
v To modify a user's query timeout, use the following syntax:
ALTER USER username WITH QUERYTIMEOUT [number | UNLIMITED]
v To modify a group's query timeout, use the following syntax:
ALTER GROUP name WITH QUERYTIMEOUT [number | UNLIMITED]
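For example, assuming (as with the session timeout shown later) that the value is
expressed in minutes, the following hypothetical command limits queries for the
user report_user to 120 minutes:
MYDB.SCH1(USER)=> ALTER USER report_user WITH QUERYTIMEOUT 120;
ALTER USER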
Session timeout
You can place a limit on the amount of time a SQL database session is allowed to
be idle before the system terminates it. You can impose timeouts on both
individual users and groups. In addition, users can set their own timeouts.
Changes to the session timeout for the public group do not affect the admin user's
settings.
v To create a user with a session timeout, use the following syntax:
CREATE USER username WITH SESSIONTIMEOUT [number | UNLIMITED]
v To create a group with a session timeout, use the following syntax:
CREATE GROUP name WITH SESSIONTIMEOUT [number | UNLIMITED]
v To modify a user's session timeout, use the following syntax:
ALTER USER username WITH SESSIONTIMEOUT [number | UNLIMITED]
v To modify a group's session timeout, use the following syntax:
ALTER GROUP name WITH SESSIONTIMEOUT [number | UNLIMITED]
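For example, the following hypothetical command terminates idle sessions for
members of the analysts group after 60 minutes:
MYDB.SCH1(USER)=> ALTER GROUP analysts WITH SESSIONTIMEOUT 60;
ALTER GROUP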
Session priority
You can define the default and maximum priority values for a user, a group, or as
the system default. The system determines the value to use when the user connects
to the host and executes SQL commands.
The default priority for users, groups, and the system is none. If you do not set
any priorities, user sessions run at normal priority.
Procedure
1. Using any text editor, modify the file /nz/data/postgresql.conf.
2. Add or change the following parameter: debug_print_query = true
Results
To log information on the client Windows system, complete the following steps:
Procedure
1. Click Start > Settings > Control Panel > Data sources (ODBC).
2. In the ODBC Data Source Administrator window, click the System DSN tab.
3. Select NZSQL > Configure.
4. In the NetezzaSQL ODBC Datasource Configuration screen, enter information
about your data source, database, and server.
5. Select Driver Options.
6. In the Netezza ODBC Driver Configuration, select the CommLog check box.
This action causes the system to create a file that contains the following
information:
v The connection string
Results
The system writes log information to the file that is specified for the CommLog
option (for example, C:\nzsqlodbc_xxxx.log).
During database initialization, IBM Netezza grants the group public list and select
privileges for system views that retrieve information about users.
The following table describes some common system views and the type of
information that the view provides. In some cases, the view returns more than the
information listed in the table.
Table 11-10. Public views
View name Data returned
_v_aggregate Objid, aggregate name, owner, and create date
_v_database Objid, Database name, owner, and create date
_v_datatype Objid, data type, owner, description, and size
_v_function Objid, function name, owner, create date, description, result
type, and arguments
_v_group Objid, Group name, owner, and create date
_v_groupusers Objid, Group name, owner, and user name
_v_operator Objid, operator, owner, create date, description, opr name, opr
left, opr right, opr result, opr code, and opr kind
_v_procedure Objid, procedure, owner, create date, object type, description,
result, number of arguments, arguments, procedure signature,
built in, procedure source, proc, executed as owner
_v_relation_column Objid, object name, owner, create date, object type, attr number,
attr name, attr type, and not null indicator
_v_relation_column_def Objid, object name, owner, create date, object type, attr number,
attr name, and attr default value
_v_relation_keydata Database owner, relation, constraint name, contype, conseq, att
name, pk database, pk owner, pk relation, pk conseq, pk att
name, updt_type, del_type, match_type, deferrable, deferred,
constr_ord
_v_sequence Objid, seq name, owner, and create date
_v_session ID, PID, UserName, Database, ConnectTime, ConnStatus, and
LastCommand
_v_table Objid, table name, owner, and create date
_v_table_dist_map Objid, table name, owner, create date, dist attr number, and dist
attr name
_v_user Objid, user name, owner, valid until date, and create date
_v_usergroups Objid, user name, owner, and group name
The following table describes some views that show system information. You must
have administrator privileges to display these views.
Table 11-11. System views
View name Output
_v_sys_group_priv GroupName, ObjectName, DatabaseName, Objecttype, gopobjpriv,
gopadmpriv, gopgobjpriv, and gopgadmpriv
_v_sys_index objid, SysIndexName, TableName, and Owner
_v_sys_priv UserName, ObjectName, DatabaseName, aclobjpriv, acladmpriv,
aclgobjpriv, and aclgadmpriv
_v_sys_table objid, SysTableName, and Owner
_v_sys_user_priv UserName, ObjectName, DatabaseName, ObjectType, uopobjpriv,
uopadmpriv, uopgobjpriv, and uopgadmpriv
_v_sys_view objid, SysViewName, and Owner
This section describes some basic concepts of Netezza databases, and some
management and maintenance tasks that can help to ensure the best performance
for user queries.
You can manage Netezza databases and their objects by using SQL commands that
you run through the nzsql command and by using the IBM Netezza Performance
Portal, NzAdmin tool, and data connectivity applications like ODBC, JDBC, and
OLE DB. This section focuses on running SQL commands (shown in uppercase,
such as CREATE DATABASE) from the nzsql command interface to perform tasks.
Initially, only the admin user can create databases, but the admin user can grant
other users permission to create databases as described in Chapter 11, “Security
and access control,” on page 11-1. You cannot delete the system database. The
admin user can also make another user the owner of a database, which gives that
user admin-like control over that database and its contents.
The database creator becomes the default owner of the database. The owner can
remove the database and all its objects, even if other users own objects within the
database.
Starting in Release 7.0.3, Netezza supports the ability to define multiple schemas
within each database. You can use schemas to organize the objects, as well as to
give users areas within the database for development and testing. For example,
within the database Prod, you could have different schemas for different users,
where they can define different objects such as tables, views, and so on. Each
schema has an owner, who has full privileges to all objects in the schema. Users
can be granted access to schemas of the database, and they can view or manage
objects in that schema, but they will not see or be able to manage objects in other
schemas unless they are explicitly granted privileges to that schema and its objects.
In addition, schemas allow you to reuse object names. For example, you could
have a table tab1 in schema1 and table tab1 in schema2. Although they share the
same name, those two tab1 tables are different tables.
Note: The word schema has several definitions. The most common usage in a
database management system refers to the definition of a database, its objects, and
the rules for how those objects are related and organized in the database.
Throughout the Netezza documentation, the word schema can be found in both contexts.
In previous releases, the Netezza system supported one default schema per
database. The default and only schema matched the name of the database user
who created the database. If the ownership for the database changed, the schema
name would also change to match. If users specified a schema for objects, the
Netezza system ignored the schema and used the default schema for all operations.
For Netezza systems running release 7.0.3 or later, you can configure the system to
support schemas within a database. If you enable this support, the admin user and
privileged users can create and manage schemas. The system also validates the
schema information, and you can configure whether invalid schemas return a
warning or an error. For example, you can configure the system to return an error
for any queries that specify an invalid or non-existent schema, or you can
configure the system to return a warning message for queries that use an invalid
schema and to use the default schema for a database.
Procedure
1. Log in to the Netezza active host as the nz user.
2. Using a text editor, open the /nz/data/postgresql.conf file.
3. Locate the enable_schema_dbo_check variable and uncomment it by deleting the
# character at the beginning of the line.
4. Change the value of the variable to one of the following values:
v 0 (the default) places the system in legacy behavior where users cannot
create, manage, set, or drop schemas. The system ignores any schema
information and uses the current schema for the database to which the client
is connected.
v 1 enables multiple schema support in limited mode. Users can create, alter,
set, and drop schemas. If a query references an invalid schema, the system
displays a warning message and uses the current schema for the database
session or a database’s default schema for cross-database queries.
v 2 enables enforced support for multiple schemas. Users can create, alter, set,
and drop schemas. If a query references an invalid schema, the query returns
an error.
5. Save the changes to the postgresql.conf file and close the file.
6. Run the nzstop and then the nzstart commands to restart the Netezza software
for the change to take effect.
Results
After restarting the systems, permitted users can create and manage schemas
within a database.
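As an illustration, to enable enforced multiple schema support (value 2), the
edited line in the postgresql.conf file might look like the following sketch:
enable_schema_dbo_check = 2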
Following the steps in “Enable multiple schema support” on page 12-2, change the
value of the enable_schema_dbo_check variable to 0 and restart the Netezza
software. Use caution when you disable the multiple schema support, because the
schemas still exist in the databases but users cannot create, alter, drop, or use the
SET SCHEMA command to change to any schemas that are not the database
default schema. Users can still reference the schemas in object names within their
queries.
Typically, disabling schema support is done only if you must downgrade a
Netezza system to a release before 7.0.3, where multiple schemas per database are
not supported.
Note: The extent and page sizes maximize the performance of read operations
within the Netezza system. The extent size maximizes the performance of the disk
scan operations, and the page size maximizes the performance of the FPGA as it
reads the data that streams from disk.
For example, assume that you create a table and insert only one row to the table.
The system allocates one 3 MB extent on a data slice to hold that row. The row is
stored in the first 128-KB page of the extent. If you view the table size by using a
tool such as the NzAdmin interface, the table shows a Bytes Allocated value of 3
MB (the allocated extent for the table), and a Bytes Used value of 128 KB (the used
page in that extent).
For tables that are well distributed with rows on each data slice of the system, the
table allocation is a minimum of 3 MB x <numberOfDataSlices> of storage space. If
you have an evenly distributed table with 24 rows on an IBM PureData System for
Analytics N1001-002 or IBM Netezza 1000-3 system, which has 24 data slices, the
table allocates 3 MB x 24 extents (72 MB) of space for the table. That same table
uses 128 KB x 24 pages, or approximately 3 MB of disk space.
The Bytes Allocated value is always larger than the Bytes Used value. For small
tables, the Bytes Allocated value might be much larger than the Bytes Used value,
especially on multi-rack Netezza systems with hundreds of data slices. For larger
tables, the Bytes Allocated value is typically much closer in size to the Bytes Used
value.
The following table describes the amount of disk space that each data type uses.
Table 12-1. Data type disk usage
Data type Usage
big integers (INT8) 8 bytes
integers (INT4) 4 bytes
small integers (INT2) 2 bytes
tiny integers (INT1) and bools 1 byte
numerics of more than 18 digits of precision 16 bytes
numerics with 10 - 18 digits 8 bytes
numerics of 9 or fewer digits 4 bytes
float8s 8 bytes
float4s 4 bytes
times with time zone and intervals 12 bytes
times and timestamps 8 bytes
dates 4 bytes
char(16) 16 bytes
char(n*) and varchar(n) N+2, or fewer, bytes, depending on
actual content
The char data types of more than 16 bytes are
represented on disk as if they were varchar data
types of the same nominal size.
char(1) 1 byte
nchar(n*) and nvarchar(n) N+2 to (4 * N) + 2
When you run the nzload command, the IBM Netezza host creates records and
assigns rowids. The SPUs can also create records and assign rowids, which
happens when you use the command CREATE TABLE <tablename> AS SELECT.
The system gives the host and each of the SPUs a block of sequential rowids that
they can assign. When they use up a block, the system gives them another block,
which explains why the rowids within a table are not always sequential.
The system stores the rowid with each database record. It is an 8-byte integer
value.
You can use the rowid keyword in a query to select, update, or delete records. For
example:
SELECT rowid, lname FROM employee_table;
UPDATE employee_table SET lname = 'John Smith' WHERE rowid = 234567;
Querying by some other field, such as name, might be difficult if you have ten
John Smiths in the database.
In a new installation, the initial rowid value is 100,000. The next available rowid
value is stored in the /nz/data/RowCounter file.
Transaction IDs
Transaction IDs (xids) are sequential in nature. Each database record includes two
xid values:
v A transaction ID that created the record
v A transaction ID that deleted the record (which is set to 0 if it is not deleted)
Because the system does not update records in place on the disk, data integrity is
preserved (write once), and rollback and recovery operations are simplified and
accelerated.
When you run a query (or backup operation), the system allows the query to
access any record that was created, but not deleted, before this transaction began.
Because xid values are sequential, the system compares the create xid and delete
xid values to accomplish this.
The size of the xid allows for over 100 trillion transaction IDs, which would take
over 4000 years to use up at the rate of one transaction per millisecond. In actual
practice, transaction IDs in an IBM Netezza system are likely to be generated at a
slower rate and would take longer to exhaust.
Distribution keys
Each table in an IBM Netezza database has only one distribution key. The key can
consist of one to four columns of the table.
Important: You cannot update the columns that you select as distribution keys.
You can use the following Netezza SQL command syntax to create tables and
specify distribution keys:
v To create an explicit distribution key, the Netezza SQL syntax is:
CREATE TABLE <tablename> [ ( <column> [, ... ] ) ]
DISTRIBUTE ON [HASH] ( <column> [ ,... ] ) ;
The phrase DISTRIBUTE ON specifies the distribution key; the word HASH is
optional.
v To create a table without specifying a distribution key, the Netezza SQL syntax
is:
CREATE TABLE <tablename> (col1 int, col2 int, col3 int);
The Netezza system selects a distribution key. There is no way to guarantee
what that key is and it can vary depending on the Netezza software release.
v To create a random distribution, the Netezza SQL syntax is:
CREATE TABLE <tablename> [ ( <column> [, ... ] ) ]DISTRIBUTE ON RANDOM;
You can also use the NzAdmin tool to create tables and specify the distribution
key. For more information about the CREATE TABLE command, see the IBM
Netezza Database User’s Guide.
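As an illustration, the following statement creates a hypothetical sales table that
is distributed on a hash of two of its columns:
CREATE TABLE sales (store_id int, cust_id int, amount numeric(10,2))
DISTRIBUTE ON (store_id, cust_id);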
For example, perhaps you have a large table with many records and columns, and
want to create a summary table from it, maybe with just one day of data, and with
only some of the original columns. If the new table uses the same distribution key
as the original table, then the new records will reside on the same data slices as the
original table records. The system has no need to send the records to the host (and
consume transmission time and host processing power). Rather, the SPUs create
the records locally. The SPUs read from the same data slices and write back out to
the same data slices. This way of creating a table is much more efficient. In this case,
the SPU is basically communicating with only its data slices.
When you create a subset table or temp table, you do not specify a new
distribution key or distribution method. Instead, allow the new table to inherit the
distribution key of the parent table. This avoids the extra data distribution that can
occur because of the non-match of inherited and specified keys.
The Netezza architecture distributes processing across many individual SPUs each
with its own dedicated memory and data slices. These individual processors
operate in a “shared nothing” environment that eliminates the contention for
shared resources which occurs in a traditional SMP architecture. In a collocated
join, each SPU can operate independently of the others without network traffic or
communication between SPUs.
When the system redistributes data, it sends each record in the table to a single
SPU; which SPU receives a particular record depends on the record. Each SPU scans
its own portions of
the table and extracts only the needed columns, determines the destination SPU,
and transmits the records across the internal network fabric. The system performs
these operations in parallel across all SPUs.
When the system broadcasts data, it sends every record in the table to every SPU.
Depending on the size of the table and the way the data is distributed, one method
might be more cost-effective than the other.
Related concepts:
“Execution plans” on page 12-28
Verify distribution
When the system creates records, it assigns them to a logical data slice based on
their distribution key value. You can use the datasliceid keyword in queries to
determine how many records are stored on each data slice and thus, whether the
data is distributed evenly across all data slices.
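For example, the following query (shown with a hypothetical table name) returns
the record count for each data slice:
SELECT datasliceid, COUNT(*) FROM sales GROUP BY datasliceid ORDER BY datasliceid;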
You can also view the distribution from the NzAdmin tool. To view record
distribution for a table, you must have the following object privileges:
v List on the database
v List on the table
Procedure
1. Click the Database tab on the NzAdmin tool main window.
2. In the left pane, click Databases > a database > Tables.
3. In the right pane, right-click a table, then click Record Distribution.
The Record Distribution window displays the distribution of data across all the
data slices in your system for the specific table. The Records column displays
the total number of records, the minimum number of records, the maximum
number of records, the average records per data slice, and the standard
deviation (population computation). The Distribution Columns Section displays
the distribution key columns for the table.
4. To see the specific record count for a data slice, place your cursor over an
individual data slice bar. The system displays the record count and the data
slice identifier in the status bar.
Related reference:
“Indirect object privileges” on page 11-17
The IBM Netezza system controls some objects that are indirectly based on the
privileges that are associated with the object. Objects in this category include user
sessions, transactions, load sessions, and statistics.
Skew can happen while you are distributing or loading the data into the following
types of tables:
Base tables
Database administrators define the tables within databases for the user
data.
Intra-session tables
Applications or SQL users create temp tables.
Related reference:
“Disk space threshold notification” on page 8-22
When you use the commands CREATE TABLE or CREATE TABLE AS, you can
either specify the method or allow the Netezza to select one.
v With the DISTRIBUTE ON (hash) command, you can specify up to four columns
as the distribution key.
v If there is no obvious group of columns that can be combined as the distribution
key, you can specify random distribution. Random distribution means that the
Netezza distributes the data randomly across the data slices.
Random distribution results in the following:
– Reducing skew when you are loading data.
– Eliminating the need to pick a distribution key when you are loading a large
database that has many tables with few rows. In such cases, picking a good
distribution key might have little performance benefit, but it gains the
advantage of a dispersed distribution of data.
– Allowing you to verify a good distribution key by first loading the data
randomly, then by using the GENERATE STATISTICS command, and running
selects on the database columns to get the min/max and counts. With this
information, you can better choose which columns to use for the distribution
key.
v If you do not specify a distribution when you create a table, the system
chooses a distribution key and there is no way to control that choice.
Related concepts:
“Criteria for selecting distribution keys” on page 12-7
With NzAdmin open and while connected to the IBM Netezza system, follow these
steps to locate tables that have a skew greater than a specific threshold:
Procedure
1. Select Tools > Table Skew.
2. In the Table Skew window, specify a threshold value in megabytes and click
OK. The skew threshold specifies the difference between the size of the
smallest data slice for a table and the size of the largest data slice for that table.
As the system checks the tables, NzAdmin displays a wait dialog that you can
cancel at any time to stop the processing.
If any table meets or exceeds the skew threshold value, the NzAdmin tool
displays the table in the window. You can sort the columns in ascending or
descending order.
Results
If no tables meet the skew threshold, the system displays the message: No tables
meet the specified threshold.
The following figure shows a simple model of a table, such as a transaction table.
In its unorganized form, the data is ordered by the date and time that each
transaction occurred, and the color indicates a unique transaction. If your queries
on the table most often query by date/time restrictions, those queries run well
because the date/time organization matches the common restrictions of the
queries.
However, if most queries restrict on transaction type, you can increase query
performance by organizing the records by transaction type. Queries that restrict on
transaction type will have improved performance because the records are
organized and grouped by the key restriction; the query can obtain the relevant
records more quickly, whereas it would have to scan much more of the table in
the date/time organization to find the relevant transactions. By organizing the data
in the table so that commonly filtered data is located in the same or nearby disk
extents, your queries can take advantage of zone maps to eliminate unnecessary
disk scans to find the relevant records.
CBTs are most often used for large fact or event tables that can have millions or
billions of rows. If the table does not have a record organization that matches the
types of queries that run against it, scanning the records of such a large table
requires a lengthy processing time as full disk scans can be needed to gather the
relevant records. By reorganizing the table to match your queries against it, you
can group the records to take advantage of zone maps and improve performance.
Zone maps summarize the range of data inside the columns of the records that are
saved in a disk extent; organizing keys help to narrow the range of data within the
extent by grouping the columns that you most often query. If the data is well
organized within the extent and the zone maps have smaller “ranges” of data,
queries run faster because Netezza can skip the extents that contain unrelated data
and direct its resources to processing the data that matches the query.
As a best practice, review the design and columns of your large fact tables and the
types of queries that run against them. If you typically run queries on one
dimension, such as date, you can load the data by date to take advantage of the
zone maps. If you typically query a table by two dimensions, such as by storeId
and customerID for example, CBTs can help to improve the query performance
against that table.
The organizing keys must be columns that can be referenced in zone maps. By
default, Netezza creates zone maps for columns of the following data types:
v Integer (1-byte, 2-byte, 4-byte, and 8-byte)
v Date
v Timestamp
In addition, Netezza also creates zone maps for the following data types if
columns of this type are used as the ORDER BY restriction for a materialized view
or as the organizing key of a CBT:
v Char, all sizes, but only the first 8 bytes are used in the zone map
v Varchar, all sizes, but only the first 8 bytes are used in the zone map
v Nchar, all sizes, but only the first 8 bytes are used in the zone map
v Nvarchar, all sizes, but only the first 8 bytes are used in the zone map
v Numeric, all sizes up to and including numeric(18)
v Float
v Double
v Bool
v Time
v Time with timezone
v Interval
You specify the organizing keys for a table when you create it (such as using the
CREATE TABLE command), or when you alter it (such as using ALTER TABLE).
When you define the organizing keys for a table, Netezza does not automatically
reorganize the records; you use the GROOM TABLE command to start the
reorganization process.
You can add to, change, or drop the organizing keys for a table by using ALTER
TABLE. The additional or changed keys take effect immediately, but you must
groom the table to reorganize the records to the new keys. You cannot drop a
column from a table if that column is specified as an organizing key for that table.
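As an illustration, the following sketch uses a hypothetical transactions table to
define organizing keys at creation time, change them later, and groom the table to
apply the new organization:
CREATE TABLE transactions (txn_date date, txn_type int, amount numeric(12,2))
ORGANIZE ON (txn_type, txn_date);
ALTER TABLE transactions ORGANIZE ON (txn_type);
GROOM TABLE transactions RECORDS ALL;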
The GROOM TABLE command processes and reorganizes the table records in each
data slice in a series of steps. Users can do tasks such as SELECT, UPDATE,
DELETE, and INSERT operations while the online data grooming is taking place.
The SELECT operations run in parallel with the groom operations; the INSERT,
UPDATE, and DELETE operations run serially between the groom steps. For CBTs,
the groom steps are slightly longer than for non-CBT tables, so INSERT, UPDATE,
and DELETE operations might pend for a longer time until the current step
completes.
Note: When you specify organizing keys for an existing table to make it a CBT, the
new organization can affect the compression size of the table. The new
organization can create sequences of records that improve the overall compression
benefit, or it can create sequences that do not compress as well. Following a groom
operation, your table size can change somewhat from its size compared to the
previous organization.
Database statistics
For the system to create the best execution plan for a query, it evaluates what it
knows about the database tables that it accesses. Without up-to-date statistics, the
system uses internal, default values that are independent of the actual table and
which result in suboptimal queries with long run times.
The GENERATE STATISTICS command collects this information. If you have the
GenStats privilege, you can run this command on a database, table, or individual
columns. By default, the admin user can run the command on any database (to
process all the tables in the database) or any individual table.
The admin user can assign other users this privilege. For example, to give user1
privilege to run GENERATE STATISTICS on one or all tables in the DEV database,
the admin user must grant user1 LIST privilege on tables in the system database,
and GenStats privilege on tables in the dev database, as in these sample SQL commands:
SYSTEM(ADMIN)=> GRANT LIST ON TABLE TO user1;
DEV(ADMIN)=> GRANT GENSTATS ON TABLE TO user1;
For more information about the GenStats privilege, see Table 11-1 on page 11-10.
The following table describes the nzsql command syntax for these cases.
Table 12-4. Generate statistics syntax
Description Syntax
A database (all tables) GENERATE STATISTICS;
A specific table (all columns) GENERATE STATISTICS ON table_name;
Individual columns in a table GENERATE STATISTICS ON my_table(name, address,
zip);
The Netezza system maintains certain statistics when you perform database
operations.
v When you use the CREATE TABLE AS command, the system maintains the
min/max, null, and estimated dispersion values automatically.
v When you use the INSERT or UPDATE commands, the system maintains the
min/max values for all non-character fields.
The following table describes when the Netezza automatically maintains table
statistics.
Table 12-5. Automatic Statistics
Command Row counts Min/Max Null Dispersion (estimated) Zone maps
CREATE TABLE AS yes yes yes yes yes
INSERT yes yes no no yes
DELETE no no no no no
UPDATE yes yes no no yes
GROOM TABLE no no no no yes
For more information about the GENERATE STATISTICS command, see the IBM
Netezza Database User’s Guide.
Related concepts:
“Groom tables” on page 12-20
JIT statistics are not run on system tables, external tables, or virtual tables. JIT
statistics improve selectivity estimations when a table contains data skew or when
there are complex column/join restrictions. The system also uses JIT statistics to
avoid broadcasting large tables that were estimated to be small based on available
statistics.
JIT statistics use sampler scan functionality and zone map information to
conditionally collect several pieces of information:
v The number of rows that are scanned for the target table
v The number of extents that are scanned for the target table
v The number of maximum extents that are scanned for the target table on the
data slices with the greatest skew
v The number of rows that are scanned for the target table that apply to each join
v The number of unique values for any target table column that is used in
subsequent join or group by processing
This information is conditionally requested for and used in estimating the number
of rows that result from a table scan, join, or “group by” operation.
Note: JIT statistics do not eliminate the need to run the GENERATE STATISTICS
command. While JIT statistics help guide row estimation, there are situations
where the catalog information calculated by GENERATE STATISTICS is used in
subsequent calculations to complement the row estimations. Depending on table
size, the GENERATE STATISTICS process does not collect dispersion because the
JIT statistics scan estimates it on-the-fly as needed.
The system automatically runs JIT statistics for user tables when it detects the
following conditions:
v Tables that contain more than 5,000,000 records.
v Queries that contain at least one column restriction.
v Tables that participate in a join or have an associated materialized view. JIT
statistics are integrated with materialized views to ensure that the exact number
of extents is scanned.
The system runs JIT statistics even in EXPLAIN mode. To check whether JIT
statistics were run, review the EXPLAIN VERBOSE output and look for cardinality
estimations that are flagged with the label JIT.
Zone maps
Zone maps are automatically generated internal tables that the IBM Netezza
system uses to improve the throughput and response time of SQL queries against
large grouped or nearly ordered date, timestamp, byteint, smallint, integer, and
bigint data types.
Zone maps reduce disk scan operations that are required to retrieve data by
eliminating records outside the start and end range of a WHERE clause on
restricted scan queries. The IBM Netezza Storage Manager uses zone maps to skip
portions of tables that do not contain rows of interest and thus reduces the number
of disk pages and extents to scan and the search time, disk contention, and disk
I/O.
In release 7.2.1, Netezza adds the table-oriented zone map feature that organizes
the zone maps such that the zone map statistics for all columns from the same
table are stored together in the dataslice. Table-oriented zone maps are optimized
for improved performance of incremental (or trickle) loads. Table-oriented zone
maps also minimize interactions between concurrent loads to a subset of tables in
the system, from queries running against a different subset of tables. The zone map
data remains the same, so table-oriented zone maps provide the same benefit for
query performance.
The zone map format is a system-wide setting. For new systems that are initialized
with NPS 7.2.1, the default is to use table-oriented zone maps for new user tables.
When you upgrade an NPS appliance to release 7.2.1, the existing column-oriented
zone maps are preserved.
You can convert back to column-oriented zone maps by using the
nzzonemapformat -column command; this conversion also requires the system to be
paused.
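A sketch of the conversion sequence follows; the use of nzsystem pause and
nzsystem resume reflects the assumption that the system is paused only for the
duration of the conversion:
[nz@nzhost ~]$ nzsystem pause
[nz@nzhost ~]$ nzzonemapformat -column
[nz@nzhost ~]$ nzsystem resume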
If your system uses table-oriented zone maps, you cannot downgrade to a release
before 7.2.1 until you convert the zone maps back to column-oriented format. The
downgrade process stops with a message that zone maps must be converted back
to column-oriented zone maps if table-oriented zone maps are present. After
performing the zone map conversion using the nzzonemapformat utility, run the
downgrade again.
Ordered table data might contain rolling history data that represents many months
or years of activity. A rolling history table typically contains many records. Each
day, the new data is inserted and the data for the oldest day is deleted. Because
these rows are historical in nature, they are rarely, if ever, modified after insertion.
Typically, users run queries against a subset of history such as the records for one
week, one month, or one quarter. To optimize query performance, zone maps help
to eliminate scans of the data that is outside the range of interest.
Groom tables
As part of your routine database maintenance activities, plan to recover disk space
that is occupied by outdated or deleted rows. In normal IBM Netezza operation, an
update or delete of a table row does not remove the old tuple (version of the row).
This approach benefits multiversion concurrency control by retaining tuples that
can potentially be visible to other transactions. Over time however, the outdated or
deleted tuples are of no interest to any transaction. After you capture them in a
backup, you can reclaim the space that they occupy by using the SQL GROOM
TABLE command.
Note: Starting in Release 6.0, you use the GROOM TABLE command to maintain
the user tables by reclaiming disk space for deleted or outdated rows, and to
reorganize the tables by their organizing keys. The GROOM TABLE command
processes and reorganizes the table records in each data slice in a series of steps.
Users can do tasks such as SELECT, UPDATE, DELETE, and INSERT operations
while the data grooming is taking place. The SELECT and INSERT operations run
in parallel with the groom steps; any UPDATE and DELETE operations run serially
between the groom steps. For details about the GROOM TABLE command, see the
IBM Netezza Database User’s Guide.
Keep in mind the following when you groom tables to reclaim disk space:
v Groom tables that receive frequent updates or deletes more often than tables that
are seldom updated.
v If you have a mixture of large tables, some of which are heavily updated and
others that are seldom updated, you might want to set up periodic tasks that
routinely groom the frequently updated tables.
v Grooming deleted records has no effect on your database statistics because the
process physically removes records that were already “logically” deleted. When
you groom a table, the system leaves the min/max, null, and estimated
dispersion values unchanged.
v Reclaiming records does affect where the remaining records in the table are
located. The system updates the zone map accordingly.
v If you truncated a table and there are in-flight transactions that started before
the TRUNCATE query, note that the groom process does not reclaim the
truncated rows until after the last in-flight transaction has committed or aborted.
Tip: When you delete all the contents of a table, consider using the TRUNCATE
command rather than the DELETE command, which eliminates the need to run the
GROOM TABLE command.
Related concepts:
“GENERATE STATISTICS command” on page 12-17
The script lists any CBTs in the specified databases that have 960 or more
ungroomed or empty pages in any one data slice. The script outputs the SQL
commands that identify the databases and the GROOM TABLE commands for any
CBTs that meet the groom threshold.
You can run these commands from a command line, or output the command to a
file that you can use as an input script to the nzsql command. You can use script
options to specify the databases to search and the threshold for the number of
ungroomed pages in a data slice.
The script can take a few minutes to run, depending on the number of databases
and tables that it checks. If the command finds no CBTs that meet the groom
threshold criteria, the command prompt displays with no command output. As the
sample shows, one CBT meets the user-supplied threshold criteria.
If the script reports any CBTs that would benefit from a groom, you can connect to
each database and run the GROOM TABLE command manually for the specified
table. Or, you can also direct the output of the command to a file that you can use
as an input script to the nzsql command. For example:
[nz@nzhost tools]$ ./cbts_needing_groom -alldbs -th 400 >/export/home/nz/testgrm.sql
[nz@nzhost tools]$ nzsql -f /export/home/nz/testgrm.sql
You are now connected to database "my_db".
nzsql:/export/home/nz/testgrm.sql:2: NOTICE: Groom processed 4037
pages; purged 0 records; scan size shrunk by 1 pages; table size shrunk
by 1 extents.
GROOM ORGANIZE READY
Related concepts:
“Organization percentage”
Organization percentage
When you view the status information for tables, such as using the NzAdmin
interface or various system views, the information contains an organization
percentage. For clustered base tables (CBTs), the organization percentage shows the
percentage of the table data that is organized based on the specified organizing
keys for that table. Organized tables typically have a 100% organization
percentage, while tables that are not yet organized have a 0% percentage. The
organization percentage does not apply to tables which are not CBTs.
After you specify the organizing keys for the table and load its data, you typically
run a GROOM TABLE RECORDS ALL command to reorganize the data according
to the keys. After you insert any new data in the table, update the organization by
using the GROOM TABLE RECORDS READY command to ensure that the
organization is up-to-date.
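A minimal sketch with a hypothetical CBT named sales follows:
MYDB.SCH1(USER)=> GROOM TABLE sales RECORDS ALL;
After new rows are inserted, update the organization:
MYDB.SCH1(USER)=> GROOM TABLE sales RECORDS READY;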
For example:
1. Run a full backup.
2. Delete data B.
3. Run an incremental backup, which captures data B marked as deleted.
4. Delete D.
5. Run GROOM TABLE, which removes B but not D, because D was not captured
in the last backup.
If you maintain two backup sets for a database and you do not want the GROOM
TABLE command to use the default backup set, you can use the backupset option
to specify another backup set. Run the backup history report to learn the ID of the
backup set you want to specify.
Session management
A session represents a single connection to an IBM Netezza appliance.
You must be the administrator or have the appropriate permissions to show and
manage sessions and transactions. Also, you cannot use a Release 5.0 nzsession
client command to manage sessions on an IBM Netezza system that is running a
release before 5.0.
Related reference:
“The nzsession command” on page A-49
Use the nzsession command to view and manage sessions.
You can be logged in as any database user to use the nzsession show command;
however, some of the data that is displayed by the command can be obscured if
your account does not have correct privileges. The admin user can see all the
information.
If you are a database user who does not have any special privileges, information
such as the user name, database, client PID, and SQL command display only as
asterisks:
nzsession show -u user1 -pw pass
ID Type User Start Time PID Database Schema State Priority
Client IP Client Command
Name
PID
----- ---- ----- ----------------------- ----- --------- ------ ------ --------
--------- ------ ------------------------
43826 sql ***** 24-Feb-13, 16:49:18 EST 14840 ***** ***** active normal
***** *****
43876 sql user1 24-Feb-13, 16:54:27 EST 17257 SYSTEM ***** active normal
127.0.0.1 17256 SELECT session_id, clien
To abort a session, enter: nzsession abort -u admin -pw password -id 7895
Important: Do not abort system sessions because it can cause your system to fail
to restart.
You can abort SQL, client, load, backup, and restore sessions. To abort transaction
SID 31334, enter the following command:
nzsession abortTxn -u admin -pw password -id 31334
Transactions
A transaction is a series of one or more operations on database-related objects,
data, or both.
The following activities do not count against the read/write transaction limit:
v Committed transactions
v Transactions that have finished rolling back
v SELECT statements that are not inside a multi-statement transaction
v Transactions that create or modify temporary tables only, or modify only tables
that are created within the same transaction (for example, CREATE TABLE ...AS
SELECT...)
The Netezza system does not use conventional locking to enforce consistency
among concurrently running transactions, but instead uses a combination of
multi-versioning and serialization dependency checking.
v With multi-versioning, each transaction sees a consistent state that is isolated
from other transactions that have not been committed. The Netezza hardware
ensures that the system can quickly provide the correct view to each transaction.
v With serialization dependency checking, nonserializable executions are
prevented. If two concurrent transactions attempt to modify the same data, the
system automatically rolls back the youngest transaction. This is a form of
optimistic concurrency control that is suitable for low-conflict environments such
as data warehouses.
The system responds as follows for an implicit transaction that fails serialization:
v The system waits for the completion of the transaction that caused the
serialization conflict.
v After that transaction finishes, either by commit or abort, the system resubmits
the waiting requests.
The system saves and queues implicit transactions for up to 360 minutes (the
default). If an implicit transaction waits for more than 360 minutes (or six hours),
the transaction fails and returns the error message ERROR: Too many concurrent
transactions. You can modify this default timeout setting in two ways:
v To set the value for the current session, issue the following command:
SET serialization_queue_timeout = <number of minutes>
v To make the setting global, set the variable serialization_queue_timeout in
postgresql.conf.
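For example, to raise the timeout to 120 minutes for the current session (the value is illustrative):
SET serialization_queue_timeout = 120;
To make the same value the global default, add a line such as serialization_queue_timeout = 120 to postgresql.conf, assuming the usual name = value format of that file.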
A read-only explicit transaction that issues only SELECT statements also queues
unless a SET SESSION READ ONLY is executed in the session before the BEGIN
statement.
This queuing behavior is a change from previous IBM Netezza releases, where a
BEGIN that encounters 63 concurrent read/write transactions is accepted by
Netezza but the client transaction is forced to be read-only. If you want to continue
using the previous behavior, issue SET begin_queue_if_full = false in the session
before you issue the BEGIN statement. Keep in mind that if a statement that
modifies non-temporary data is issued by such a transaction, the statement fails
and is not queued.
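For example, a sketch of a read-only explicit transaction that avoids queuing (the query is illustrative):
SET SESSION READ ONLY;
BEGIN;
SELECT COUNT(*) FROM orders;
COMMIT;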
The system might redistribute data for joins, grouping aggregates, table creation, or
data loading. Decisions about redistribution are made by the planner and are
based on costs like expected table sizes. (The planner tries to avoid redistributing
large tables because of the performance impact.)
Note: Review the plan if you want to know whether the planner redistributed or
broadcast your data. The EXPLAIN VERBOSE command displays the text:
download (distribute or broadcast).
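For example, a sketch (table and column names are illustrative); look for the download (distribute or broadcast) text in the output:
EXPLAIN VERBOSE
SELECT c.region, SUM(o.amount)
FROM orders o JOIN customers c ON o.cust_id = c.cust_id
GROUP BY c.region;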
The optimizer can also dynamically rewrite queries to improve query performance.
Many data warehouses use BI applications that generate SQL that is designed to
run on multiple vendors' databases. The portability of these applications is often at
the expense of efficient SQL. The SQL that the application generates does not take
advantage of the vendor-specific enhancements, capabilities, or strengths. Hence,
the optimizer might rewrite these queries to improve query performance.
Execution plans
The optimizer uses the following statistics to determine the optimal execution plan
for queries:
v The number of rows in the table
v The number of unique or distinct values of each column
v The number of NULLs in each column
v The minimum and maximum of each column
For the optimizer to create the best execution plan that results in the best
performance, it must have the most up-to-date statistics.
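For example, a minimal sketch that refreshes the statistics for one table (the table name is illustrative):
GENERATE STATISTICS ON sales_fact;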
Related concepts:
“Dynamic redistribution or broadcasts” on page 12-8
If your database design or data distribution precludes you from distributing certain
tables on the join key (column), the IBM Netezza system dynamically redistributes
or broadcasts the required data.
“Database statistics” on page 12-15
You can also use the NzAdmin tool to display information about queries.
Note: This version of the query history and status feature is provided for
compatibility with earlier releases and will be deprecated in a future release.
You do not need to be the administrator to view these views, but to see the history
records you must have List permission on the users who ran the queries and on the
database objects involved.
For example, to grant admin1 permission to view bob’s queries on database emp,
use the following SQL commands:
GRANT LIST ON bob TO admin1;
GRANT LIST ON emp TO admin1;
You can also use the nzstats command to view the Query Table and Query
History Table. For more information, see Table 16-12 on page 16-8 and Table 16-13
on page 16-9.
The following table lists the _v_qrystat view, which lists active queries.
Table 12-8. The _v_qrystat view
Columns Description
Session Id The ID of the session that initiated this query.
Plan Id The internal ID of the plan that is associated with this query.
Client Id The internal client ID associated with this query.
Client IP address The client IP address.
SQL statement The SQL statement. (Note that the statement is not truncated as it is
when you view a statement using the nzstats command.)
State The state number.
Submit date The date and time the query was submitted.
Start date The date and time the query started running.
Priority The priority number.
Priority text The priority of the queue when submitted (normal or high).
Estimated cost The estimated cost, as determined by the optimizer. The units are
thousandths of a second, that is, 1000 equals one second.
Estimated disk The estimated disk usage, as determined by the optimizer.
Estimated mem The estimated memory usage, as determined by the optimizer.
Snippets The number of snippets in the plan for this query.
Current Snippet The current snippet the system is processing.
The following table describes the _v_qryhist view, which lists recent queries.
Table 12-9. The _v_qryhist view
Columns Description
Session Id The ID of the session that initiated this query.
Plan Id The internal ID of the plan that is associated with this query.
Client Id The internal client ID associated with this query.
Client IP address The client IP address.
DB name The name of the database the query ran on.
User The user name.
SQL statement The SQL statement. (Note that the statement is not truncated as it is
when you view a statement using the nzstats command.)
Submit date The date and time the query was submitted.
Start date The date and time the query started running.
End date The date and time that the query ended.
Priority The priority number.
Priority text The priority of the queue when submitted (normal or high).
Estimated cost The estimated cost, as determined by the optimizer.
Estimated disk The estimated disk usage, as determined by the optimizer.
Estimated mem The estimated memory usage, as determined by the optimizer.
Snippets The number of snippets in the plan for this query.
Snippet done The number of snippets that have completed.
Result rows The number of rows in the result.
Result bytes The number of bytes in the result.
Related concepts:
Chapter 14, “History data collection,” on page 14-1
A Netezza system can be configured to capture information about user activity
such as queries, query plans, table access, column access, session creation, and
failed authentication requests. This information is called history data. Database
administrators can use history data to gain insight into usage patterns.
Important: As a best practice, make sure that you schedule regular backups of
your user databases and your system catalog to ensure that you can restore your
Netezza system. Make sure that you run backups before and after major system
changes so that you have “snapshots” of the system before and after those
changes. A regular and current set of backups can protect against loss of data that
can occur because of events such as disasters, hardware failures, accidental data
loss, or incorrect changes to existing databases.
The following table lists the differences among the backup and restore methods.
Table 13-1. Choose a backup and restore method

Feature                                       nzbackup and nzrestore   Compressed external tables   Text format external tables (a)
Object definition backup                      Yes                      -                            -
Full automatic database backup                Yes                      -                            -
Manual per-table backup                       -                        Yes                          Yes
Manual per-table restore                      Yes                      Yes                          Yes
Veritas NetBackup™                            Yes                      -                            -
IBM Spectrum Protect (formerly Tivoli®        Yes                      -                            -
Storage Manager)
EMC® NetWorker®                               Yes                      -                            -
Automatic incremental                         Yes                      -                            -
Compressed internal format                    Yes                      Yes                          -

(a) This method usually takes more time to complete than the compressed internal
format backups and loads.
Note: The backup and restore processes and messages often use the term schema
in the context of a definition, that is, the definition of a database, its objects, and the
access privileges granted within the database. In these sections, the term schema
does not mean the schema object inside a database (for systems that support
multiple schemas in a database). The Netezza backup and restore features do not
support the ability to back up a database schema (that is, the object).
The CREATE EXTERNAL TABLE command and the procedures for using external
tables are described in detail in the IBM Netezza Data Loading Guide.
The Netezza backup processes do not back up host software such as the Linux
operating system files or any applications that you might install on the Netezza
host, such as the IBM Netezza Performance Portal client. If you accidentally
remove files in the IBM Netezza Performance Portal installation directories, you
can reinstall the IBM Netezza Performance Portal client to restore them. If you
accidentally delete Linux host operating system or firmware files, contact Netezza
Support for assistance in restoring them.
The Netezza backup and restore operations can use network file system locations
and several third-party solutions such as IBM Spectrum Protect (formerly Tivoli
Storage Manager), Veritas NetBackup, and EMC NetWorker as destinations.
Note: Throughout these topics, the guide uses the terms Tivoli Storage Manager
and TSM to refer to the applications, clients, and features of the IBM Spectrum
Protect product family.
Database completeness
The standard backup and restore by using the nzbackup and nzrestore commands
provide transactionally consistent, automated backup and restore of the definitions
and data for all objects of a database, including ownership and permissions for
objects within that database. You can use these commands to back up and restore
an entire database, and to restore a specific table in a database.
The nzrestore command requires the database to be dropped or empty before you
restore the database. Similarly, before you restore a table, you must first drop the
table or use the -droptables option to allow the command to drop a table that is
going to be restored.
Portability
Before you run a backup, consider where you plan to restore the data. For
example, if you are restoring data to the same IBM Netezza system or to another
Netezza system (which can be a different model type or have a later software
release), use the compressed internal format files that are created by the nzbackup
command. The compressed internal format files are smaller and often load more
quickly than text external format files. You can restore a database that is created on
one Netezza model type to a different Netezza model type, such as a backup from
an IBM Netezza 1000-6 to a 1000-12, if the destination Netezza has the same or
later Netezza release. A restore runs slower when you change the destination
model type because the host on the target system must process and distribute the
data slices according to “data slices to SPU” topology on the target model.
When you are transferring data to a new Netezza system or when you are
restoring row-secure tables, use the nzrestore -globals operation to restore the
users, groups, and privileges (that is, the access control and security information)
first before you restore the databases and tables. If the security information
required by a row-secure table is not present on the system, the restore process
exits with an error. For more information about multi-level security, see the IBM
Netezza Advanced Security Administrator's Guide.
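For example, a sketch of this ordering when restoring from file system backups (the directory and database name are illustrative):
nzrestore -globals -u admin -pw password -dir /backups/nzbak
nzrestore -db sales -u admin -pw password -dir /backups/nzbak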
If you plan to load the Netezza data to a different system (that is, a non-Netezza
system), the text format external tables are the most portable. Data in text external
tables can be read by any product that can read text files, and can be loaded into
any database that can read delimited text files.
Note: The term compression in the database backup and restore context refers to the
compressed internal format of external tables, which is different from the
compressed data blocks and tables that are created by the Compress Engine.
Throughout this section, compression refers to the compressed internal format.
A compressed binary format external table (also known as an internal format table)
is a proprietary format which typically yields smaller data files, retains information
about the IBM Netezza topology, and thus is often faster to back up and restore.
The alternative to compressed binary format is text format, which is a
non-proprietary external table format that is independent of the Netezza topology,
but yields larger files and can be slower to back up and restore.
The different backup and restore methods handle data compression in the
following manner:
v When you use the standard backup by using the nzbackup and nzrestore
commands, the system automatically uses compressed external tables as the data
transfer mechanism.
v When you use compressed external table unload, the system compresses the
data and only uncompresses it when you reload the data.
Use manually created external compressed tables for backup when you want
table-level backup or the ability to send data to a named pipe, for example,
when you use a named pipe with a third-party backup application (see the
example after this list).
v When you use text format unload, the data is not compressed. For large tables, it
is the slowest method and the one that takes up the most storage space.
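For example, a minimal sketch of a manual unload and reload with a compressed (internal format) external table; the file path and table names are illustrative, and the full external table options are described in the IBM Netezza Data Loading Guide:
CREATE EXTERNAL TABLE '/backups/sales_fact.nz' USING (FORMAT 'internal' COMPRESS true) AS SELECT * FROM sales_fact;
INSERT INTO sales_fact_copy SELECT * FROM EXTERNAL '/backups/sales_fact.nz' SAMEAS sales_fact_copy USING (FORMAT 'internal' COMPRESS true);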
Multi-stream backup
An IBM Netezza backup can be a multi-stream operation.
If you specify several file system locations, or if you use third-party backup tools
that support multiple connections, the backup process can parallelize the work to
send the data to the backup destinations. Multi-stream support can improve
backup performance by reducing the time that is required to transfer the data to
the destination.
Important: Make sure that you have information about your backup destinations.
If your Netezza database backup will be stored on multiple locations/directories
managed by different backup devices, multi-stream backups can help to improve
the data transfer rates to those different destinations. If you use multiple streams,
make sure that your storage device can accept multiple streams in parallel
efficiently so that you can avoid situations where multiple streams might contend
for a single shared resource. Consult with your backup storage administrator to
determine whether multi-stream backups are appropriate for your environment.
By default, a multi-stream nzbackup process waits up to five minutes for all of the
resources to become available to service all the requested streams. (The
host.bnrStreamInitTimeoutSec registry setting specifies the amount of time for the
nzbackup process to wait for resources to become active.)
Only backup operations that transfer table data support multiple destinations and
streams. These operations include full and incremental backups. Other operations
(for example, -noData backup, -globals backup, and the reports) use only a single
destination and a single stream, even if you specify multiple destinations.
By default, nzbackup uses one stream for each specified filesystem destination, or
one stream for backups that do not specify the -dir option. If you set
the host.bnrNumStreamsDefault registry setting to a nonzero value, you can
specify a different streams default that will be used for all applicable nzbackup
operations. You can also specify an explicit stream count for an nzbackup command
by using the -streams parameter. If you specify a nonzero stream number for the
registry setting, you can use nzbackup -streams AUTO to explicitly specify that the
backup should use the default one stream per filesystem or one stream per
connector behavior.
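For example, a sketch of a three-stream backup to three file system destinations (paths and database name are illustrative):
nzbackup -db sales -u admin -pw password -dir /backups/fs1 /backups/fs2 /backups/fs3 -streams 3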
Multi-stream restore
Starting in NPS Release 7.2, the database restore can be a multi-stream operation.
A multi-stream restore can help to reduce the time required to restore from a
Netezza database backup archive. By default, nzrestore detects and uses the
stream count that was specified for the backup archive that you are restoring on a
per-increment basis.
To set a different default stream count for all applicable nzrestore operations, use
the host.bnrRestoreStreamsDefault registry setting and change it to a nonzero
value. You can also set an explicit stream count for an nzrestore command by
using the -streams parameter to override the registry setting. If you specify a
nonzero stream number for the registry setting, you can use nzrestore -streams
AUTO to explicitly specify that the backup stream count should be used.
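For example, a sketch that overrides the stream count for one restore (values are illustrative):
nzrestore -db sales -u admin -pw password -dir /backups/fs1 /backups/fs2 /backups/fs3 -streams 2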
Special columns
The backup and restore method that you use affects how the system retains
specials. The term specials refers to the end-user-invisible columns in every table
that the system maintains. The specials include rowid, datasliceid, createxid, and
deletexid.
The following table describes how the backup method affects these values.
Table 13-3. Retaining specials

Special      nzbackup and nzrestore             Compressed external tables         Text format external tables
rowid        Retain                             Retain                             Not unloaded
datasliceid  Retain when the system model       Retain when the model size stays   Recalculate
             size stays the same, otherwise     the same, otherwise recalculates.
             recalculates.
createxid    Receive the transaction ID of      Receive the transaction ID of      Receive the transaction ID of
             the transaction that performs      the transaction that performs      the transaction that performs
             the restore.                       the restore.                       the restore.
deletexid    Set to zero.                       Set to zero.                       Set to zero.
Note: Starting in Release 6.0.x, the nzrestore process no longer supports the
restoring of backups that are created with NPS Release 2.2 or earlier.
You can then reload those records into the original source table or a new table that
has the same table schema. For more information, see the IBM Netezza Data Loading
Guide.
When you back up the user and group information, the backup set saves
information about the password encryption. If you use a custom host key, the host
key is included in the backup set to process the account passwords during a
restore. The backup process stores an encrypted host key by using the default
encryption process, or you can use the nzbackup -secret option to encrypt the
host key by using a user-supplied string. To restore that backup set, an
administrator must specify the same string in the nzrestore -secret option. To
protect the string, it is not captured in the backup and restore log files.
The -secret option is not required. If you do not specify one, the custom host key
is encrypted by using the default encryption process. Also, the -secret option is
ignored if you do not use a custom host key for encrypting passwords on your
system.
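For example, a sketch of a -globals backup and restore that protect a custom host key with a user-supplied string (the string and directory are illustrative):
nzbackup -globals -u admin -pw password -dir /backups/globals -secret "myHostKeyPhrase"
nzrestore -globals -u admin -pw password -dir /backups/globals -secret "myHostKeyPhrase"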
If one of the destinations fills and has no free disk space, the backup process
automatically suspends write activity to that location and continues writing to the
other destinations without loss of data.
If you configure the destinations on unique disk devices that each offer good
performance and bandwidth, the database backups typically take less time to
complete than when you save the backup to only one of those file system
locations. It is important to choose your destinations carefully; for example, if you
choose two file system locations that are on the same disk, there is no performance
gain because the same disk device is writing the backup data. Also, differences in
the write-rate of each file system destination can result in varying completion
times.
You can specify the list of locations by using the nzbackup -dir option, or you can
create a text file of the locations and specify it in the nzbackup -dirfile command.
Similarly, when you restore backups that are saved on file systems, you specify the
locations where the backups are by using the nzrestore -dir option, or you can
create a text file of the locations and specify it in the nzrestore -dirfile
command. The restore can also be a multi-stream process to improve the
performance and time to restore.
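For example, a sketch of a directories file and the commands that reference it (paths and database name are illustrative):
# /export/home/nz/backup_dirs.txt (one destination per line)
/backups/fs1
/backups/fs2

nzbackup -db sales -dirfile /export/home/nz/backup_dirs.txt
nzrestore -db sales -dirfile /export/home/nz/backup_dirs.txt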
As a best practice for disaster recovery procedures, store backups on systems that
are physically and geographically separated from the system that they are
designed to recover. Make sure that you have sufficient information about your
backup destinations so that you can choose locations that offer the best results for
capacity and data transfer rates of the backup archives.
The IBM Netezza system currently offers support for the following backup and
recovery solutions:
v Veritas NetBackup
v IBM Spectrum Protect (formerly Tivoli Storage Manager)
v EMC NetWorker
To use these solutions, you typically install some client software for the solution
onto the Netezza host, and then configure some files and settings to create a
connection to the third-party server. You might also complete some configuration
steps on the third-party server to identify and define the Netezza host as a client to
that server. The installation and configuration steps vary for each solution.
You can use the NetBackup and IBM Spectrum Protect interfaces to schedule and
perform all supported Netezza backup and restore operations. You do not have to
log on to the Netezza host to perform a backup operation, or to write the backup
archive to the Netezza host disk or a Netezza mount.
The sections which describe the nzbackup and nzrestore commands also describe
how some of the command options work with the supported storage manager
solutions.
Related concepts:
“File system connector for backup and recovery” on page 13-7
The IBM Netezza system provides backup connectors that you can use to direct
your backups to specific locations such as network file system locations or to a
third-party backup and recovery solution.
“Veritas NetBackup connector” on page 13-34
The Veritas NetBackup environment includes a NetBackup server, one or more
media servers, and one or more client machines. The IBM Netezza host is a
NetBackup client machine.
“IBM Spectrum Protect (formerly Tivoli Storage Manager) connector” on page
13-42
You can use the IBM Netezza host to back up data to and restore data from
devices that are managed by an IBM Spectrum Protect server. This section
describes how to install, configure, and use the integration feature.
“EMC NetWorker connector” on page 13-61
This section describes how to back up and restore data on an IBM Netezza system
by using the EMC NetWorker connector for the Netezza appliance.
In the rare situations when a Netezza host server or disk fails, but the SPUs and
their disks are still intact, you can restore the /nz/data directory (or the directory
you use for the Netezza data directory) from the host backup without the
additional time to restore all of the databases. This option works best when you
have a host backup that is current with the latest database content and access
settings.
The nzhostbackup command pauses the system while it runs; this allows it to
checkpoint and archive the current /nz/data directory. The command resumes the
system when it completes. The command typically takes only a few minutes to
complete. Run database and host backups during a time when the Netezza system
is least busy with queries and users.
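For example, a sketch of a host backup and a later host restore (the archive path is illustrative):
nzhostbackup /nz/tmp/hostbackup.20070521
nzhostrestore /nz/tmp/hostbackup.20070521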
Important: Keep the host backups synchronized with the current database and
database backups. After you change the catalog information, such as by adding
new user accounts, new objects such as synonyms or tables, altering objects,
dropping objects, truncating tables, or grooming tables, use the nzhostbackup
command to capture the latest catalog information. Also, update your database
backups.
The nzhostrestore command pauses the system before it starts the restore, and
resumes the system after it finishes.
The nzhostrestore command synchronizes the SPUs with the restored catalog on
the host; as a result, it rolls back any transactions that occurred after the host
backup. The host restore operation cannot roll back changes such as drop table,
truncate table, or groom operations. If these changes occurred after the host
backup was made, the host restore might cause those affected tables to be in an
inconsistent state. Inspect the data in those tables, and if necessary, reload the
tables to match the time of the host backup.
After you use the nzhostrestore command, you cannot run an incremental backup
on the database; you must run a full backup first.
After the restore, the hardware IDs for the SPUs and disks typically change;
however, their location and roles remain the same as they were before the host
restore. A failed SPU can become active again after a host restore.
If any tables were created after the host backup, the nzhostrestore command
marks these tables as “orphans,” which means that they are inaccessible but they
consume disk space. The nzhostrestore command checks for orphaned tables and
creates a script that you can use to drop orphaned user tables. The nzhostrestore
command also rolls back the data on the SPUs to match the transaction point of
the catalog in the host backup.
Related reference:
“The nzhostrestore command” on page A-26
Use the nzhostrestore command to restore your IBM Netezza data directory and
metadata.
You can pass parameters to the nzbackup command directly on the command line,
or you can set some parameters as part of your environment. For example, you can
set the NZ_USER or NZ_PASSWORD environment variables instead of specifying -u or
-pw on the command line.
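For example, a sketch that sets the credentials in the environment so that the command line stays short (user, password, database name, and path are illustrative):
export NZ_USER=admin
export NZ_PASSWORD=password
nzbackup -db sales -dir /backups/fs1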
To back up a database, you must have backup privilege. If you attempt to back up
a database in which tables are being reclaimed, the backup process waits until the
reclaim finishes.
-dir directories   Specifies the backup target directories. The directories that you
                   specify are the root for all backups. The system manages the
                   backups in the subdirectories within each root directory.
                   Example: -dir /home/backup1 /home/backup2/ /home/backup3/
-dirfile file      Specifies a file with a list of backup target directories, one per
                   line.
                   Example: -dirfile /home/mybackuptargetlist
-connector name    If you have both the 32-bit and 64-bit clients installed for either
                   TSM, NetBackup, or NetWorker, the system defaults to using
                   the 32-bit client. You can force the system to use the 64-bit
                   client by specifying the name (tsm6-64, netbackup7-64, or
                   networker7-64) in the -connector option. If only the 64-bit
                   client is installed, the system uses the 64-bit client to perform
                   the backup.
-globals           If you specify this option, you cannot specify -db. For more
                   information, see “Back up and restore users, groups, and
                   permissions” on page 13-21.
-u username        Specifies the Netezza user name to connect to the database.
                   Example: -u user_1
-backupset id      The default backup set is the most recent backup set of the
                   database you specify. You can override the default by using
                   this option.
By default, the nzbackup command uses the values of the environment variables
NZ_DATABASE, NZ_USER, and NZ_PASSWORD, unless you specify values for -db for
NZ_DATABASE, -u for NZ_USER, and -pw for NZ_PASSWORD.
Backup errors
You must have the Backup privilege to back up a database. The Backup privilege
operates at the database level.
You can grant a global Backup privilege for the user to back up any database, or
you can grant a Backup privilege for the user to back up a specific database.
Related concepts:
“The nzbackup command” on page 13-11
Use the nzbackup command to back up a database, including all schema objects
and all table data within the database.
Procedure
1. Run nzsql and connect to the database you want to allow the user to back up
by entering: nzsql db1.
2. Create a user user_backup with password password. For example:
DB1.SCHEMA(ADMIN)=> CREATE USER user_backup WITH PASSWORD 'password';
3. Grant backup privilege to user_backup. For example:
DB1.SCHEMA(ADMIN)=> GRANT BACKUP TO user_backup;
To grant a user privilege to back up all databases, complete the following steps.
Procedure
1. Run nzsql and connect to the system database by entering: nzsql system
2. Create a user user_backup with password password. For example:
SYSTEM.ADMIN(ADMIN)=> CREATE USER user_backup WITH PASSWORD 'password';
3. Grant backup privilege to user_backup:
SYSTEM.ADMIN(ADMIN)=> GRANT BACKUP TO user_backup;
For example, if you performed a full backup and then a differential backup on the
database Orders, the directory structure would look as follows:
Netezza/NPSProduction/Orders/20061120120000/1/FULL
Netezza/NPSProduction/Orders/20061121120000/2/DIFF
Backup and restore both use this directory structure. Backup uses it to find backup
sets with which to associate an incremental. Restore uses it to derive incremental
restore sequences.
The backup process finds the most recent backup set for a database for incremental
backup (unless you override the backup set). The restore process finds the most
recent backup set for -db or -sourcedb, and current host or -npshost. You can
override the most recent backup set by using the nzrestore command options
-sourcedb, -npshost, or -backupset.
The most recent backup set for backup or restore is the most recently begun
backup set, or the most recent full backup.
If you move the backup archives from one storage location to another, you must
maintain the directory structure. If you want to be able to run an automated
restore, all the backup increments must be accessible.
The following figure shows sample backups, beginning with a full backup (shown
as the letter A), then a series of differential (C) and cumulative (B) backups.
The backups in the previous figure comprise a backup set, which is a collection of
backups that are written to a single location that consists of one full backup and
any number of incremental backups.
Differential backup
After you run a full backup on a database, you can run an incremental backup to
capture any changes made since the last backup. To run a differential backup on
the IBM Netezza host, you use the nzbackup command and specify the
-differential option.
The following is the syntax for a differential backup that is written to the
NetBackup application:
nzbackup -db <db_name> -differential -connector netbackup
The following is the syntax for a cumulative backup that is written to the
NetBackup application:
nzbackup -db <db_name> -cumulative -connector netbackup
Restriction: After you use the nzhostrestore command, you cannot run an
incremental backup on the database; you must run a full backup first.
You can access the Backup History report in several ways: by using the nzbackup
-history command, by using the NzAdmin tool, or the IBM Netezza Performance
Portal interface. This section describes how to use the nzbackup command; for
details on the interfaces, see the online help for NzAdmin and IBM Netezza
Performance Portal.
Your IBM Netezza user account must have appropriate permissions to view
backup history for databases:
v If you are the admin user, you can view all entries in the backup history list.
v If you are not the admin user, you can view entries if you are the database
owner, or if you have backup or restore privileges for the database.
The following is the syntax to display the backup history for a database:
nzbackup -history -db name
Database Backupset Seq # OpType Status Date Log File
-------- -------------- ----- ------- --------- ------------------- ----------------------
SQLEXT 20090109155818 1 FULL COMPLETED 2009-01-09 10:58:18 backupsvr.9598.2009-01-09.log
You can further refine your results by using the -db and -connector options, or use
the -v option for more information. You use the -db option to see only the history
of a specified database.
To back up all users, groups, and global permissions, specify nzbackup -globals. The
nzbackup -globals command backs up all users and groups regardless of whether
they are referenced by any permission grants within a database, and any security
categories, cohorts, and levels for multi-level security. The system also backs up all
global-level permissions that are not associated with particular databases. The
system does not back up permissions that are defined in specific databases. Those
permissions are saved in the regular database backups for those databases.
For example, suppose that you have four users (user1 to user4) and you grant
them the following permissions:
nzsql
SYSTEM.ADMIN(ADMIN)=> GRANT CREATE TABLE TO user1;
SYSTEM.ADMIN(ADMIN)=> \c db_product
DB_PRODUCT.SCH(ADMIN)=> GRANT CREATE TABLE TO user2;
DB_PRODUCT.SCH(ADMIN)=> GRANT LIST ON TABLE TO user3;
DB_PRODUCT.SCH(ADMIN)=> GRANT LIST ON emp TO user4;
User1 has global Create Table permission, which allows table creation in all
databases on the IBM Netezza system. User2 has Create Table permission and User3
has List permission on tables in the db_product database. User4 has List permission only on
the emp table in the database db_product.
The following table describes the results when you invoke the nzbackup and
nzrestore commands with different options.
Table 13-7. Backup and Restore Behavior

Method                              User backed up/restored   Permission backed up/restored
nzbackup/nzrestore -db db_product   user2                     CREATE tables in the db_product database.
                                    user3                     LIST on all tables in the db_product database.
                                    user4                     LIST on the emp table in the db_product database.
nzbackup/nzrestore -globals         user1                     CREATE tables in the system database.
                                    user2
                                    user3
                                    user4
By using the nzrestore -globals command, you can restore users, groups, and
permissions. The restoration of users and groups is nondestructive, that is, the
system only creates users and groups if they do not exist. It does not drop users
and groups. Permission restoration is also nondestructive, that is, the system only
grants permissions. It does not revoke permissions.
Remember: When you restore data and users from a backup, the process reverts
your system to a point in the past when the backup was made. Your user
community and their access rights might change, or if you are restoring to a new
system, a stale backup might not reflect your current user community. After you
make any significant user community changes, back up the latest changes. After
you restore from a backup, check that the resulting users, groups, and permissions
match your current community permissions.
To use the restore command, you must have the Restore privilege. The nzrestore
command restores complete databases or specific tables.
If you need to grant a user permission to restore a specific database (versus global
restore permissions), you can create an empty database and grant the user
privilege for that database. The user can then restore that database.
You can pass parameters to the nzrestore command directly on the command line,
or you can set parameters as part of your environment. For example, you can set
the NZ_USER or NZ_PASSWORD environment variables instead of specifying -u or -pw
on the command line.
When you do a full restore into a database, the nzrestore command performs the
following actions:
1. Verifies the user name that is given for backup and restore privileges.
2. Checks to see whether the database exists.
3. Re-creates the same schema for the new database, including all objects such as
tables, views, sequences, and synonyms.
4. Applies any access privileges to the database and its objects as stored in the
backup. If necessary, the command creates any users or groups that might not
currently exist on the system to apply the privileges as saved in the database backup.
If you are performing a table-level restore and the table exists in the database, the
nzrestore command drops and re-creates the table if you specify -droptables. If
you do not specify -droptables, the restore fails.
The nzrestore -noData command does not restore any table data; instead,
it creates a database or populates an empty database with the schema (definition)
from the backed-up database. The command creates the objects in the database,
such as the tables, synonyms, sequences, and views, and applies any access
permissions as defined in the database. It does not restore data to the user tables in
the database; the restored tables are empty.
In rare cases, when a schema (definition) has a large number of objects, the restore
could fail with a memory limitation. In such cases, you might adjust how you
restore your database. For example, if you attempt to restore a database that
includes many columns (such as 520,000), you would likely receive an error
message that indicates a memory limitation. The memory limitation error can
result from a large number of columns or other object definitions. You would likely
perform a no-data restore followed by two or more table-level restore operations.
Related tasks:
“Specifying restore privileges” on page 13-28
Related reference:
“The nzrestore command” on page A-47
Use the nzrestore command to restore your database from a backup.
-db dbname         Specifies the database to restore. If you specify this option, you
                   cannot specify -globals. For more information, see “Back up and
                   restore users, groups, and permissions” on page 13-21.
-dir directories   Specifies the backup source directories. If you saved the backup to
                   multiple file system locations, specify the roots of all the locations in
                   this argument. For example, if a backup was written to /home/backup1,
                   /home/backup2, and /home/backup3, you can restore the data in a
                   single operation by specifying all three locations.
-dirfile file      Specifies a file with a list of the backup source directories, one per
                   line.
-connector conname Names the connector that manages the backup that you are restoring.
                   Valid values are:
                   v filesystem
                   v tsm
                   v netbackup
                   v networker
                   If you have both the 32-bit and 64-bit clients installed for either TSM,
                   NetBackup, or NetWorker, the system defaults to using the 32-bit
                   client. You can force the system to use the 64-bit client by specifying
                   the name (tsm6-64, netbackup7-64, or networker7-64) in the
                   -connector option. If only the 64-bit client is installed, the system
                   uses the 64-bit client to perform the restore.
-increment         Specifies the increment to restore. After you run a partial restore, you
                   can specify NEXT to restore the next increment from the backup set. If
                   you specify REST, the command restores the remaining increments
                   from the backup set.
-streams N         Specifies a stream count for the command. If you do not specify a
                   -streams value, the restore process uses the number of streams
                   defined by the host.bnrRestoreStreamsDefault registry setting.
-noData            Restores only the database schema (the definitions of objects and
                   access permissions), but not the data in the restored tables.
                   For file system backup locations, you must also specify -dir for the
                   location of the backup archive and -db for a specific database.
-history           Prints a restore history report.
-incrementlist     Prints a report of the available backup sets and increments.
-extract           With the -extract option, the restore command does not restore the
                   specified backup set or files. The -extract option causes the
                   command to skip the restore operation and output the requested file
                   or list.
-extractTo path    Specifies the name of a file or a directory where you want to save the
                   extracted output. If you do not specify a directory, the -extract option
                   saves the file in the current directory where you ran the nzrestore
                   command.
The Restore privilege operates at the database level. You can grant a global Restore
privilege for the user to restore any database, or you can grant Restore privilege to
allow the user to restore only a specific database.
Results
The restored database is owned by the original creator. If that user no longer exists,
the system displays a warning and changes the ownership to the admin user.
Related concepts:
“The nzrestore command” on page 13-22
You can use the nzrestore command to restore the contents of a database.
Procedure
1. Run nzsql and connect to the database you want to allow the user to restore by
entering: nzsql db1
2. Create a user user_restore with password password. For example:
DB1.SCHEMA(ADMIN)=> CREATE USER user_restore WITH PASSWORD 'password';
3. Grant restore privilege to user_restore. For example:
DB1.SCHEMA(ADMIN)=> GRANT RESTORE TO user_restore;
If the database does not exist, you must first create an empty database and then
assign the Restore privilege to the user. You must assign the Restore privilege
on an empty database.
Procedure
1. Run nzsql and connect to the system database by entering: nzsql system
2. Create a user user_restore with password password. For example:
SYSTEM.ADMIN(ADMIN)=> CREATE USER user_restore WITH PASSWORD 'password';
3. Grant restore privilege to user_restore. For example:
SYSTEM.ADMIN(ADMIN)=> GRANT RESTORE TO user_restore;
Restore tables
You can use the nzrestore command to identify specific tables in an existing
backup archive and restore only those tables to the target database.
Table-level restore
When you request a table-level restore, the system restores individual tables from
an existing full-database backup to a specific database. Table-level restore does not
drop the target database or affect objects in that database other than those objects
that you are explicitly restoring.
Note: If you specify multiple tables with the -tables option, separate the table
names with spaces.
As in a standard restore, by default the system restores the table's definition and
data. To skip the data restore, use the -noData option.
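For example, a sketch that restores two tables from a file system backup and drops any existing copies first (table names, database name, and path are illustrative):
nzrestore -db sales -u admin -pw password -dir /backups/fs1 -tables orders customers -droptables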
Incremental restoration
You can restore an entire backup set in a single operation. This type of restore is
called a full restore. You can also restore a subset of your backups by using an
incremental or partial restore. The granularity depends on the backup increment
(full, differential, or cumulative) that corresponds to the point in time to which you
want to return. For incremental restores, you must apply the increments in
sequence.
When you restore data, the restoration software reads the metadata to frame the
increment, validates the operation against the backup set, and performs the restore.
The restore software associates the increment with a backup set on either the
source IBM Netezza or the target Storage Management System (SMS) by finding
(by default) the most recent backup of the same database from the same source
system.
For example, the following command line restores the database dev from the
backup set that is stored in a NetBackup system.
nzrestore -db dev -connector netbackup
You can override the default host and database. For example, to specify another
host use the -npshost option (where -npshost is the source IBM Netezza system
that created the backup), and to specify another database, specify the -sourcedb
option.
nzrestore -db dev -connector netbackup -npshost nps_dev -sourcedb mydev
If you do not want to restore the most recent backup set, you can specify a specific
backup set with the -backupset option.
nzrestore -db dev -connector netbackup -backupset 20060623200000
Note: Use the -incrementlist option to view a report that lists all full and
incremental backups.
For the restore to return a database to a known state, the database must not be
allowed to change during multi-step restore operations. Specifying the -lockdb
option makes the database read-only and allows subsequent restore operations to
the database.
To restore another increment after you perform a restore, you must specify -lockdb
before an append restore operation. You cannot do an append restore operation
unless the database was locked in a previous restore operation.
Up-to-x restore
Up-to-x restore restores a database from a full backup and then up to the specified
increment. You can follow the up-to-x restore with a step-by-step restore.
Issue the -incrementlist option to view a report that lists increment numbers.
For example, the following command restores the full backup of database dev and
then up to increment 4.
nzrestore -db dev -connector netbackup -increment 4
Step-by-step restore
Remember: Lock the database with the first nzrestore command and unlock it
with the last.
For example, the following command line restores the full backup and then up to a
specific incremental of the database dev, and then steps through the following
incrementals.
nzrestore -db dev -connector netbackup -increment 4 -lockdb true
nzrestore -db dev -connector netbackup -increment Next -lockdb true
nzrestore -db dev -connector netbackup -increment Next -lockdb false
To begin with the first increment when the database does not yet exist, specify the
-increment 1 option. You can then step through the increments by specifying
-increment Next.
Remainder restore
A remainder restore restores all the remaining increments from a backup set that
are not yet restored. For example, after you restore to an increment ID (and
possibly some step restores), the following command restores any remaining
increments in the backup set.
nzrestore -db dev -connector netbackup -increment REST
The following example displays the restore history:
nzrestore -history
Note: You can further refine your results by using the -db and -connector options,
or use the -v option for more information. You use the -db option to see only the
history of a specified database.
You install the 32-bit or 64-bit NetBackup Client for Linux software on the Netezza
host and the Netezza components communicate with the NetBackup client
software to perform backup and restore operations. The Netezza components do
not assume any specific media, but instead rely on the NetBackup Media Server
for configuration of the media server and storage devices. The Netezza solution
has been tested with NetBackup versions 6.5, 7.1, and 7.6.
If you plan to use multi-stream backup and multi-stream restore support, consult
with your NetBackup administrator to confirm that it is configured to support
your expected maximum stream count. NetBackup has multiple settings for
limiting concurrent stream counts. On the Master Server, there is a Maximum jobs
per client global attribute. There is also a Policy-level attribute, Limit jobs per
policy. The job limit is the minimum of the global setting and the policy setting.
There is also a Maximum concurrent jobs setting on disk storage units. See the
NetBackup documentation for more information.
Related concepts:
“Third-party backup and recovery solutions support” on page 13-8
You can use the nzbackup and nzrestore commands to save data to and restore
data from network-accessible file systems, which is the default behavior for the
backup and restore commands. You can also use these commands with supported
third-party backup and recovery products.
To back up and restore from NetBackup, you must configure the NetBackup
environment. Complete the following steps.
Procedure
1. Make the IBM Netezza host network-accessible to the NetBackup Master
Server.
2. Confirm that at least one NetBackup Media Server and storage device is
connected to NetBackup, is operational, and is available to NetBackup policies.
3. Install the NetBackup Client for Linux on the Netezza host.
4. Create a NetBackup policy for Netezza backup.
NetBackup policy
A NetBackup policy contains the configuration settings for an IBM Netezza
database backup. It defines the rules that NetBackup uses when it backs up clients.
You use the NetBackup Administration Console to configure a NetBackup policy.
For a Netezza database backup, the NetBackup policy is a “DataStore” policy. The
following table describes the relevant policy settings.
Table 13-12. NetBackup policy settings
Category    Setting          Value
Attributes  Policy Type      DataStore
Attributes  Storage Unit     Previously configured NetBackup Storage Unit, suitable for the
                             Netezza database backup destination.
Attributes  Keyword Phrase   Optional user-supplied keyword phrase. Can be used to help
                             distinguish between backups in the Client Backups Report in
                             NetBackup. You might use the database name.
Schedule    Type of Backup   Automatic for NetBackup-scheduled backups
The script file consists of an nzbackup command line with the appropriate
arguments. Each scheduled, automated backup operation with a distinct nzbackup
command line must have its own policy.
For each database, your system should have a separate policy and script file for
each backup type. For one database, you can have three policy and script files,
representing the full, differential, and cumulative backup types. If you had three
databases, you would have nine policy and script files, plus possibly one for the
-globals option you specified.
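For example, a sketch of one such backup-selection script, assuming a full backup of a database named sales and a DataStore policy named nz_sales_full on a master server named nbu-master (all of these names are illustrative):
#!/bin/bash
# Illustrative NetBackup backup-selection script for a full Netezza database backup
export NZ_USER=backupuser            # password must already be cached with nzpassword
export DATASTORE_SERVER=nbu-master   # NetBackup master server
export DATASTORE_POLICY=nz_sales_full
nzbackup -db sales -connector netbackup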
The following procedures describe in general how to use the UIs. The commands
and menus can change with updates or patches to the backup software; these
procedures are intended as a general overview.
Related concepts:
“Procedures for backing up and restoring by using Veritas NetBackup” on page
13-40
The procedures in this section describe how to perform backups and restores by
using the Veritas NetBackup utilities.
Procedure
1. Obtain the following name information for your Netezza system:
v If your system is an HA system, ask your network administrator for your
floating IP address.
v If your system is a standard (non-HA) system, ask for the external DNS
name for the Netezza host.
Note: The timeout settings are important. If a database restore fails with the
error:
Connector exited with error: ’ERROR: NetBackup getObject() failed with
errorcode (-1): Server Status: Communication with the server has not been
initiated or the server status has not been retrieved from the server
The problem can be that the CLIENT_READ_TIMEOUT set on the NetBackup server
expired before the restore finished. This error can occur when you are restoring
a database that contains many tables with small changes, such as frequent
incremental backups, or a database that contains many objects such as UDXs,
views, or tables. If your restore fails with this error, you can increase the
CLIENT_READ_TIMEOUT value on the NetBackup server, or you can take steps to
avoid the problem by specifying certain options when you create the database
backup. For example, when you create the database backup, you can specify a
multi-stream backup by using the nzbackup -streams num option, or you can
reduce the number of files that are committed in a single transaction by using
the nzbackup -connectorArgs "NBC_COMMIT_OBJECT_COUNT=n" option, or both, to
avoid the timeout error. This error message might display for other reasons, so
if this workaround does not resolve the issue, contact Netezza Support for
assistance.
6. Make sure that the backups done by one host are visible to another host. If you
have a Netezza HA environment, for example, the backups performed by Host
1 should be visible to Host 2.
There are many ways that you can make the backups from one host visible to
another. See the Veritas NetBackup Administrators Guide, Volume I for UNIX and
Linux, and specifically to the section on managing client restores. Two possible
methods follow:
v You can open access to all hosts by updating the timestamp on the following
file on the NetBackup Master Server.
touch /usr/openv/netbackup/db/altnames/No.Restrictions
If the touch command fails, make sure that the altnames directory exists. If
necessary, create the altnames directory and rerun the command.
v You can give Host1 access to all backups created by Host2 and vice versa. To
do this, update the timestamp on two files:
touch /usr/openv/netbackup/db/altnames/host1
touch /usr/openv/netbackup/db/altnames/host2
For example, if the names of your HA hosts are nps10200-ha1 and
nps10200-ha2 then you would create the following files:
touch /usr/openv/netbackup/db/altnames/nps10200-ha1
touch /usr/openv/netbackup/db/altnames/nps10200-ha2
Procedure
1. Start the NetBackup Administration Console.
2. In the right pane, select Create a Backup Policy. The Backup Policy
Configuration Wizard starts.
3. Click Next in the Welcome dialog.
4. In the Policy Name and Type dialog, type a policy name, then select
DataStore from the policy type list, and then click Next.
5. In the Client List dialog, complete the following steps.
a. Click Add.
b. Enter the floating IP address or the external DNS name (the value you
previously entered for HOSTNAME in the file backupHostname.txt) in the
Name field.
c. Click Next.
6. In the window, click the menu and select the RedHat Linux operating system
that is running on your IBM Netezza host. Most Netezza systems use Intel,
RedHat 2.4, but if you are not sure, run the uname -r command on your
Netezza host to display the kernel release number.
7. Click OK and then click Next in the Client List dialog.
The operating systems in the list are based on the client binaries that are
installed on the NetBackup master server. If you do not install the correct
client software on the NetBackup server, the list might be empty or might not
include the Red Hat Linux client software. For more information about
installing the client software, see the Veritas NetBackup Installation Guide.
8. In the Backup Type dialog, select Automatic Backup to enable it, then click
Next.
Restriction: Do not specify values for the full path script. You supply this
information in a later step.
9. In the Rotation dialog, select your time slot rotation for backups and how long
to retain the backups, then click Next.
10. In the Start Window dialog, select the time options for the backup schedule
and click Next. A dialog opens and prompts you to save or cancel the backup
policy that you created.
11. Click Finish to save the backup policy.
After you create a backup policy, complete the following steps to initiate an
automatic backup from the NetBackup Administration Console.
Rather than specify the -connectorArgs argument, you can set the environment
variables DATASTORE_SERVER and DATASTORE_POLICY. If you set the environment
variables, and then use the command-line argument -connectorArgs, the
command-line argument takes precedence.
Redirect a restore
Typically, you restore a backup to the same IBM Netezza host from which it was
created. If you want to restore a backup that is created on a different Netezza host:
v Configure Veritas NetBackup for a “redirected restore.” For more information,
see the Veritas NetBackup documentation.
v Use the -npshost option of the nzrestore command to identify the Netezza host
from which the backup was created. The following is sample syntax.
nzrestore -db dbname -connector netbackup -connectorArgs "DATASTORE_SERVER=NetBackup_master_server" -npshost origin_nps
Related tasks:
“Preparing your system for integration” on page 13-36
NetBackup troubleshooting
The Activity Monitor in the NetBackup Administration Console shows the status of
all backups and restores. If the monitor shows that a backup or restore failed, you
can double-click the failed entry to obtain more information about the problems
that caused the failure.
The nzhostbackup command creates a single file that is written to the local disk or
other storage accessible from the IBM Netezza host. You can send this file to
NetBackup by using the bpbackup command-line utility, which is included with the
NetBackup client software installed on the Netezza host. You can later transfer the
file back to its original location by using the bprestore command-line utility (also
a part of the NetBackup client software). You can then restore the file by using the
nzhostrestore command.
Procedure
1. Create a NetBackup policy of type Standard.
2. Edit the policy schedule to match your intentions for backup.
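A hedged sketch of the bpbackup invocation that the following points describe (the policy name, log file, and archive path are illustrative):
bpbackup -p nzhostbackup -w -L /nz/tmp/hostbackup.log /nz/tmp/hostbackup.20070521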
Results
Keep in mind the following important points for the bpbackup utility and the
example:
v Specify the explicit path to the bpbackup command if it is not part of your
account's PATH setting. The default location for the utility is
/usr/openv/netbackup/bin.
v In the sample command, the -L option specifies the log file where the status of
the backup operation is written. Review the file because the utility does not
return error messages to the console.
v The -w option causes the bpbackup utility to run synchronously; it does not
return until the operation completes.
v The -p option specifies the name of the NetBackup policy, which you defined in
step 1 on page 13-40.
v You can display syntax for the bpbackup utility by running bpbackup without
options.
Perform a restore
Run the bprestore NetBackup utility to restore the host backup file. The following
is an example.
bprestore -p nzhostbackup -w -L /nz/tmp/hostrestore.log
/nz/tmp/hostbackup.20070521
Keep in mind the following important points for the bprestore utility and the
example:
v Specify the explicit path to the bprestore command if it is not part of your
account PATH setting. The default location for the utility is /usr/openv/
netbackup/bin.
v You can display syntax for the bprestore utility by running bprestore without
options. See the Veritas NetBackup Commands Reference Guide.
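A hedged sketch of a script of the kind that the following points describe (the user, file names, and policy name are illustrative):
#!/bin/bash
# Illustrative script: create a host backup and send it to NetBackup
export NZ_USER=admin                               # password must already be cached with nzpassword
ARCHIVE=/nz/tmp/hostbackup.$(date +%Y%m%d)
nzhostbackup $ARCHIVE
/usr/openv/netbackup/bin/bpbackup -p nzhostbackup -w -L /nz/tmp/hostbackup.log $ARCHIVE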
Keep in mind the following important points for the sample script:
v The bpbackup utility references the nzhostbackup policy, which is a NetBackup
policy of type Standard. The policy includes a schedule that allows for a
user-directed backup during the specified time period, and lists the Netezza host
as a client.
v To run the script, you create a NetBackup policy of type DataStore. You can set
this policy to have an automatic schedule for regular host backups. You set the
frequency and time period for the backups. Ensure that the policy lists the script
file in Backup Selections. The script file reference must include the full path to
the backup file as you would reference it on the Netezza host.
v Because the script runs as root on the Netezza host, the Netezza user must be
set inside the script by using the NZ_USER variable. The user's password must be
cached by using the nzpassword utility.
Throughout these topics, the names IBM Spectrum Protect and Tivoli Storage
Manager (or TSM) both refer to IBM's family of data protection solutions.
If you plan to use multi-stream backup and multi-stream restore support, consult
with your Tivoli Storage Manager administrator to confirm that it is configured to
support your expected maximum stream count. For Tivoli Storage Manager
backups, the maximum number of streams is controlled by the MAXSESSIONS option
in the Tivoli Storage Manager Admin console (dsmadmc):
v Display the value by using query option MAXSESSIONS.
v Set the value by using setopt MAXSESSIONS value.
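For example, from within the dsmadmc console (the server prompt and the value
64 are only illustrations; size MAXSESSIONS for your expected stream count plus
other concurrent sessions):
tsm: SERVER1> query option MAXSESSIONS
tsm: SERVER1> setopt MAXSESSIONS 64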
If you specify more streams than allowed by the MAXSESSIONS value, the Tivoli
Storage Manager server displays the following error message and the backup ends:
Error: Connector init failed: 'ANS1351E (RC51) Session rejected: All server
sessions are currently in use'
Related concepts:
“Third-party backup and recovery solutions support” on page 13-8
You can use the nzbackup and nzrestore commands to save data to and restore
data from network-accessible file systems, which is the default behavior for the
backup and restore commands. You can also use these commands with supported
third-party backup and recovery products.
This guide does not provide details about the operation or administration of the
Tivoli Storage Manager server or its commands. For information about the Tivoli
Storage Manager operation and procedures, see your Tivoli Storage Manager user
documentation.
To configure encrypted backups, you must add some settings to the TSM
configuration files for the backup-archive and API clients. For each TSM server in
your environment, you can specify that the TSM backup connector use encrypted
backups to store the files sent by the Netezza backup utilities.
This section describes the procedures to make an IBM Netezza host a client to a
Tivoli Storage Manager server. The following procedure describes the overall
process for configuring a Netezza host.
Procedure
1. Prepare your system for the Tivoli Storage Manager integration.
2. Install the Tivoli Storage Manager client software on the Netezza host.
3. Set up the client configuration files.
To prepare an IBM Netezza system for integration, complete the following steps.
You can install the 32-bit or 64-bit Tivoli Storage Manager client software on your
Netezza host system to enable the integration. You can obtain the Tivoli Storage
Manager client software from IBM. If you have an HA Netezza system, repeat
these installation steps on both Host 1 and on Host 2.
To install the Tivoli Storage Manager client software on the Netezza system,
complete the following steps.
Procedure
1. Log in to the Netezza system as the root user.
2. Place the Tivoli Storage Manager UNIX client disk in the drive.
3. Mount the CD/DVD by using either of the following commands:
v mount /media/cdrom
v mount /media/cdrecorder
If you are not sure which command to use, run the ls /media command to
display the path name (cdrom or cdrecorder) to use.
4. To change to the mount point, use the cd command and specify the mount path
name that you used in step 3. This guide uses the term /mountPoint to refer to
the applicable disk mount point location on your system, as used in step 3.
cd /mountPoint
5. Change to the directory where the packages are stored, for example:
cd /mountPoint/tsmcli/linux86
6. Enter the following commands to install the 32-bit Tivoli Storage Manager
ADSM API and the Tivoli Storage Manager Backup-Archive (BA) client. The BA
client is optional, but it is recommended because it provides helpful features
such as the ability to cache passwords for Tivoli Storage Manager access and
also to create scheduled commands.
rpm -i TIVsm-API.i386.rpm
rpm -i TIVsm-BA.i386.rpm
Make sure that you use the default installation directories for the clients (which are
usually /opt/tivoli/tsm/client/api and /opt/tivoli/tsm/client/ba). After the
installation completes, proceed to the next section to configure the Netezza as a
client.
Follow these steps to set up the IBM Spectrum Protect (formerly Tivoli Storage
Manager) configuration files, which make the Netezza system a client of the Tivoli
Storage Manager server. If you have an HA Netezza system, make sure that you
repeat these configuration steps on both Host 1 and on Host 2.
There are two sets of configuration files, one set for the API client and one set for
the BA client. Follow these steps to configure the files for the API client. If you
also installed the BA client RPM, make sure that the changes that you make to
the API client files are identical to the changes that you make for the BA client
files, which are in the /opt/tivoli/tsm/client/ba/bin directory. You can either
repeat these steps for the BA client files or copy the updated API client dsm.opt
and dsm.sys files to the BA bin directory.
Procedure
1. Make sure that you are logged in to the Netezza system as root.
2. Change to the following directory:
cd /opt/tivoli/tsm/client/api/bin
3. Copy the file dsm.opt.smp to dsm.opt. Save the copy in the current directory.
For example:
cp dsm.opt.smp dsm.opt
4. Edit the dsm.opt file by using any text editor. In the dsm.opt file, proceed to
the end of the file and add the SErvername line shown at the end of the
following sample file content, where server is the host name of the Tivoli
Storage Manager server in your environment:
******************************************************************
* IBM Tivoli Storage Manager *
* *
* Sample Client User Options file for UNIX (dsm.opt.smp) *
******************************************************************
* This file contains an option you can use to specify the TSM
* server to contact if more than one is defined in your client
* system options file (dsm.sys). Copy dsm.opt.smp to dsm.opt.
* If you enter a server name for the option below, remove the
* leading asterisk (*).
******************************************************************
* SErvername A server name defined in the dsm.sys file
SErvername server
If you have multiple Tivoli Storage Manager servers in your environment, you
can add a definition for each server. However, only one definition can be the
active definition. Any additional definitions should be commented out by
using the asterisk (*) character. The active dsm.opt entry determines which
Tivoli Storage Manager server is used by the Tivoli Storage Manager
connector for backup and restore operations. If there are multiple
uncommented SERVERNAME entries in dsm.opt, the first uncommented entry
is used.
******************************************************************
SErvername server
COMMMethod TCPip
TCPPort 1500
TCPServeraddress serverIp
NODENAME client_NPS
For the NODENAME value, use the naming convention client_NPS, where
client is the host name of the Netezza host, to help uniquely identify the client
node for the Netezza host system.
If you plan to use Tivoli encryption for your backups, add the following
settings to the server definition. For the ENCRYPTIONTYPE value, use the
encryption value specific to your Tivoli configuration (AES128 is an example).
For the include.encrypt setting, you can specify a setting such as /.../* to
encrypt all of the TSM Netezza backups, or you can specify a specific Netezza
backup object and backup type. For example, /FS1/MY_DB/FULL indicates that
the TSM server should encrypt the full backups of the MY_DB database stored
in the /FS1 location. For details about the settings, see the Tivoli
documentation.
ENCRYPTKEY GENERATE
ENCRYPTIONTYPE AES128
include.encrypt /.../*
If you have multiple Tivoli Storage Manager servers in your environment, you
can create another set of these definitions and append each set to the file. For
example:
SErvername server1
COMMMethod TCPip
TCPPort 1500
SErvername server2
COMMMethod TCPip
TCPPort 1500
TCPServeraddress server2Ip
NODENAME client_NPS
If you specify more than one Tivoli Storage Manager server definition in the
dsm.sys file, you can create corresponding definitions in the dsm.opt file as
described in step 4 on page 13-45.
8. If you installed the Tivoli 5.4 client software on your hosts, you must also add
the following options in the dsm.sys file.
ENCRYPTIONTYPE DES56
PASSWORDACCESS prompt
Verify that there are no other uncommented lines for the ENCRYPTIONTYPE
and PASSWORDACCESS options.
The PASSWORDACCESS prompt option disables automatic, passwordless Tivoli
Storage Manager authentication. Each operation that uses the Tivoli Storage
Manager connector requires you to enter a password. You can supply the
password in the nzbackup and nzrestore connectorArgs option as
TSM_PASSWORD=password or you can set TSM_PASSWORD as an environment
variable.
9. Save and close the dsm.sys file.
10. If you installed the BA client kit, do one of the following steps:
v Change to the /opt/tivoli/tsm/client/ba/bin directory and repeat steps 3
on page 13-45 through 9 to configure the BA client dsm.opt and dsm.sys
files. Make sure that the changes that you make to the API client files are
identical to the changes that you make for the BA client files.
v Copy the dsm.opt and dsm.sys files from the /opt/tivoli/tsm/client/api/
bin directory to the /opt/tivoli/tsm/client/ba/bin directory.
If you encounter an error about exceeding the maximum number of objects in a
transaction, review and increase the TXNGROUPMAX setting to a
value that is larger than the maximum number of objects that a single backup
operation will try to create. For example, if you are performing incremental
backups, then use a value that is at least twice the table count. Also, add a small
number (such as five) of additional objects for backup metadata files. If your
database has UDXs, add an additional two objects for each UDX. If you are using
multi-stream backups, then use the maximum value of either double the UDXs, or
double the tables divided by the stream count, and add the additional five objects
for metadata objects.
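As a hedged illustration of this sizing rule, assume a hypothetical database with
500 tables, 3 UDXs, incremental backups, and 4 backup streams:
v Single-stream incremental backups: (2 x 500) + 5 + (2 x 3) = 1011 objects.
v Multi-stream backups with 4 streams: max(2 x 3, (2 x 500) / 4) + 5 = 250 + 5 = 255 objects.
Set TXNGROUPMAX to at least the value that applies to your backup strategy.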
To set the TXNGROUPMAX value by using the GUI, go to the Policy Domains
and Client Nodes > Your client node > Advanced Settings > Maximum size of a
transaction. The options are Use server default or Specify a number (4 - 65,000). Be
sure to repeat this process on each node (Host 1 and Host 2), and to use the same
setting for each node. If you choose Specify a number, the setting cannot exceed 65,000.
Caching a password
About this task
Optionally, you can use the IBM Spectrum Protect (formerly Tivoli Storage
Manager) connector to cache user passwords on their client system. If you cache
the password, you do not have to specify the password for commands to the Tivoli
Storage Manager connector (as described in “The nzbackup and nzrestore
commands with the Tivoli Storage Manager connector” on page 13-57).
Restriction: If you use the Tivoli Storage Manager 5.4 client, you cannot use the
cached password support. You must use PASSWORDACCESS prompt for the connector
to work correctly.
If you have an HA Netezza system, make sure that you repeat these steps on Host
1 and on Host 2.
Procedure
1. Change to the following directory:
cd /opt/tivoli/tsm/client/ba/bin
2. Edit the dsm.sys file by using any text editor and add the following line:
PASSWORDACCESS generate
Review the file to make sure that there are no other lines that contain the
PASSWORDACCESS parameter. If there are lines, comment them out.
3. Save and close the dsm.sys file.
4. As a test, log in as root and run the dsmc query session command to be
prompted for the client password.
5. Repeat steps 1 through 3 to edit the /opt/tivoli/tsm/client/api/bin/dsm.sys
API client file. This allows nzbackup to run by using the Tivoli Storage Manager
connector without specifying the Tivoli Storage Manager password.
Results
After the client authentication is successful, subsequent logins will not prompt for
a password until the password changes at the Tivoli Storage Manager server.
The tsm_server value is the host name or IP address of the Tivoli Storage Manager
server. Log in by using an account that is created for the Tivoli Storage Manager
server.
If you cannot access the web interface, the interface might be stopped on the
server. For more information about accessing the Tivoli Storage Manager server by
using a command shell or SSH session and starting the TSM server or ISC Console,
see your Tivoli Storage Manager documentation.
The following procedures assume that you use a web browser to connect to the
ISC Console and are logged in successfully.
The instructions are specific to Tivoli Storage Manager 5. The steps are similar for
Tivoli Storage Manager 6, but there might be minor changes in the names of
menus and dialogs in the later release.
To create a storage pool on the IBM Spectrum Protect (formerly Tivoli Storage
Manager) server, complete the following steps.
To create a policy domain on the IBM Spectrum Protect (formerly Tivoli Storage
Manager) server, complete the following steps.
Procedure
1. In the left navigation frame of the ISC Console, click Tivoli Storage Manager.
2. Click Policy Domains and Client Nodes.
3. Select the Tivoli Storage Manager server from which you will manage your
Netezza systems, and then select View Policy Domains from the Select Action
list.
4. Select Create a Policy Domain from the Select Action list.
5. Type a name for the new policy domain and click Next.
6. Select a storage pool for backup data from the list, such as the one you created
in “Creating a storage pool” on page 13-49, then click Next.
7. In the Assign Client Nodes Now page, the application prompts you to assign
client nodes at this time. If you already registered the client node/nodes on
your Tivoli Storage Manager server, select Yes and click Next to proceed. (The
Assign Client Nodes page opens where you can list and select client nodes to
add to the domain.) Otherwise, select No and click Next to proceed. A
Summary window opens to display messages about the successful creation of
the policy domain and its information.
8. Click Finish.
To register an IBM Netezza system on the Tivoli Storage Manager server, you
create a client node that represents the Netezza host. For an HA Netezza system,
you must create two client nodes, one for Host1 and one for Host2; complete these
steps for Host 1, then repeat them for Host 2.
Procedure
1. In the left navigation frame of the ISC Console, click Tivoli Storage Manager.
2. Click Policy Domains and Client Nodes.
3. Select the Tivoli Storage Manager server from which you will be managing
your Netezza systems, and then select View Policy Domains from the Select
Action list.
4. Select a policy domain and select Modify Policy Domain from the Select
Action list.
5. Click the arrow to the right of Client Nodes to expand the client nodes list.
6. Select Create a client node from the Select Action list.
7. Type a name for the Netezza host. The name must match the name that is
specified in the dsm.sys file on the client system.
8. Enter and confirm a password for client authentication, and choose an
expiration for the password. Click Next to continue.
9. You can either select Create administrator and assign it owner authority to
node or Assign owner authority to selected administrator, then click Next.
The administrator of a node can use the owner authority to perform
administrative tasks such as changing the client properties. A Summary
window opens to display messages about the successful creation of the client
node.
10. Click Finish. The newly created client node now displays in the Client Nodes
list.
11. Select the newly created client node and select Modify Client Node from the
Select Action list.
12. Click the Communications tab on the left.
13. Type in the TCP address and port in the fields. You can specify the host name
of the client (the Netezza system host name) and any unused port value (for
example, 9000). To list the used ports on the system, use the netstat
command.
14. Click the Advanced Settings tab on the left.
15. Change the maximum size of transaction value to a value such as 4096. The
maximum number of objects in a transaction cannot exceed 65000. Use caution
when you are selecting a maximum for the number of objects per transaction;
larger numbers can impact performance. Try to estimate the maximum
number of objects in the database and set the value accordingly. You could
begin with an estimate of three times the number of tables in the database.
16. Click OK to save the settings.
You must create a proxy node on the IBM Spectrum Protect (formerly Tivoli
Storage Manager) server for each Netezza system (HA or standard), and then grant
the client node for the Netezza system proxy authority over the proxy node. The
client node can use proxy authority to use the proxy node to represent itself; that
is, the proxy node can represent the client node.
Results
The newly created client node now displays in the Client Nodes list.
To grant the client node or nodes proxy authority over the new proxy node,
complete the following steps.
Procedure
1. In the left navigation frame of the ISC Console, click Tivoli Storage Manager.
2. Click Policy Domains and Client Nodes.
3. Select the Tivoli Storage Manager server from which you will be managing
your Netezza systems, and then select View Policy Domains from the Select
Action list.
4. Select a policy domain and select Modify Policy Domain from the Select
Action list.
5. Click the arrow to the right of Client Nodes to expand the client nodes list.
6. Select the proxy node and then select Modify Client Node from the Select
Action list.
7. Select the Proxy Authority tab on the left, and then select Grant Proxy
Authority from the Select Action list on the right.
8. Select the client node that represents the Netezza host. If the Netezza system is
an HA system, select both client nodes.
9. Click OK to complete the proxy assignment.
When you perform a full backup of a database to IBM Spectrum Protect (formerly
Tivoli Storage Manager) by using nzbackup, any previous backups of that database
are marked as inactive. The Tivoli Storage Manager server configuration settings
specify how it manages inactive files. The Tivoli Storage Manager default is to
make all inactive files immediately unavailable. If you want the ability to restore
from backup sets other than the most recent, review and adjust the Tivoli Storage
Manager configuration setting for Number of days to keep inactive versions. The
default is zero, which means that only the latest backup set is available for use in
restores.
Procedure
1. In the left navigation frame of the ISC Console, click Tivoli Storage Manager.
2. Click Policy Domains and Client Nodes. In Tivoli Storage Manager 6, this
menu is Policy Domains.
3. Select the Tivoli Storage Manager server from which you will be managing
your Netezza systems.
4. Select the policy domain which has the Netezza host machine as a client node.
5. In the Properties section, select the Management Class which governs the
Netezza host client node.
6. In the left pane of the Class Properties page, select Backup settings.
7. Review the Number of days to keep inactive versions field to specify how
long you want to keep inactive (that is, older than the latest) backup sets.
Set this value to a positive number to keep and use inactive backup sets for
that period of days. For more information about the range of values and
possible impacts to the Tivoli Storage Manager server, see the Tivoli Storage
Manager documentation.
Redirect a restore
Typically, you restore a backup to the same IBM Netezza host on which it was
created. If you want to restore a backup that was created on one Netezza host to a
different Netezza host, you must adjust the proxy settings.
For example, assume that you have a Netezza host named NPSA, for which you
defined a client node named “NPSA NPS” and a proxy node named NPSA on the
Tivoli Storage Manager server. Assume also that there is a backup file for the
NPSA host on the Tivoli Storage Manager server.
If you want to load the backup file onto a different Netezza host named NPSB,
then you must first ensure that NPSB is registered as a client to the Tivoli Storage
Manager server. Assume that there is a client node for “NPSB NPS” and a proxy
node named NPSB for this second host.
To redirect the restore file from NPSA to NPSB, you must grant the client node
“NPSB NPS” proxy authority over the proxy node NPSA. After you grant the
proxy authority to “NPSB NPS”, you are able to restore the backup for NPSA to
the NPSB host by using a command similar to the following command:
nzrestore -db database -connector tivoli -npshost NPSA
The server does not have enough recovery log space to continue the
current operation
The server does not have enough database space to continue the current
operation
There are some configuration setting changes that can help to avoid these errors
and complete the backups for large databases. The appropriate values depend on
factors such as network speed, Tivoli Storage Manager server load, and network
load. The following values are conservative estimates that are based on testing,
but the values for your environment can be different. If you encounter errors
such as timeouts and space limitations, try these conservative values and adjust
them to find the right balance for your server and environment.
v COMMTIMEOUT
Specifies the time in seconds that the Tivoli Storage Manager server waits for an
expected client response. The default is 60 seconds. You can obtain the current
value of the setting by using the QUERY OPTION COMMTIMEOUT command.
For large databases, consider increasing the value to 3600, 5400, or 7200 seconds
to avoid timeout errors, which can occur if the complete transfer of a database
does not complete within the time limit:
SETOPT COMMTIMEOUT 3600
v IDLETIMEOUT
Specifies the time in minutes that a client session can be idle before the Tivoli
Storage Manager server cancels the session. The default is 15 minutes. You can
obtain the current value of the setting by using the QUERY OPTION
IDLETIMEOUT command. For large databases, consider setting the value to 60
minutes:
SETOPT IDLETIMEOUT 60
v The default size of the Tivoli Storage Manager server database, 16 MB, might be
inadequate for large Netezza databases. Depending on the size of your largest
Netezza database, you can increase the default Tivoli Storage Manager database
size to a value such as 500 MB.
v The size of the recovery log might be inadequate for large Netezza databases or
those databases that have many objects (tables, UDXs). An increased value such
as 6 GB might be more appropriate. The recovery log should be at least twice
the size in GB as your largest table in TB. For example, if your largest table is 2
TB, the recovery log must be at least 4 GB. In addition, you might need a larger
log file if you run multiple concurrent backup jobs on the same Tivoli Storage
Manager server, such as several Netezza backups or a combination of Netezza
and other backups within the enterprise.
Procedure
1. In the left navigation frame of the ISC Console, click Tivoli Storage Manager.
2. Click Server Maintenance.
3. Select the Tivoli Storage Manager server for which your Netezza system is a
client.
4. On the Select Action list, select Server Properties.
5. Select the Database and Log tab from the left navigation frame of the Server
Properties area.
6. In the Database area on the Select Action list, select Add Volume.
7. In the Volume name field, type the absolute path of the new volume that you
want to create.
8. In the New volume size field, type an appropriate size for the new volume. If
you are not sure, use the value 500.
9. To use the new volume immediately, select When adding the new volume,
expand the database capacity by and in the field, enter a value that is smaller
than the value specified for the New volume size field.
10. Click OK to create the volume.
To add space automatically with a space trigger, complete the following steps.
Procedure
1. Repeat steps 1 through 5 of the previous procedure to display the Database
and Log area.
2. In the Database area on the Select Action list, select Create Space Trigger.
3. Specify values for the field for the automatic database expansion trigger. Use
the online help to obtain details about the operation of each field and setting.
The key fields to set for your environment are the Begin expansion at this
percentage of capacity, Expand the database by this amount, and Maximum
size fields.
4. Click OK to create the trigger.
Results
You can also create a database space trigger by using the define spacetrigger db
command. For example, the following command creates a trigger that increases the
size of the database by 25% when it reaches 85% of its capacity, with no limit on
maximum size:
define spacetrigger db fullpct=85 spaceexpansion=25 maximumsize=0
To manually add space to the log volume, complete the following steps.
Procedure
1. Repeat steps 1 on page 13-55 through 5 on page 13-55 of the previous
procedure to get to the Database and Log area.
2. In the Log area on the Select Action list, select Add Volume.
3. In the Volume name field, type the absolute path of the new volume that you
want to create.
4. In the New volume size field, type an appropriate size for the new volume. If
you are not sure, use the value 500.
5. To use the new volume immediately, select When adding the new volume,
expand the recovery log capacity by and in the field, enter a value that is
smaller than the value specified for the New volume size field.
6. Click OK to create the volume.
To automatically add space with a space trigger, complete the following steps.
Procedure
1. Repeat steps 1 on page 13-55 through 5 on page 13-55 of the previous
procedure to display the Database and Log area.
2. In the Log area on the Select Action list, select Create Space Trigger.
3. Specify values for the field for the automatic log expansion trigger. Use the
online help to obtain details about the operation of each field and setting. The
key fields to set for your environment are the Begin expansion at this
percentage of capacity, Expand the log by this amount, and Maximum size
fields.
4. Click OK to create the trigger.
Results
You can also create a log space trigger by using the define spacetrigger log
command. For example, the following command creates a trigger that increases the
size of the recovery log by 25% when it reaches 85% of its capacity with no limit
on maximum size:
define spacetrigger log fullpct=85 spaceexpansion=25 maximumsize=0
For example, the following sample command backs up the Netezza database by
using Tivoli Storage Manager:
nzbackup -db myDb -connector tivoli -connectorArgs
"TSM_PASSWD=password"
For example, the following sample command restores the Netezza database by
using Tivoli Storage Manager:
nzrestore -db myDb -connector tivoli -connectorArgs
"TSM_PASSWD=password"
For example, the following sample script uses the nzhostbackup command to create
a host backup in the specified /tmp archive and then sends the backup to the Tivoli
Storage Manager server:
#!/bin/bash
#
# nzhostbackup_tsm - back up the host catalog and send it to TSM server
archive="/tmp/nzhostbackup.tar.gz"
(
nzhostbackup "${archive}"
echo
echo "Sending host backup archive ’${archive}’ to TSM server ..."
dsmc archive "${archive}"
)
exit 0
Similarly, you can create a script to retrieve and reload a host backup from the
Tivoli Storage Manager server:
#!/bin/bash
#
# nzrestore_tsm - retrieve the host backup created by nzhostbackup_tsm and restore it
archive="/tmp/nzhostbackup.tar.gz"
(
dsmc retrieve "${archive}"
echo
echo "Archive '${archive}' retrieved, restoring it..."
nzhostrestore "${archive}"
)
exit 0
You can use the IBM Spectrum Protect (formerly Tivoli Storage Manager)
commands and interfaces to create a client node schedule, which is a record that
defines a particular client operation such as a backup or a restore. By using client
schedules, you can automate the backups of data from your Netezza host without
any user operator intervention. You can also automate the restore of data from one
Netezza system to another if you load data to a backup Netezza system regularly.
To use the client scheduler to automate tasks for the Netezza host client, you must
install the BA client package on the Netezza host as described in “Configuring the
Netezza host” on page 13-43.
Tivoli Storage Manager offers two ways to manage client scheduling: the client
acceptor daemon-managed services, and a Tivoli Storage Manager traditional
scheduler. You can use either method to manage client schedules. For details about
configuring and managing the Tivoli Storage Manager client scheduler, see the IBM
Tivoli Storage Manager for UNIX and Linux Backup-Archive Clients: Installation and
User's Guide, which is available from the IBM Support Portal at
http://www.ibm.com/support.
If you create more than one scheduled operation, the Tivoli Storage Manager
scheduler does not support overlapping schedules for operations; that is, one
operation must start and complete before a new operation is allowed to start. If
you create operations with overlapping schedules, the second operation is likely
skipped (does not start) because the first operation is still running. Make sure that
you allow enough time for the first operation to complete before a new operation
is scheduled to run.
Results
Because the script runs as root on the Netezza host, the Netezza user must be set
inside the script by using the NZ_USER variable or specified with the -u user
argument. The user password must be cached by using the nzpassword utility, set
inside the script by using NZ_PASSWORD, or specified by using the -pw password
argument.
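A minimal sketch of such a script header follows; the database name, user, and
password are placeholders, and in practice you would cache the password with the
nzpassword utility rather than hard-coding it:
#!/bin/bash
# Hypothetical scheduled-backup wrapper: supply Netezza credentials to nzbackup
export NZ_USER=admin
export NZ_PASSWORD='password'   # or omit and rely on a cached nzpassword entry
/nz/kit/bin/nzbackup -db mydb -connector tivoli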
Troubleshooting
The following topics describe some common problems and workarounds.
Client/server connectivity
You can check the network connections and configuration settings to ensure that
the IBM Netezza host (the client) can connect to the Tivoli Storage Manager server.
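The command that the next paragraph refers to is not reproduced in this excerpt.
Based on the password-caching test earlier in this chapter, it is most likely the
following, run as root on the Netezza host:
dsmc query session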
The command prompts for the client user password, and after a successful
authentication, it shows the session details.
During a TSM restore, the following error can occur: Connector exited with
error: 'ANS1245E (RC122) The file has an unknown format'. This error indicates
that the backup was created with a later TSM client (a newer TSM API) and that
an attempt was made to restore it with an older TSM client; the older TSM API
cannot process backups that were created with the later TSM API. If your restore
fails with this error, update the TSM client to a newer version and rerun the
restore.
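The archive test command that the next paragraph describes is not reproduced in
this excerpt. A representative test, using a hypothetical file path, might be:
dsmc archive "/tmp/nz_tsm_test.txt"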
The file value specifies the path name of a file on the Netezza system. As a result
of the command, the file is saved in the configured storage pool for the Netezza
client. For the test, you can rename the test file to ensure that the subsequent
retrieval test works.
To restore the single file, use the dsmc retrieve file command, where file is the
path of the archived file.
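A representative invocation, using the same hypothetical test file as in the archive
example above:
dsmc retrieve "/tmp/nz_tsm_test.txt"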
As a result of the command, the file is retrieved from the storage pool archive and
saved on the Netezza system. For complete descriptions of the Tivoli Storage
Manager commands, arguments, and operations, see the Tivoli Storage Manager
documentation.
Session rejected
An error such as Session rejected: Unknown or incorrect ID entered is probably
a result of one of the following problems:
v The IBM Netezza host is not correctly registered on the Tivoli Storage Manager
server.
v The dsm.sys file on the Netezza host is not correct.
Confirm the information in both configurations and try the operation again.
The Netezza solution has been tested with EMC NetWorker versions 7.6 and 8.2.1.
You install the 32-bit or 64-bit NetWorker Client for Linux software on the Netezza
host. The Netezza components communicate with the NetWorker client software to
perform backup and restore operations.
The primary interface for administering the NetWorker server is the NetWorker
Management Console (NMC), a browser-based GUI.
To prepare an IBM Netezza system for integration, complete the following steps.
Procedure
1. Log in to your Netezza system as the nz user.
2. Obtain the following name information about your Netezza system:
v If your system is an HA system, ask your network administrator for your
floating IP address.
v If your system is a standard (non-HA) system, ask for the external DNS
name for the Netezza host.
NetWorker installation
Complete instructions for installing the NetWorker Connector client on the IBM
Netezza host are included in the EMC NetWorker Release Installation Guide. The
section “Linux Installation” provides details about installing the NetWorker client
on the Netezza host, which runs a Red Hat operating system. If your Netezza
system is an HA system, install the software on both hosts.
Before you install the NetWorker client, ensure that the NetWorker server
components are installed and configured.
NetWorker configuration
The following topics describe the basic steps that are involved in configuring
NetWorker server and client software for IBM Netezza hosts. In addition to these
steps, ensure that appropriate storage devices and media pools are configured.
To add the IBM Netezza host NetWorker client to the NetWorker server, complete
the following steps.
Procedure
1. Open a browser and log in to the NMC.
2. Click the Enterprise icon.
3. Choose the applicable server from the list of servers in the left pane.
4. Start the NetWorker Managed Application from the right pane.
5. Click the Configuration icon from the new window.
6. Right click Clients from the left pane and select New from the menu.
7. In the Create Client window, type the name of the Netezza host (such as
hostname.company.com) in the Name text box.
8. Select an appropriate browse and retention policy for the client.
9. Confirm that the Scheduled backup check box is checked. You will provide
further information about scheduled backups later in the configuration.
10. Check the groups to which you are adding the client. You will be creating
more groups later in the configuration.
11. From the Globals (1 of 2) tab, set an appropriate value for the Parallelism field.
This parameter controls how many streams the NetWorker client can
simultaneously send in one or more backup operations. For help about
selecting values for this setting, see “Parallelism settings.”
12. Under the Globals (2 of 2) tab, add an entry of the form user@client in the
Remote access list for any other client that is allowed to restore backups that
are created by this client.
For example, to allow a backup that is created on Netezza host1
(Netezza-HA-1.netezza.com) to be restored on Netezza host2
(Netezza-HA-2.netezza.com), ensure that the entry nz@Netezza-HA-
2.netezza.com is present in the Remote access list of Netezza host1
(Netezza-HA-1.netezza.com).
13. Click OK to create the Netezza host NetWorker client.
14. If you have a Netezza HA system, also define Netezza host2
(Netezza-HA-2.netezza.com) as a client, and also allow the backups to be
restored on Netezza host1 (Netezza-HA-1.netezza.com). Return to step 6 and repeat
the instructions to add host2 as a client and ensure that the entry
nz@Netezza-HA-1.netezza.com is present in the Remote access list of Netezza
host2 (Netezza-HA-2.netezza.com).
Additionally, if you have more than one Netezza system, you might want to
add your other Netezza systems as clients.
Parallelism settings:
Performance can be tuned in the NetWorker environment, depending on the
client/server configurations and usage. Parallelism settings on the client and server
can be set to optimize backup performance (they do not affect restore/recovery
performance).
Whether you use the command line or a template file for scheduled backups,
NetWorker requires NSR_SERVER as a mandatory argument. Specify this argument
either as a part of -connectorArgs in the nzbackup command or as an environment
variable. In instances where both are specified, the command-line argument takes
precedence over the environment variable for NSR_SERVER. When you use the
NSR_SERVER environment variable, set it to the name of the NetWorker
server.
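For example (the database and server names below are placeholders; the
command-line form takes precedence over any NSR_SERVER environment variable):
nzbackup -db mydb -connector networker -connectorArgs "NSR_SERVER=nwserver.company.com"
Alternatively, export the environment variable and omit the argument:
export NSR_SERVER=nwserver.company.com
nzbackup -db mydb -connector networker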
There are also optional arguments which the NetWorker Connector supports:
v NSR_DATA_VOLUME_POOL
v NSR_DEBUG_LEVEL
v NSR_DEBUG_FILE
Scheduled backups
This section provides the steps necessary to create and configure backup groups
that are needed to schedule backups. The NetWorker server runs the nzbackup
command automatically after it creates the following objects:
v At least one backup group
v At least one backup command file
v A schedule
A separate command file and associated backup group are required for each
scheduled backup operation. The data from the backup operations that are run by
using one specific command file form a backup group. For example, if you have
two databases, DBX and DBY, and you want to schedule weekly full backups plus
nightly differential backups for each, you must create four command files, one for
each of four backup groups.
You must add a backup group, specifically associated with each nzbackup
operation, to the list of groups in the NMC. To add a backup group to a given
server, complete the following steps.
Procedure
1. Open a browser and log into the NMC.
2. Click the Enterprise icon.
3. Choose the applicable server from the list of servers in the left pane.
4. Start the NetWorker Managed Application from the right pane.
5. On the Configuration page, right click Groups from the left pane and select
New from the menu.
6. Type a name for the new group (such as nz_db1_daily) in the Name text box.
You can also enter text in the Comment text box.
7. To enable automatic scheduled backups for the group, supply the values for
Start time and Autostart.
8. Click OK to create the group.
Command file:
For each nzbackup operation, you must create a specific command file that contains
the backup command instructions. Logged in as the root user, create the command
files under the directory /nsr/res, and name each file [backup_group].res by
using any text editor. Include content like that in the following example. Content
varies depending on backup operation instructions:
type: savepnpc;
precmd: "/nz/kit/bin/nzbackup -u <userid> -pw <password> -db <name_of_database_to_backup> -connector networker -connectorArgs NSR_SERVER=server_name.company.com -v";
pstcmd: "echo bye", "/bin/sleep 5";
timeout: "12:00:00";
abort precmd with group: No;
To enable scheduled IBM Netezza backup operations for a Netezza host, complete
the following steps.
Procedure
1. Open a browser and log in to the NMC.
2. Click the Enterprise icon.
3. Choose the applicable server from the list of servers in the left pane.
4. Start the NetWorker Managed Application from the right pane.
5. From the Configuration page, select Clients in the left pane, which populates
the right pane with a list of clients.
6. Right-click the applicable client and select Properties from the menu.
7. Ensure that the Scheduled backup check box is checked.
8. In the Group section, ensure that only the group for this backup operation is
checked (such as nz_db1_daily).
9. Select the schedule from the Schedule list.
10. On the Apps and Modules tab, type savepnpc in the Backup command text
box.
11. Click OK to create the scheduled backup.
Redirect an nzrestore:
To redirect restore operations from one IBM Netezza host to another (that is to
restore a backup set that is created by one Netezza host to a different Netezza
host), you must configure Remote access as described in step 12 on page 13-63 of
“Adding the Netezza host NetWorker client” on page 13-63. By default, NetWorker
server does not allow a client access to objects created by other clients.
To restore a backup set onto host2 that was created on host1, log in to host2 and
run the following command:
/nz/kit/bin/nzrestore -db database -npshost host1 -connector networker
The database value is the name of the database that was backed up from the
Netezza host host1.
For example, the following sample script uses the nzhostbackup command to create
a host backup in the specified /tmp archive and then sends the backup to the
NetWorker server:
#!/bin/bash
#
# nzhostbackup_nw - back up the host catalog and send it to NetWorker server
archive="/tmp/nzhostbackup.tar.gz"
# Main script execution starts here
(
nzhostbackup "${archive}"
echo
echo "Sending host backup archive '${archive}' to NetWorker server ..."
# Assumption: the NetWorker save command transfers the archive file;
# adjust the -s server value for your environment.
save -s "$NSR_SERVER" "${archive}"
)
exit 0
You can also create a script to retrieve and reload a host backup from the
NetWorker server:
#!/bin/bash
#
# nzhostrestore_nw - retrieve the host backup from the NetWorker server and restore it
# Main script execution starts here
(
archive="/tmp/nzhostbackup.tar.gz"
echo "Restoring the specified backup set '${archive}' from the NetWorker server ..."
recover -a -s $NSR_SERVER "${archive}"
echo "Performing Host restore"
nzhostrestore "${archive}"
)
exit 0
NetWorker troubleshooting
This section contains troubleshooting tips to solve common problems.
For other help, see the troubleshooting section of the NetWorker Administration
Guide and the NetWorker Error Message Guide.
Basic connectivity
For problems with basic connectivity, first check that the server and client are
correctly set up and configured. Also, confirm that the clocks on both the server
and client are synchronized to within a few seconds.
Use the save and recover NetWorker commands to back up and restore a normal
file. If either command fails, the basic configuration is incorrect.
If you get the following error, verify that the correct host name or IP value is
specified in NSR_SERVER and that the NetWorker service is running on the
specified host.
nwbsa is retryable error: received a retryable network error (Severity 4
Number 12): Remote system error
or
nwbsa set option: an entry in the environment structure is invalid
(NSR_SERVER=[server]) during connector initialization
If you get the following error, the client might not be added to the server or might
not be correctly configured on the server.
nwbsa is retryable error: received a network error (Severity 5 Number 13):
client '[client]' is not properly configured on the NetWorker Server
The system saves history data in a history database. You can create any number of
history databases, but only one history database can be written to at a time.
Note: All dates and times stored in a history database use the GMT timezone.
What type of data is to be collected, which user account is to be used for data
collection, how often the collected data is to be loaded to the database, and other
criteria are specified by a history configuration. Only one history configuration is
active at a time.
Related concepts:
“Query status and history” on page 12-30
You can use the system views, _v_qrystat and _v_qryhist, to view the status of
queries that are running and the recent query history.
An audit database is more secure than a query database, but this improved
security comes at a cost:
v With an audit database, queries on the history tables are subjected to row-level
security checks. This decreases performance compared to a query database.
v Because an audit database uses row-secure tables, you must configure
multi-level security. A query database does not use row-secure tables and so
does not require multi-level security.
v If an audit database is used and the history data staging area exceeds its
STORAGELIMIT value, the system stops, and the administrator must free up space.
History database version   Available since release   Changes
1 4.6 None. This is the original version.
2 7.0.3 Several of the history tables and views were updated to include
schema information and a timezone offset. For history tables and
views that use the GMT timezone, the tzoffset field was added
to record the timezone offset, in minutes, for the source system
relative to GMT. For example, Eastern Daylight Time (EDT) is
four hours earlier than GMT, so the timezone offset is +240
minutes.
3 7.1 Several of the history tables and views were updated to include
fields to record client information:
User ID
The user ID under which the client is running.
Application name
The name of the client.
Workstation name
The host name of the workstation on which the client
runs.
Accounting string
The value of the accounting string from the client
information that is specified for the session.
If you upgrade to a new release, you can continue to use an earlier version of the
history database, or you can create a new version of the history database to take
advantage of the new fields and records.
History data is collected continually in a staging area. It is loaded from the staging
area into the history database at intervals determined by settings of the active
history configuration.
History-data staging
After you enable history-data collection, the Netezza system starts the
history-data collection process (alcapp). This process captures history data
and saves it in a staging area, which is in the $NZ_DATA/hist/staging directory.
History-data files
Each history-data directory (that is, the staging, loading, or error directory)
contains zero or more batch directories. Each batch directory typically contains a
set (or batch) comprising the following files:
v One file with the name CONFIG-INFO, which is a text file that contains the name
of the history configuration that was active when the history data was collected.
v Several files with names of the form alc_id_$TIMESEQUENCE, where id is a
two-letter code that indicates the type of history data in the file:
co Column access data.
fa Failed authentication data.
le Log entry data.
pe Plan epilog data.
pp Plan prolog data.
qe Query epilog data.
This information will be required later when you create history databases and
history configurations.
For example, the following command creates a history database with the name
histdb:
[nz@nzhost ~]$ nzhistcreatedb -d histdb -t query -v 1 -u jones
-o smith -p password123
This operation may take a few minutes. Please wait...
Creating tables .................done
Creating views .......done
Granting privileges ....done
History database histdb created successfully !
Related reference:
“The nzhistcreatedb command” on page A-21
Use this command to create a history database including all the tables, views, and
other objects needed to collect history data.
You can define several history configurations, each of which collects a different set
of history data. For example, you might have a different configuration to collect
each of the following types of information:
v All possible history data. You might record this level of information when you
introduce a new application or a new group of users, or when troubleshooting
service issues.
v Basic history information that you use during routine operational periods.
v Detailed information about a specific area, such as table access. You can use this
information to identify tables that might be unused and are candidates for
cleanup.
The configuration name, user name, and database name are identifiers and can be
enclosed in double quotation marks. For example: "sample configuration",
"sample user", and "sample qhist db" are all valid names.
For each history database, create at least one history configuration that specifies
the parameter HISTTYPE NONE. Setting this configuration to be the active
configuration disables the collection of history data and automatically sets the
following default values:
v CONFIG_LEVEL to HIST_LEVEL_NONE
v CONFIG_TARGETTYPE to HIST_TARGET_LOCAL
v CONFIG_COLLECTFILTER to COLLECT_ALL
For example, the following command creates a history configuration named
hist_disabled that can be used to disable history collection:
SYSTEM.ADMIN(ADMIN)=> CREATE HISTORY CONFIGURATION hist_disabled HISTTYPE
NONE;
The following history configuration settings determine when the loading process
loads history data from the staging area into the database:
LOADINTERVAL
A loading timer that can range from 1 - 60 minutes. Specify 0 to disable the
timer. When the load interval timer expires, the system checks for captured
data in the staging area. Based on the values of the staging size threshold
values, LOADMINTHRESHOLD and LOADMAXTHRESHOLD, and
whether the loader is idle, the data in the staging area might or might not
be transferred to the loader.
LOADMINTHRESHOLD
The minimum amount of history data, in megabytes, to collect before it
transfers the batch to the loading area. Specify 0 to disable the minimum
threshold check.
LOADMAXTHRESHOLD
The maximum amount of history data to collect in the staging area before
it is automatically transferred to the loading area. Specify 0 to disable the
maximum threshold check.
These settings, called loader settings, can have zero or non-zero values, but at least
one setting must be non-zero. The following table describes the valid value
combinations:
Choose loader settings that balance the need for current history data with the need
to avoid unduly affecting the system. History data that is still in the staging area is
stored in external text files. If necessary, users can review those files to obtain
information about recent activity before that information is loaded into the history
database.
Depending on the loader settings, how much history data is collected, and the
overall utilization of the system, the alcloader process might become busy loading
history data. If there are several batch directories in the loading area, this might
indicate queued and waiting load requests. You can experiment with different
loader settings to tune the loading process for optimal operation.
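The following is a minimal sketch of how these loader settings might appear in a
history configuration; the configuration name, database, user, password, and values
are hypothetical, and you should confirm the exact clause names and required
parameters against the CREATE HISTORY CONFIGURATION syntax in the IBM
Netezza Database User's Guide:
SYSTEM.ADMIN(ADMIN)=> CREATE HISTORY CONFIGURATION basic_hist HISTTYPE QUERY
DATABASE histdb USER histloader PASSWORD 'password' COLLECT QUERY
LOADINTERVAL 5 LOADMINTHRESHOLD 4 LOADMAXTHRESHOLD 20;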
Users who run reports against the history data require List and Select privileges
for the history database. Users who create their own tables and views also require
Create Table or Create View privileges.
If you have several users who require access, consider creating a user group to
manage the necessary privileges.
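As a hedged sketch of this approach (the group name and the database name
histdb are placeholders, and the view is one of the history views described later
in this chapter):
nzsql -c "create group hist_readers"
nzsql -c "grant list on histdb to hist_readers"
nzsql -d histdb -c 'grant select on "$v_hist_table_access_stats" to hist_readers'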
CAUTION:
If your NPS system uses Kerberos authentication for database user accounts,
note that you could encounter problems with loading query or audit history. The
load user account must be active and authenticated with a valid Kerberos ticket
for the loads to run. Kerberos tickets typically expire daily, and the history loads
will fail if the load user's ticket is expired. As a possible workaround, you could
create a scheduled job (such as a cron job) to automatically renew the ticket for
the load user each day to keep an active ticket in place on the system.
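A minimal sketch of such a job follows; it assumes the load user has a keytab
file, and the file path and principal name are hypothetical:
# Hypothetical crontab entry for the load user: renew the Kerberos ticket daily at 01:00
0 1 * * * /usr/bin/kinit -k -t /home/loaduser/loaduser.keytab loaduser@EXAMPLE.COM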
Related concepts:
Chapter 11, “Security and access control,” on page 11-1
This section describes how to manage IBM Netezza database user accounts, and
how to apply administrative and object permissions that allow users access to
databases and capabilities. This section also describes user session controls such as
row limits and priority that help to control database user impacts on system
performance.
Procedure
1. Log in to the Netezza system as the nz user.
2. If the new owner does not already have a user account, create one. For
example:
nzsql -c "create user sam with password ’sk3nk’"
3. Grant the List permission to the new owner for the history database. For
example:
nzsql -c "grant list on histdb1 to sam"
4. If you are changing the owner of the database referred to by the active history
configuration:
a. Switch to a different history configuration that disables history-data
collection or that specifies a different history database. For example:
nzsql -c "Set history configuration histdb1_off"
b. Activate the newly set history configuration by stopping and restarting the
system, that is, by issuing the nzstop and nzstart commands.
5. Change the owner of the history database. For example:
nzsql c "alter database histdb1 owner to sam"
6. In each of the non-active history configurations for the database, change the
database owner and password. For example:
nzsql -c "alter history configuration histdb1_plan user sam password ’sk3nk’"
nzsql -c "alter history configuration histdb1_col user sam password ’sk3nk’"
7. To activate a changed history configuration:
a. Set the changed history configuration to be the active configuration. For
example:
nzsql -c "set history configuration histdb1_plan"
b. Activate the changed history configuration by stopping and restarting the
system, that is, by issuing the nzstop and nzstart commands.
Restriction: You cannot change the settings for the active configuration. For
example, if the active configuration is histdb1_plan:
1. Switch to a different history configuration. For example:
nzsql -c "set history configuration histdb1_col"
2. Activate the new history configuration (histdb1_col) by stopping and restarting
the system, that is, by issuing the nzstop and nzstart commands.
3. Change the settings for the formerly active history configuration (histdb1_plan)
using the ALTER HISTORY CONFIGURATION command.
4. Set the changed history configuration to be the active configuration again. For
example:
nzsql -c "set history configuration histdb1_plan"
If you want to drop the active configuration, you must first set to a new
configuration and restart the IBM Netezza software, then you can drop the
non-active configuration. As a best practice, you should not drop the configuration
until the loader has finished loading any captured data for that configuration.
To verify whether there are any batches of history data for the configuration that
you want to drop, complete the following steps.
Procedure
1. Open a shell window to the Netezza system and log in as the admin user.
2. Change to the /nz/data/hist directory.
3. Use a command such as grep to search for CONFIG-INFO files that contain the
name of the configuration that you want to drop. For example:
grep -R -i basic .
Results
Any CONFIG-INFO files that the grep command returns indicate that there are
batches in the loading and staging areas that use the specified configuration (in
this example, BASIC_HIST). If you drop that configuration before the
batch files are loaded, the loader classifies them as errors when it attempts to
process them later. If you want to ensure that any captured data for the
configuration is loaded, do not drop the configuration until after the command in
step 3 on page 14-12 returns no output messages for the configuration that you
want to drop.
For details about the command, see the DROP HISTORY CONFIGURATION
command syntax in the IBM Netezza Database User’s Guide.
To access the Query History Configuration dialog, select Tools > Query History
Configuration in the menu bar. The Configuration Name list lists all history
configurations:
v To display the settings for a configuration, select the configuration from the list.
v To change which configuration is the current (active) configuration, select a
configuration and click Set as Current. This change will not take effect until you
restart the Netezza system.
v To create a new history configuration, enter a new configuration name and
supply the information for the required fields. These fields correspond to the
parameters described in the CREATE HISTORY CONFIGURATION command.
v To edit a configuration, select it and modify its settings as needed.
v To edit the current configuration, you must first select a different configuration
and set it to be the current configuration. After editing the former current
configuration, set it to be the current configuration again. Changes made in this
way will not take effect until you restart the Netezza system.
Restriction: Do not change, drop, or modify these views or tables, because doing
so can cause history-data collection to stop working.
The audit history views use row-level security to restrict access to the audit
information. Each has a name of the form $v_sig_hist_*. Each has the same
columns as its corresponding $hist_* table, but also has an additional security label
(sec_label) column that contains the security descriptor string.
Remember: The history user table names use delimited (quoted) identifiers. When
you query these tables, you must enclose the table name in double quotation
marks. For example:
MYDB.SCHEMA(USER)=> select * from "$hist_version";
$v_hist_column_access_stats
The $v_hist_column_access_stats view lists the names of all columns that are
captured during table access and provides some cumulative statistics.
Table 14-2. $v_hist_column_access_stats
Name Description
dbname The name of the database to which the session is connected
schemaname The schema name as specified in catalog.schema.table
tablename The name of the table
columname The name of the column
$v_hist_incomplete_queries
The $v_hist_incomplete_queries view lists the queries that were not captured
completely. The problem might be that there was a system reset at the time of
logging or because some epilog/prolog is not loaded into the database yet.
Table 14-3. $v_hist_incomplete_queries
Name Description
npsid A unique ID for the IBM Netezza system (This value is
generated as a sequence on the target database where this view
is defined.)
npsinstanceid The instance ID of the nzstart command for the source Netezza
system
opid Operation ID, which is used as a foreign key from query epilog,
overflow and plan, table, column access tables.
logentryid This ID and the NPS ID (npsid) and instance ID (npsinstanceid)
form a foreign key into the hist_log_entry_n table.
sessionid The session ID (which is NULL for a failed authentication).
dbname The name of the database to which the session is connected.
queryid The unique checksum of the query.
query The first 8 KB of the query text.
submittime The time the query was submitted to Postgres.
client_user_id The user ID of the application that submitted the query. This
field is available only in database version 3 or later.
client_application_name The name of the application that submitted the query associated
with the plan. This value is specified for the session, and is
usually set by an application. This field is available only in
database version 3 or later.
client_workstation_name The host name of the workstation on which the application that
submitted the query associated with the plan runs. This value is
specified for the session, and is usually set by an application.
This field is available only in database version 3 or later.
client_accounting_string The value of the accounting string. This value is specified for
the session, and is usually set by an application. This field is
available only in database version 3 or later.
status The query completion status, as an integer.
verbose_status The query completion status, as a text string.
queuetime The amount of time the query was queued, as an interval.
queued_seconds The amount of time the query was queued, in seconds.
preptime The amount of time the query spent in the "prep" stage, as an interval.
prep_seconds The amount of time the query spent in the "prep" stage, in seconds.
gratime The amount of time that the query spent in GRA, as an interval.
gra_seconds The amount of time that the query spent in GRA, in seconds.
numplans The number of plans generated.
numrestarts The cumulative number of times the plans were restarted.
$v_hist_table_access_stats
The $v_hist_table_access_stats view lists the names of all the tables that are
captured during table access and provides some cumulative statistics.
$hist_column_access_n
The $hist_column_access_n table records the column access history for a query.
This table becomes enabled whenever history type is Column.
Table 14-7. $hist_column_access_n
Name Type Description
npsid integer This value along with the npsInstanceId and opid
form the foreign key into the operation table.
npsinstanceid integer Instance ID of the source IBM Netezza system
opid bigint Operation ID. This ID is used as a foreign key
from query epilog, overflow and plan, table,
column access tables to query prolog.
logentryid bigint This ID and the NPS ID (npsid) and instance ID
(npsinstanceid) form:
v A foreign key into the hist_log_entry_n table
v A primary key for this table
seqid integer A plain sequence number of the entry. It starts at
zero for every npsid, npsinstanceid, and opid. It
increments monotonically for table access records
for each query.
sessionid bigint Session ID. This ID with NPS ID (npsid) and
instance ID (npsinstanceid) form the foreign key
from query, plan, table, and column access tables
into session tables.
dbid bigint OID of the database where the table is defined
dbname nvarchar(128) The name of the database where the table is
defined
schemaid bigint The OID of the schema as specified in
catalog.schema.table
schemaname nvarchar(128) The schema name as specified in
catalog.schema.table
$hist_failed_authentication_n
The $hist_failed_authentication_n table captures only the failed authentication
attempts for every operation that is authenticated. A successful authentication
results in a session creation. A failed authentication does not result in a session
creation, but it instead creates a record with a unique operation ID in this table.
Table 14-8. $hist_failed_authentication_n
Name Type Description
npsid integer IBM Netezza ID for the source system whose
data is captured in this table.
npsinstanceid integer Instance ID of the nzstart command for the
source Netezza system.
logentryid bigint A foreign key into the hist_log_entry_n table with
the IBM Netezza ID (npsid) and instance ID
(npsinstanceid).
clientip char(16) IP address of the client that made the connection
attempt.
sessionusername nvarchar(512) The name string for the session user ID
(sessionUserId).
time timestamp The GMT timestamp that indicates when the
operation occurred.
failuretype integer One of the following codes that represent the
authentication failure type:
v 1 = failed authentication because of bad
username, password, or both
v 2 = failed authentication because of
concurrency
v 3 = failed authentication because of user access
time limits
v 4 = user account that is disabled after too
many failed password attempts
failure varchar(512) The text message for the failure type code.
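For example, a query such as the following lists recent failed login attempts
and their reasons. The version-3 table name shown here matches the sample
queries elsewhere in this chapter; use the version suffix that applies to your
history database:
SELECT time, clientip, sessionusername, failure
FROM "$hist_failed_authentication_3"
ORDER BY time DESC;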
$hist_log_entry_n
The $hist_log_entry_n table captures the log entries for the operations performed.
It shows the sequence of operations that are performed on the system. This table is
not populated if history collection is never enabled or if hist_type = NONE.
Table 14-9. $hist_log_entry_2
Name Type Description
npsid integer Netezza ID for the source system whose data is
captured in this table
npsinstanceid integer The instance ID of the nzstart command for the
source Netezza system
logentryid bigint Sequential ID of the operation in the source
Netezza system. This ID and the NPS ID (npsid)
and instance ID (npsinstanceid) form the primary
keys for this table.
sessionid bigint The session ID. This is NULL for a failed
authentication.
op integer An operation code, which can be one of the
following codes:
v OP_SESSION_CREATE = 1
v OP_SESSION_LOGOUT = 2
v OP_FAILED_AUTH = 3
v OP_QUERY_PROLOG = 4
v OP_QUERY_EPILOG = 5
v OP_PLAN_PROLOG = 6
v OP_PLAN_EPILOG = 7
time timestamp The GMT timestamp for this operation.
tzoffset integer The timezone offset in minutes. This field is
available only in database version 2 or later.
$hist_nps_n
The $hist_nps_n table describes each source IBM Netezza system for which history
is captured in the target database. When a Netezza system connects to a history
database for the first time, a record is added to this table.
Table 14-10. $hist_nps_n
Name Type Description
npsid integer A unique ID for the Netezza system, and the
primary key for this table (This value is
generated as a sequence on the target database
where this table is defined.)
uuid char(36) UUID of the Netezza system, which is a unique
ID (generated on the source Netezza system)
serverhost varchar(256) Host name of the source Netezza system
serverip char(16) IP address of the source Netezza system
$hist_plan_epilog_n
The $hist_plan_epilog_n table records the plan history information. This data is
collected at the end of the plan execution. This table becomes enabled whenever
history type is Plan.
Table 14-11. $hist_plan_epilog_n
Name Type Description
npsid integer This value along with the npsInstanceId and opid
form the foreign key into the query table.
npsinstanceid integer Instance ID of the source IBM Netezza system.
opid bigint Operation ID. Used as a foreign key from query
epilog, overflow and plan, table, column access
tables to query prolog.
logentryid bigint This ID is a foreign key into the hist_log_entry_n
table with the npsid and npsinstanceid. This ID
with npsid and npsinstanceid is also a primary
key for this table.
sessionid bigint Session ID. This ID with npsid and npsinstanceid
is the foreign key from query, plan, table, and
column access tables into session tables.
planid integer The plan ID (used to make an equi join in
addition to npsid, npsinstanceid, and opid to
match a plan prolog to a plan epilog record).
endtime timestamp The ending time of the plan execution.
donesnippets integer The number of snippets that are done.
resultrows bigint The number of rows affected by the SQL query.
This field shows the row count for SELECT,
INSERT, UPDATE, DELETE, and CTAS queries
on user tables, and SELECT, INSERT, UPDATE,
and DELETE queries on system tables or bridge
queries. For all other queries, the value is 0.
resultbytes bigint The number of result bytes.
status integer A status for the success or failure of the plan. The
value is 0 for a successful completion, or a
non-zero error code for a failure.
tzoffset integer The timezone offset in minutes. This field is
available only in database version 2 or later.
$hist_plan_prolog_n
The $hist_plan_prolog_n table records the plan history information. This data is
collected at the beginning of the plan execution. This table becomes enabled
whenever history type is Plan.
$hist_query_epilog_n
The $hist_query_epilog_n table contains the final data that is collected at the end
of the query.
$hist_query_overflow_n
The $hist_query_overflow_n table stores the overflow portion of the query text
for queries whose text exceeds the first 8 KB that is stored with the query prolog.
For performance reasons, each row of this table stores approximately 8 KB of the
query string; if the query text overflow cannot fit in one 8 KB row, the table uses
multiple rows that are linked by sequenceid to store the entire query string.
Table 14-14. $hist_query_overflow_n
Name Type Description
npsid integer This value along with the instance ID
(npsInstanceId) and operation ID (opid) form the
foreign key into the operation table.
npsinstanceid integer Instance ID of the source IBM Netezza system.
opid bigint Operation ID. Used as a foreign key from query
epilog, overflow and plan, table, column access
tables to query prolog.
logentryid bigint This ID and the NPS ID (npsid) and instance ID
(npsinstanceid) form:
v A foreign key into the hist_log_entry_n table
v A primary key for this table
sessionid bigint Session ID. This ID and the NPS ID (npsid) and
instance ID (npsinstanceid) form the foreign key
from query, plan, table, and column access tables
into session tables.
sequenceid integer This ID is the sequence ID of each entry. There is
one for each query text fragment. The first
overflow record has sequenceid 0.
next integer This is the pointer to next ID record (the next
8-KB portion of the querytext) in the sequence.
The last record has a next value of -1.
querytext nvarchar(8192) Up to 8 KB of the overflow part of the query
string.
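As an illustration, the following query retrieves the full text of one long
query by selecting its overflow rows in sequence order. The key values and the
version-3 table name are illustrative:
SELECT querytext
FROM "$hist_query_overflow_3"
WHERE npsid = 1 AND npsinstanceid = 2 AND opid = 12345
ORDER BY sequenceid;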
$hist_query_prolog_n
The $hist_query_prolog_n table contains the initial data that is collected at the start
of a query.
A query with a plan, a query without a plan, and a plan without a query all result
in the creation of a record with an operation ID (opid) in the $hist_operation_n
table. The query prolog and epilog, plan prolog and epilog, table access, and
column access for that query or plan all share the same operation ID.
Consequently, this operation ID can be used as a key for joining all query-related
data. The session-related data is retrieved by using the foreign key session ID
(sessionid).
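For example, the following sketch (using the version-3 table names) joins query
prolog records to their session prolog records to show which user submitted
each query:
SELECT qp.opid, qp.submittime, sp.sessionusername
FROM "$hist_query_prolog_3" qp
INNER JOIN "$hist_session_prolog_3" sp
USING (npsid, npsinstanceid, sessionid);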
Table 14-15. $hist_query_prolog_n
Name Type Description
npsid integer NPS ID. This ID and the instance ID
(npsinstanceid) and operation ID (opid) form
the foreign key into the operation table.
$hist_service_n
The $hist_service_n table records the CLI usage from the localhost or remote client.
It logs the command name and the timestamp of the command issue. This
information is collected in the history when COLLECT SERVICE is enabled in the
history configuration.
For more information, see the IBM Netezza Advanced Security Administrator's Guide.
Table 14-16. $hist_service_n
Name Type Description
npsid integer This value along with the npsInstanceId and opid
form the foreign key into the operation table.
npsinstanceid integer Instance ID of the source IBM Netezza system
logentryid bigint This ID and the NPS ID (npsid) and instance ID
(npsinstanceid) form:
v A foreign key into the hist_log_entry_n table
v A primary key for this table
sessionid bigint Session ID. This ID is a foreign key into session_n
and is generated by the source Netezza system.
This ID and the NPS ID (npsid) form the foreign
key into session_n.
servicetype bigint The code for the command, which is one of the
following integer values:
v 1 = nzbackup
v 2 = nzrestore
v 3 = nzevent
v 4 = nzinventory (obsoleted in 5.0)
v 5 = nzreclaim
v 6 = nzsfi (obsoleted in 5.0)
v 7 = nzspu (obsoleted in 5.0)
v 8 = nzstate
v 9 = nzstats
v 10 = nzsystem
service varchar(512) The text string of the servicetype value
$hist_session_epilog_n
The $hist_session_epilog_n table stores details about each session when the session
is terminated. Each session completion creates an entry in this table with a unique
operation ID.
$hist_session_prolog_n
The $hist_session_prolog_n table stores details about each created session. Every
successful authentication or session creation adds an entry to this table with a
unique operation ID.
Table 14-18. $hist_session_prolog_n
Name Type Description
npsid integer Unique ID of the source IBM Netezza system.
npsinstanceid integer Monotonically increasing nzstart instance ID of
the source Netezza system.
logentryid bigint This ID and the NPS ID (npsid) and instance ID
(npsinstanceid) form:
v A foreign key into the hist_log_entry_n table
v A primary key for this table
sessionid integer Netezza session ID. This value is not unique for
more than one nzstart command. This ID with
the NPS ID (npsid) and instance ID
(npsinstanceid) form the foreign key from query,
plan, table, and column access tables.
pid integer Process ID of Postgres on source Netezza system.
connecttime timestamp Connection time on the source Netezza system.
priority integer Session priority on the source Netezza system.
maxpriority integer Maximum priority for this session.
sessionuserid bigint User ID that created this session.
currentuserid bigint Current user ID for this session. This ID can be
different from the sessionUserId.
operatinguserid bigint The operating user ID for whom the ACL and
permission is used for validating permissions.
sessionusername nvarchar(128) The session user name that corresponds to
sessionUserId.
$hist_state_change_n
The $hist_state_change_n table logs the state changes in the system. It logs Online,
Paused, Offline, and Stopped.
$hist_table_access_n
The $hist_table_access_n table records the table access history for a query. This
table becomes enabled whenever history type is Table.
Table 14-20. $hist_table_access_n
Name Type Description
npsid integer This value along with the npsInstanceId and opid
form the foreign key into the operation table.
npsinstanceid integer Instance ID of the source IBM Netezza system
opid bigint Operation ID. Used as a foreign key from query
epilog, overflow and plan, table, column access
tables to query prolog.
logentryid bigint This ID and the NPS ID (npsid) and instance ID
(npsinstanceid) form:
v A foreign key into the hist_log_entry_n table
v A primary key for this table
seqid integer A plain sequence number of the entry. It starts at
zero for every npsid, npsinstanceid, and opid. It
increments monotonically for table access records
for each query.
sessionid bigint Session ID. This ID with NPS ID (npsid) and
instance ID (npsinstanceid) form the foreign key
from query, plan, table, and column access tables
into session tables.
dbid bigint OID of the database where the table is defined
dbname nvarchar(128) The name of the database where the table is
defined
$hist_version
The $hist_version table shows information about the schema version number of the
history database.
Table 14-21. $hist_version
Name Type Description
hversion integer Schema version of the history database (see
“History database versions” on page 14-2).
dbtype char(1) Type of the history database:
v q = a query history database
v a = an audit history database
The following sample query shows how you can use these helper functions.
SELECT
substr (querytext, 1, 50) as QUERY,
format_query_status (status) as status,
tb.tablename,
format_table_access (tb.usage),
co.columnname,
format_column_access (co.usage)
from "$hist_query_prolog_3" qp
inner join
"$hist_query_epilog_3" qe using (npsid, npsinstanceid, opid)
inner join
"$hist_table_access_3" tb using (npsid, npsinstanceid, opid)
inner join
"$hist_column_access_3" co using (npsid, npsinstanceid, opid)
where
exists (select tb.dbname
from "$hist_table_access_3" tb
where tb.npsid = qp.npsid and
tb.npsinstanceid = qp.npsinstanceid and
tb.opid = qp.opid and
tb.tablename in ('nation', 'orders', 'part',
'partsupp', 'supplier', 'lineitem',
'region'))
and tb.tableid = co.tableid;
An IBM Netezza system attempts to run all of its jobs as fast as possible. If only
one job is active on the system, the system devotes all of its resources to
completing that job. If two jobs of equal priority are active, the system gives half of
its available resources to each job. Similarly, if 40 jobs of equal priority are active,
each job receives 1/40th of the available resources. This form of resource allocation
is called a fair-sharing model.
However, when running jobs concurrently you might want the system to prioritize
certain jobs over others. WLM involves classifying jobs and specifying resource
allocation rules so that the system assigns resources based on a predetermined
service policy.
Some Netezza service policies are predefined and cannot be modified. For
example:
v The admin user account has special characteristics that prioritize its work over
the work of other users.
v Certain types of system jobs have a higher priority than user jobs or other,
less-important system jobs.
Related concepts:
“Netezza database users and user groups” on page 11-1
To access the IBM Netezza database, users must have Netezza database user
accounts.
“Session priority” on page 11-38
You can define the default and maximum priority values for a user, a group, or as
the system default. The system determines the value to use when the user connects
to the host and executes SQL commands.
WLM techniques
IBM Netezza offers several techniques for managing resource allocations:
Table 15-1. Workload management feature summary
Technique Description
Scheduler rules Scheduler rules influence the scheduling of plans. Each scheduler rule
specifies a condition or set of conditions. Each time the scheduler
receives a plan, it evaluates all modifying scheduler rules and carries
out the appropriate actions. Each time the scheduler selects a plan for
execution, it evaluates all limiting scheduler rules. The plan is executed
only if doing so would not exceed a limit imposed by a limiting
scheduler rule. Otherwise, the plan waits. This provides you with a way
to classify and manipulate plans in a way that influences the other
WLM techniques (SQB, GRA, and PQE).
Use any combination of these techniques as required by the methodology that you
employ to manage queries.
Scheduler rules
A plan is a unit of work that is created by the optimizer to handle a query. Each
plan is based on the content of a query and on statistics regarding the tables on
which the query acts. A single query usually results in a single plan, but
occasionally additional, auxiliary plans are generated also.
The scheduler places each plan that it receives into a pool of candidates for
execution. When the scheduler detects that there is capacity to execute a plan (for
example, because it was notified that the execution of another plan has completed),
it selects a plan from this pool. Which plan the scheduler selects is determined by
the system's SQB, GRA, and PQE settings, and by the attributes of each plan, for
example:
v Whether it is flagged as being short
v The resource group with which it is associated
v Its priority
A scheduler rule is an object that influences the scheduling of plans. Each scheduler
rule specifies:
v Zero or more conditions
v One action
v Whether it is to apply to admin plans as well as plans submitted on behalf of
other users
Each time the scheduler selects a plan for execution, it evaluates all
limiting scheduler rules. The plan is executed only if doing so would not
exceed a limit imposed by a limiting scheduler rule. Otherwise, the plan
waits.
By deft use of scheduler rules, you can exert a high level of control over plan
execution. A scheduler rule does not apply to admin plans unless you explicitly
specify otherwise when you create the rule.
Scheduler rules are evaluated in alphanumeric order (a-z, A-Z, 0-9) according to
their names. For example, the following rules might both be defined:
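A sketch of two such rules follows; the PRIORITY IS condition shown here is
assumed for illustration (see the CREATE SCHEDULER RULE command for the exact
syntax):
CREATE SCHEDULER RULE r2_decrease_normal AS
IF PRIORITY IS NORMAL THEN DECREASE PRIORITY;
CREATE SCHEDULER RULE r3_low_to_normal AS
IF PRIORITY IS LOW THEN SET PRIORITY NORMAL;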
When a plan with normal priority is processed, the rule with the name
r2_decrease_normal, which is processed first, causes the priority of the plan to be
decreased from NORMAL to LOW. Then, the rule with the name r3_low_to_normal
causes the priority of the plan to be set back to NORMAL.
Tags
A tag is a string that is associated with and used to refer to a particular session or
plan:
Session tag
A session tag applies to a particular session and to all the plans that are
within the scope of that session. It is set by specifying the ADD TAG
parameter for an ALTER SESSION command.
Plan tag
A plan tag applies to a particular plan. It is set by specifying the ADD
TAG parameter for a CREATE SCHEDULER RULE command.
A condition (but no more than one) of a scheduler rule can refer to a tag to
influence plan scheduling. For example:
v To prevent an end-of-month test from sapping performance, you might add a
tag with the name eom_test to the corresponding session, and create a scheduler
rule with a condition that refers to that tag and that automatically decreases the
priority of any plan that originates from that session:
IF TAG IS eom_test THEN DECREASE PRIORITY
v To prevent a session in which extract, transform, and load (ETL) jobs run from
sapping performance, you might add a tag with the name etl to that session,
and create a scheduler rule with a condition that refers to that tag and that
automatically sets the priority of any plan that originates from that session to
LOW:
IF TAG IS etl THEN SET PRIORITY LOW
A scheduler rule can be used to add a tag to each plan that meets the conditions it
specifies. For example, a scheduler rule might add the tag user_is_jill to each
plan for which the associated user is jill:
IF USER IS jill THEN ADD TAG user_is_jill
By creating several scheduler rules each of which adds the same tag to each plan
that meets its conditions, you can specify a series of conditions that behave as if
linked by a logical OR operator. For example, the following three rules, when
evaluated in the order shown, cause the limit to be set to 2 for all plans for which
either the database is reportdb or the user is joe:
IF DATABASE IS reportdb THEN ADD TAG no_more_than_2
IF USER IS joe THEN ADD TAG no_more_than_2
IF TAG IS no_more_than_2 THEN LIMIT 2
Client information
The application that submitted the query that is associated with a plan is called the
plan's client. The following information about the client can be specified for a
session and referenced by a scheduler rule condition:
User ID
The user ID under which the client is running.
Application name
The name of the client.
Workstation name
The host name of the workstation on which the client runs.
Accounting string
The value of the accounting string from the client information that is
specified for the session.
To create a scheduler rule, you must either be the admin user or your user account
must have the Scheduler Rule privilege. The admin user and a user with the
Scheduler Rule privilege can also list, drop, alter, deactivate, or reactivate any rule,
regardless of who created or owns it.
The owner of a scheduler rule is, by default, the user who created it; however,
ownership can be reassigned to a different user. The owner of a scheduler rule can
list, drop, alter, deactivate, or reactivate that rule, regardless of which privileges
that user has been granted.
After you create a scheduler rule, it is active. You can temporarily deactivate an
active scheduler rule, or reactivate a deactivated scheduler rule, by issuing the SET
nzsql command.
To rename a scheduler rule or to change the owner of a scheduler rule, issue the
ALTER SCHEDULER RULE nzsql command.
To delete a scheduler rule, issue the DROP SCHEDULER RULE nzsql command.
When you drop a database or resource group that is defined in a scheduler rule, the
system also drops the rule. If a scheduler rule references a resource group that is
altered to remove its resource settings, the scheduler rule is dropped.
The SQL commands are described in the IBM Netezza Database User’s Guide.
The net system resources are the system resources that are available to process user
(including admin) jobs, that is, the total system resources minus those resources
that are needed to process special, high-priority system jobs.
A resource group is a Netezza group whose resource settings (that is, its resource
minimum, resource maximum, and job maximum) determine what portion of the
net system resources are to be allocated to plans that are associated with that
group. For example, you might create three different resource groups for the plans
of data analysts, users who produce query reports, and all other users, then
arrange for each of these groups to receive a different fraction of the net system
resources.
Each Netezza system has at least one resource group. This group has the name
Public and cannot be dropped. When a user is created, if the user is not explicitly
assigned to a resource group, the user is assigned to the Public group by default.
You can change a user's resource group assignment.
Each plan that is processed by a Netezza system is associated with exactly one
resource group. If no other resource groups are created for a system, all plans are
associated with the Public group. However, you can create additional resource
groups and associate plans with these groups based on any combination of the
following criteria:
Which user submitted the corresponding job
Each user is assigned to exactly one resource group. Each time a user
submits a job, the plans for that job are automatically associated with the
user's resource group.
You can use scheduler rules to override a plan's resource group association
based on the submitting user, by using a rule with a USER IS condition and an
EXECUTE AS RESOURCEGROUP action.
Client information
You can use scheduler rules to associate plans with different resource
groups based on the contents of these fields (see “Client information” on
page 15-7). For example, the following scheduler rule associates all plans
for jobs submitted by the application named Cognos with the resource
group rsg12:
IF CLIENT_APPLICATION_NAME IS Cognos THEN EXECUTE AS RESOURCEGROUP rsg12
Cost estimates
For each plan, the optimizer calculates the expected cost of processing that
plan. You can use scheduler rules to associate plans with different resource
groups based on their calculated cost estimates. For example, the following
scheduler rules associate plans with the resource groups with the names
"short", "medium", and "long" based on the plans' cost estimates:
IF ESTIMATE < 4 THEN EXECUTE AS RESOURCEGROUP short
IF ESTIMATE >= 4 ESTIMATE < 30 THEN EXECUTE AS RESOURCEGROUP medium
IF ESTIMATE >= 30 THEN EXECUTE AS RESOURCEGROUP long
The database that is to be accessed
You can use scheduler rules to associate plans with different resource
groups based on which databases the plans access. If different tenants
access different databases exclusively, you can use this capability to
manage resource allocation based on tenancy. For example, the following
scheduler rules associate plans with different resource groups based on
which database they access:
IF DATABASE IS dbx1 THEN EXECUTE AS RESOURCEGROUP x1
IF DATABASE IS dbx2 THEN EXECUTE AS RESOURCEGROUP x2
The table that is to be accessed
You can use scheduler rules to associate plans with different resource
groups based on which tables the plans access. In this way, a database
administrator can influence resource allocation to applications without
changing the applications themselves. For example, the following scheduler
rule associates a plan with resource group x2 if it accesses table tab1 or
tab2:
IF TABLE IN (tab1,tab2) THEN EXECUTE AS RESOURCEGROUP x2
Custom tags
You can add any number of tags to sessions (see “Tags” on page 15-6). All
the plans that are within the scope of that session receive the same tag.
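For example, a rule such as the following (the tag name etl and the group name
etl_group are illustrative) associates every plan that carries that tag with a
specific resource group:
IF TAG IS etl THEN EXECUTE AS RESOURCEGROUP etl_group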
The resource percentage of a resource group is the percentage of the net system
resources that are made available to that resource group, and is determined by the
resource group's resource minimum and maximum. The system applies the
resource minimum and maximum of each resource group in such a way that the
resource group receives its apportioned share of net system resources.
A resource minimum applies only when the corresponding resource group has a
job pending. When a resource group is idle, its system resources can be used by
other, active resource groups. An active resource group might receive more than its
resource minimum when other resource groups are idle; however, a resource group
cannot receive more than its configured resource maximum.
The system applies the following rules in the order shown to determine how to
assign system resources to the active resource groups:
Table 15-2. Assign resources to active resource groups
Condition: The sum of the RESOURCE MAXIMUM settings for all active resource
groups is <= 100.
Resource allocation rule: The system allocates resources based on the RESOURCE
MAXIMUM settings.
Condition: The sum of the RESOURCE MINIMUM settings for all active resource
groups is <= 100.
Resource allocation rule: The system allocates resources in proportion to the
RESOURCE MINIMUM settings for each resource group, but the allocations are
limited by the RESOURCE MAXIMUM settings. Any excess resources are allocated in
proportion to the difference between the allowed resources and the RESOURCE
MAXIMUM settings.
Condition: The sum of the RESOURCE MINIMUM settings for all active resource
groups is > 100.
Resource allocation rule: The system allocates resources in proportion to the
normalized RESOURCE MINIMUM settings for each resource group.
Related concepts:
Chapter 11, “Security and access control,” on page 11-1
This section describes how to manage IBM Netezza database user accounts, and
how to apply administrative and object permissions that allow users access to
databases and capabilities. This section also describes user session controls such as
row limits and priority that help to control database user impacts on system
performance.
You can create the analysts and rptquery groups and can alter the resource
maximum of the public group either by using the NzAdmin tool or by issuing the
following nzsql commands:
CREATE GROUP analysts WITH RESOURCE MINIMUM 50;
CREATE GROUP rptquery WITH RESOURCE MINIMUM 30 RESOURCE MAXIMUM 60;
ALTER GROUP public WITH RESOURCE MAXIMUM 80;
When all three resource groups are running jobs on the system, the GRA scheduler
works to balance resource utilization as shown in Figure 15-1 on page 15-13:
The system ensures that members of the analysts resource group get at least 50%
of the net system resources when all the resource groups are active. At the same
time, the system ensures that the members of the rptquery and public resource
groups are not starved for resources.
The system frequently adjusts the resource percentages based on the
currently active resource groups and their plans. Because work is often submitted
and finished quickly, at any one time it might appear that a particular resource
group is not receiving resources (because it is inactive) while other resource groups
are monopolizing the system (because they are continually active). However, over
time, and especially during peak times when all resource groups are active, the
actual resource percentage of a resource group usually averages out to its
calculated resource percentage. The measure of whether a resource group is
receiving its resource percentage is called compliance. The system provides several
reports that you can use to monitor compliance.
Related concepts:
“Monitoring resource utilization and compliance” on page 15-17
For example, if all three of the resource groups described in Figure 15-1 on page
15-13 are busy, and if the analysts group has:
v One active plan, that plan receives all the resources allocated to that group (50%)
v Two active plans that have the same priority, they each get half of the resources
allocated to that group (25% each)
v Ten active plans that all have the same priority, each plan gets one-tenth of the
resources allocated to that group (5% each)
If the concurrent plans have different priorities, the system allocates the resources
within the group by using priority weighting factors as described in “Priority
weighting and resource allocation” on page 15-25.
The following figure illustrates how a busy analysts users group can result in its
plans being given fewer overall system resources per plan than a single active plan
in either the rptquery or public groups, even though those groups have lower
overall resource minimums.
Any additional plans associated with that resource group are queued until the
active plans finish.
The JOB MAXIMUM attribute of a resource group also controls the number of
actively running plans that are submitted by that group, but this attribute is
deprecated. The JOB MAXIMUM attribute can have the following values:
v A value of 0 (or OFF) specifies that the group has no maximum for the number
of concurrent plans. The group is restricted by the usual system settings and
controls for concurrent plans.
v A value of 1 - 48 sets the job maximum to the specified integer value.
v A value of -1 (or AUTOMATIC) specifies that the system calculates a job
maximum value that is based on the group's resource minimum multiplied by
the number of GRA scheduler slots (default 48). For example, if a group has a
resource minimum of 20%, the job maximum is (.20 * 48) or approximately 9.
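To set the attribute, you can issue a CREATE GROUP or ALTER GROUP command such
as the following sketch (the group name is illustrative):
ALTER GROUP analysts WITH JOB MAXIMUM 10;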
For example, when a plan for a job submitted by the admin user is active, the
resource allocations described in Figure 15-1 on page 15-13 change as shown in
Figure 15-3 on page 15-16.
The admin user account is a superuser account that is intended for emergency
actions only and not for everyday use. Few users should ever run jobs as the
admin user, and they should do so infrequently and only for urgent operations. As
an alternative to allowing users to use the admin user account, create a resource
group that has some or all of the object and administrative privileges of the admin
user and add the appropriate users to it. The resource minimum and maximum of
this resource group will determine its impact on resource availability.
Related concepts:
“Default Netezza groups and users” on page 11-3
The IBM Netezza system has a default Netezza database user named admin and a
group named public.
Related tasks:
“Creating an administrative user group” on page 11-17
GRA compliance
The GRA scheduler tracks resource usage to ensure that each resource group
receives its minimum allocation of resources when all groups are actively using the
system. The measurement of how well a group receives its configured resource
allocation is called compliance.
The IBM Netezza system measures compliance by examining the work statistics for each
job that is completed on the system. The GRA scheduler tracks the work statistics
for each resource group and divides the amount of resources used by a resource
group over the total amount of resources used by all of the resource groups during
that time. The GRA scheduler uses the resulting actual use percentage to determine
whether a group is in compliance or whether it is overserved or underserved. The
resource percentage is the amount of resources that a group is allocated, based on its
minimum and maximum resource settings and the activity of other resource
groups on the system.
The GRA scheduler uses the compliance values to rank the groups from very
underserved to very overserved. If a group is underserved, the GRA scheduler
chooses the underserved group's work ahead of an overserved group's work.
The GRA scheduler calculates compliance over a horizon value; the horizon is 60
minutes by default. The horizon is a moving time window of the last hour's
activity to show compliance. Netezza moves the window every 1/60th of the
horizon (every minute for GRA, and every 10 seconds for the snippet scheduler).
The following sections describe the resource views and NzAdmin reports. For
details on the Netezza Performance Portal reports, see the IBM Netezza Performance
Portal User's Guide or the online help available from the portal interface.
Related concepts:
“Guaranteed resource allocation example” on page 15-13
You can use the following views to monitor GRA and snippet scheduling data:
_v_sched_gra_ext and _v_sched_sn_ext
These views display information about how busy the system is and how
GRA and snippet resources are being allocated and used by the recent jobs
on the system. After each report interval, the system adds a row for each
active group with its resource compliance totals for that period. If a group
is not active, the system does not create a row for that group. For a system
with few resource groups, the _v_sched_gra_ext view typically contains
records for about a week of activity, and the _v_sched_sn_ext view
typically contains a few hours of data. These views are reset when the
system stops and restarts.
The update intervals for the following views are specified by configuration settings
that you can adjust as described in “Changing configuration settings” on page
7-19:
v _v_sched_gra_ext
v _v_sched_sn_ext
v _v_system_util
v _v_sched_sys
To display the information shown by a view, issue a SQL command of the form:
SELECT * FROM viewname;
Note: The view output might refer to an _ADMIN_ resource group. This is the
default group for the admin user account and cannot be modified.
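For example, to review the recent GRA scheduling data, enter:
SELECT * FROM _v_sched_gra_ext;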
The previous horizon summaries display in descending order. Review this window
to see the actual resource percentages for that hour, and a snapshot of the job
summary status at the conclusion of that hour.
To view the resource allocation performance history, on the toolbar click Tools >
Workload Management > Performance > History.
The following figure shows the Resource Allocation Performance History window.
To view the resource allocation performance history graph, on the toolbar click
Tools > Workload Management > Performance > Graph.
The following figure shows the Resource Allocation Performance graph.
v The lines for each group show the resource usage trends through the day with
the usage percentage on the left vertical axis.
v The blue shaded background shows the number of jobs that are running at each
time interval with the job count on the right side vertical axis.
v You can use the list to select a different day of resource usage to display.
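The following examples show how a user's resource group assignment is made and
changed. The first CREATE USER command is a sketch that is assumed for the
discussion that follows; the names bob and rptusers are illustrative.
v The following CREATE USER command creates the user bob and assigns bob to the
rptusers resource group:
CREATE USER bob WITH PASSWORD 'password' IN RESOURCEGROUP rptusers;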
If no resource group were specified for this CREATE USER command, bob
would be assigned to the public resource group.
v The following CREATE GROUP command creates the analysts user group,
specifies a nonzero resource minimum (which means that the user group is a
resource group), and assigns bob to that group:
CREATE GROUP analysts WITH RESOURCE MINIMUM 50 USER bob;
However, this command does not change bob's resource group assignment;
bob remains assigned to the rptusers resource group.
v The following ALTER GROUP command drops bob from the rptusers user
group:
ALTER GROUP rptusers DROP USER bob;
After you assign a user to a resource group, you cannot change the resource
minimum of that resource group to 0. If you drop a resource group, users who are
assigned to it are reassigned to the public resource group.
To remove a user assignment, assign the user to the public resource group. For
example:
ALTER USER bob IN RESOURCEGROUP public;
The cost of a query is the number of seconds required to run it. The optimizer uses
internal mechanisms such as prep snippets to estimate the cost of each query
before it is run. A query is regarded as being a short query if its cost estimate is less
than or equal to the threshold specified by the host.schedSQBNominalSecs setting.
The default setting is two seconds.
Typical short queries are "pick list" queries, dimensional data lookups, and other
quick data lookups. They are often submitted by a business intelligence application
when populating selection lists, or are entered on a SQL command line by a user
who then waits for the results. Typical long queries are complex business
intelligence queries that can return gigabytes or terabytes of results, or queries that
perform complex joins, comparisons, or user-defined analysis. They typically take
many seconds, minutes, or even hours to run. A long query can be entered on a
command line, but is more commonly issued by a business intelligence application
that creates scheduled reports for deep-dives into databases.
When the Short Query Bias (SQB) function is enabled, the system reserves
scheduling and memory resources for short queries. When SQB is disabled, a user
who runs a short query while the system is busy running long queries might
experience a significant delay.
When SQB is enabled and the optimizer determines that a particular query is
short, it sets the SQB flag of each of the plans that is associated with that query to
true. You can use scheduler rules to override the setting of an SQB flag based on
any combination of the following criteria:
Client information
You can use scheduler rules to set SQB flags to true or false based on the
contents of these fields (see “Client information” on page 15-7). For
example, the following scheduler rule sets the SQB flag to false for all
plans of jobs submitted by the application named Cognos:
IF CLIENT_APPLICATION_NAME IS Cognos THEN SET NOT SHORT
Cost estimates
You can use scheduler rules to set SQB flags to true or false based on the
calculated cost estimate, overriding the threshold that is set by
host.schedSQBNominalSecs. For example, for each plan for
a query submitted by the user sam, the following scheduler rule effectively
changes the short query threshold to 60 seconds:
IF USER IS sam ESTIMATE < 60 THEN SET SHORT
The database that is to be accessed
You can use scheduler rules to set SQB flags to true or false based on
which databases the plans access. For example, the following scheduler
rule effectively changes the short query threshold to 40 seconds for all
plans that access the database dbx1 or dbx3:
IF DATABASE IN (dbx1,dbx3) ESTIMATE < 40 THEN SET SHORT
The table that is to be accessed
You can use scheduler rules to set SQB flags to true or false based on
which tables the plans access. For example, the following scheduler rule
sets the SQB flag to true for all plans that access the table tab1:
IF TABLE IS tab1 THEN SET SHORT
Custom tags
You can add any number of tags to sessions (see “Tags” on page 15-6). All
the plans that are within the scope of that session receive the same tag.
You can also create scheduler rules that add tags directly to all plans that
meet the conditions specified by the rule. You can then use scheduler rules
to set SQB flags to true or false based on these tags. For example, the
following scheduler rule sets the SQB flag of a plan to false based on
whether one or more of the specified tags have been set for the plan:
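A sketch of such a rule follows; the IN form of the tag condition and the tag
names are assumed for illustration:
IF TAG IN (etl,eom_test) THEN SET NOT SHORT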
Figure 15-7 illustrates the queues and settings used for SQB. In this example:
v The GRA scheduler reserves 10 slots for short queries.
v The snippet scheduler reserves 6 slots for short queries.
v The SPU reserves 50 MB for short query execution.
v The Netezza host reserves 64 MB for short query execution.
Legend:
A Netezza host
B GRA scheduler
C Snippet scheduler
D Snippet processing unit (SPU)
Note: There are two additional priorities, but these are used exclusively by internal
processes and are not visible to and cannot be set by users:
System critical
This priority is ranked higher than critical.
System background
This priority is ranked lower than low.
Similar to the way the system allocates resources among resource groups based on
their GRA resource minimums, the system allocates resources among the plans in a
resource group based on their priorities. The fraction of the resource group's
resources that are made available to a plan are weighted based on the priority of
the corresponding job:
v Low priority plans have a weight of 1
v Normal priority plans have a weight of 2
v High priority plans have a weight of 4
v Critical priority plans have a weight of 8
Figure 15-8 on page 15-26 illustrates an example of how job priority affects the
distribution of resources within resource groups. In this example, the system uses
the default weighting setting of 1,2,4,8.
Legend:
C Plan for a critical job.
H Plan for a high priority job.
L Plan for a low priority job.
N Plan for a normal priority job.
Table 15-4. Example of a distribution of resources within resource groups
Analysts (50% of net system resources):
Normal (2): 2÷(8+2) = 20% of group resources; 50% × 20% = 10% of net system resources
Critical (8): 8÷(8+2) = 80% of group resources; 50% × 80% = 40% of net system resources
RptQuery (30% of net system resources):
High (4): 4÷(8+4+4) = 25% of group resources; 30% × 25% = 7.5% of net system resources
High (4): 4÷(8+4+4) = 25% of group resources; 30% × 25% = 7.5% of net system resources
Critical (8): 8÷(8+4+4) = 50% of group resources; 30% × 50% = 15% of net system resources
Public (20% of net system resources):
Low (1): 1÷(1+1) = 50% of group resources; 20% × 50% = 10% of net system resources
Low (1): 1÷(1+1) = 50% of group resources; 20% × 50% = 10% of net system resources
Specifying priorities
When a user opens a database session, that session is assigned a priority. The
session priority determines the priority of all jobs that are submitted during the
session. For example, if a session is assigned the priority HIGH, all jobs submitted
during that session also have the priority HIGH, as will their corresponding plans.
The priority of a particular session (and its jobs and their plans) is determined by
several factors:
v System settings
v Settings for the user who submitted the corresponding job
v Settings for the user groups to which that user belongs, if any
v Settings for the session used to submit the corresponding job
The following settings determine the default and maximum priorities for a session:
v Issue the SET SYSTEM DEFAULT command to set the following parameters:
DEFPRIORITY
The system default priority, which is the priority that is assigned to any
job for which a priority is not set by other means. The default is
NORMAL.
MAXPRIORITY
The system maximum priority, which is the highest level priority that
can be set for any job. The default is CRITICAL.
v For each user, you can issue the CREATE USER or ALTER USER command to
set the following parameters:
DEFPRIORITY
The user's default priority, which is the priority that is assigned to any
job that is submitted by the user.
MAXPRIORITY
The user's maximum priority, which is the highest level priority that the
user can set for any submitted job.
v For each user group, you can issue the CREATE GROUP or ALTER GROUP
command to set the following parameters:
DEFPRIORITY
The group's default priority, which is the priority that is assigned to any
job that is submitted by a member of the user group.
MAXPRIORITY
The group's maximum priority, which is the highest level priority that a
member of the user group can set for any submitted job.
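For example, the following commands set the system-wide defaults and then
override them for a user and a group. This is a sketch; the user and group names
are illustrative:
SET SYSTEM DEFAULT DEFPRIORITY TO NORMAL;
SET SYSTEM DEFAULT MAXPRIORITY TO CRITICAL;
ALTER USER bob WITH DEFPRIORITY HIGH MAXPRIORITY CRITICAL;
ALTER GROUP analysts WITH DEFPRIORITY NORMAL MAXPRIORITY HIGH;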
The first of the following rules to apply determines which priority is assigned to a
session:
1. Was a default priority other than NONE assigned to the user? If so, that
priority is assigned to the session.
2. Is the user a member of at least one user group for which a default priority
other than NONE was specified? If so, the lowest of those default priorities is
assigned to the session.
3. The system default priority is assigned to the session.
If the determined priority would exceed the maximum priority for the user, the
maximum priority is assigned instead. The first of the following rules to apply
determines the maximum priority for a particular user:
1. Was a maximum priority other than NONE assigned to the user? If so, that
priority or the system maximum priority, whichever is lower, is the user's
maximum.
2. Is the user a member of at least one user group for which a maximum priority
other than NONE was specified? If so, the lowest of those maximum priorities and
of the system maximum priority is the user's maximum.
A user can issue the ALTER SESSION command to change the session priority.
This affects all jobs that are currently running and that are submitted during the
remainder of the session.
You can also use scheduler rules to assign or change plan priorities based on
any combination of the following criteria:
Client information
You can use scheduler rules to assign a priority based on the contents of
these fields (see “Client information” on page 15-7). For example, the
following scheduler rule increases the priority of all plans for jobs
submitted by the application named Cognos:
IF CLIENT_APPLICATION_NAME IS Cognos THEN INCREASE PRIORITY
Cost estimates
For each plan, the optimizer calculates the expected cost of processing that
plan. You can use scheduler rules to assign a priority based on the
calculated cost estimate. For example, the following scheduler rules modify
plans' priorities based on the plans' cost estimates:
IF ESTIMATE < 4 THEN SET PRIORITY NORMAL
IF ESTIMATE >= 4 ESTIMATE < 30 THEN SET PRIORITY HIGH
IF ESTIMATE >= 30 THEN SET PRIORITY LOW
The database that is to be accessed
You can use scheduler rules to assign or change plan priorities based on
which database each plan accesses. For example, the following scheduler
rule decreases the priority of all plans that access database dbx1:
IF DATABASE IS dbx1 THEN DECREASE PRIORITY
The table that is to be accessed
You can use scheduler rules to assign or change plan priorities based on
which table each plan accesses. For example, the following scheduler rule
decreases the priority of all plans that access table tab1 or tab2:
IF TABLE IN (tab1,tab2) THEN DECREASE PRIORITY
Important: Use caution when you are assigning the critical and high priority. If
you assign too many jobs to the high or critical priority, you can bring normal and
low priority work to a standstill.
Related concepts:
“Priority query execution (PQE)” on page 15-24
Within each resource group, use priority query execution (PQE) settings to
prioritize more important jobs over less important jobs. A job's priority is used as a
weighting factor to allocate resources for its corresponding plans.
For example, to change the priority of all jobs of the session with the ID 21664 to
HIGH:
v By issuing the nzsession priority command, enter:
nzsession priority -high -u user -pw password -id 21664
v By issuing the ALTER SESSION command, enter:
MYDB.SCHEMA(USER)=> ALTER SESSION 21664 SET PRIORITY TO HIGH;
v By using the NzAdmin tool, display the Database > Sessions list, right-click
session ID 21664 and select Change Priority > High.
Use the nzsession show command or the NzAdmin tool to show information about
the current sessions and their priorities.
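For example, enter the following command (the user name and password are
illustrative):
nzsession show -u admin -pw password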
Note: The terms group and table are based on Simple Network Management
Protocol (SNMP) concepts and are not associated with IBM Netezza database
groups or tables.
The following table lists the Netezza core groups and tables that you can view by
using the nzstats command.
Table 16-1. Netezza Groups and Tables
Database Table - Provides information about databases. See “Database Table” on page 16-2.
DBMS Group - Provides information about the database server. See “DBMS Group” on page 16-2.
Host CPU Table - Provides information about each host processor. See “Host CPU Table” on page 16-3.
Host Filesystem Table - Provides information about each local host file system. See “Host File System Table” on page 16-3.
Host Interface Table - Provides information about the host's interface. See “Host Interface Table” on page 16-4.
Host Mgmt Channel Table - Provides information about the system's management channel from the host viewpoint. See “Host Mgmt Channel Table” on page 16-4.
Host Network Table - Provides information about the system's main UDP network layer from the host viewpoint. See “Host Network Table” on page 16-5.
Host Table - Provides information about each host. See “Host Table” on page 16-6.
HW Mgmt Channel Table - Provides information about each SPU management channel from the SPU's viewpoint. See “Hardware Management Channel Table” on page 16-7.
Per Table Per Data Slice Table - Provides information about tables on a per-data slice basis. See “Per Table Per Data Slice Table” on page 16-8.
Query Table - Provides information about active queries as obtained from the _v_qrystat view. See “Query Table” on page 16-8.
Database Table
If you are the user admin, you can use the nzstats command to display the
Database Table, which displays information about the databases.
DBMS Group
The DBMS Group displays information about the database server.
Host Table
The Host Table displays information about each host on the system.
Per Table Per Data Slice Table
The Per Table Per Data Slice Table has the following columns.
Table 16-11. Per Table Data Slice Table
Column Description
Table Id The ID corresponding to a table.
DS Id The ID corresponding to a data slice.
Disk Space The amount of disk space that is used for this table in this data slice.
Query Table
If you are the admin user, you can use the nzstats command to display the Query
Table, which displays information about the queries that are currently running on
the IBM Netezza server. Queries that have completed execution and whose result
sets are being returned to a client user are not listed in this table.
You can use the system view _v_qrystat to view the status of queries that are
running. For more information, see Table 12-8 on page 12-30.
This query table uses the _v_qrystat view for compatibility with an earlier release
and will be deprecated in a future release. For more information, see Chapter 14,
“History data collection,” on page 14-1.
Query History Table
The Query History Table uses the _v_qrystat view for compatibility with an earlier
release and will be deprecated in a future release. For more information, see
Chapter 14, “History data collection,” on page 14-1.
SPU Table
The SPU Table displays information about each SPU's processor and memory.
System Group
The System Group displays information about the system as a whole.
Table Table
If you are the user admin, you can use the nzstats command to display the Table
Table, which displays information about database tables.
System statistics
The nzstats command displays operational statistics about system capacity, system
faults, and system performance. They provide you with a high-level overview of
how your system is running and other details so that you can understand
performance characteristics. You can also use the NzAdmin tool to display
statistics.
To display the System Group table, enter: nzstats show -type system
This tool is installed together with the NPS software kit in the
/nz/kit/share/healthcheck folder. The tool is integrated with the failover and
upgrade processes.
The tool provides a daemon, which is a service that runs in the background. The
daemon can function in two different modes that define how it works. Regardless
of the mode, when you run the nzhealthcheck command, the daemon will produce
a health check report. The report generation may take up to several minutes.
The tool is based on a set of policy rules. The policy rules cover the most frequent
issues that were found by the support teams in the field. Based on these rules,
automatic analysis of the system health is performed, and a possible solution for
recovery is suggested.
Regardless of the mode in which the daemon works, you can run the
nzhealthcheck command which:
v Opens the monitoring database.
v Collects the most recent data from the system components.
v Requests the evaluation of rules.
v Generates and prints the Health Check Report.
Tool versioning
This topic lists all versions of the System Health Check tool and the corresponding
supported versions of NPS.
Table 17-1. Health check tool versions and the corresponding supported NPS versions
Health check tool version Supported NPS version
1.0 v 5.0.10 P19
v 6.0 P13
v 6.0.3 P9
v 6.0.5 P10
v 6.0.8 P1
1.1 v 5.0.10 P19
v 6.0 P13
v 6.0.3 P11
v 6.0.5 P12
v 6.0.8 P2
v 7.0 P1
1.2 v 6.0.5 P14
Policy rules
Policy rules are troubleshooting rules that are used for automatic analysis of
system health and for proposing a relevant solution. These rules cover the most
frequent issues that are experienced in the field.
You can view a complete list of rules in a PDF document that is located on your
system in /nz/kit/share/nzhealthcheck/rules-doc.pdf.
Note: Rule evaluation depends on the platform, so not all rules may be evaluated
on your system. The rules-doc.pdf document provides detailed information about
which platforms are supported by particular rules.
All System Health Check rules provide a problem description and advice. Some of
them involve complex logic, for example, identifying the correct disk, SPU, and
enclosure balance. The rules may depend on one another.
Daemon modes
The System Health Check daemon can run in two different modes, depending on
how you want to use it.
Diagnostic
By default, the System Health Check daemon runs in this mode. In this mode,
system data is collected only when you run the nzhealthcheck command, and the
rules are then evaluated against this data. The advantage of the diagnostic mode is
that it produces no background workload because it does not collect any data in
the background.
You can run nzhealthcheck -I at any time to check in which of the modes the
daemon is running:
nzhealthcheck -I
Daemon status: active
Daemon mode: diagnostic
To switch back from the monitoring to the diagnostic mode, run nzhealthcheck -D.
Monitoring
Run the nzhealthcheck -M command to restart the daemon and switch it to the
monitoring mode. In this mode, data is constantly gathered in the background and
the rules are evaluated periodically.
When the daemon runs in the monitoring mode, it only evaluates the automatic
rules. Refer to the rules-doc.pdf document for information about which of the rules
are evaluated automatically and which are evaluated manually.
Note: When you run the nzhealthcheck command, both the automatic and manual
rules are processed, regardless of the mode in which the daemon runs.
The monitoring mode allows for enabling automatic notifications. See “Enabling
automatic notifications” on page 17-5.
Some rules are automatically integrated with the callhome service when the
daemon runs in this mode. See “Integration with the callhome service” on page
17-7.
Procedure
1. Log in as nz user.
2. Run the following command:
nzhealthcheck
Results
The command output presents general system information in the MINI SYSINFO
section and information about the issues found in the Failures section. It may also
contain a Non-failed section that lists the rules that could not be evaluated because
of a lack of data. At the end of the report, you can find the path to the text file that
contains the full version of the health check report. Review this file every time you
generate the report because it helps you perform the following actions:
v In the ISSUES section, identify the failed components and find guidance on how
to resolve them.
v In the WARNINGS section, identify all observed problems with rule evaluation,
for example a lack of data, as well as other problems that are not listed in the
command output.
You can run the command with the -a parameter to include all evaluated rules in
the output, including the ones that passed.
Sample output:
Netezza System Health Check 2.3
************************************************************************
********************** System Health Check Report **********************
************************************************************************
Failures (4):
+- Rule ---+-------- Issue ---------+---- Component ----+- Severity --
2 | BOM001  | Missing or incorrectly | rack1.chass (...) | High
  |         | inserted chassis       | rack1.chass (...) |
  |         | component              |                   |
2 | SHC012b | Not enough disk space  | rack1.host1 (...) | High
  |         | on host                |                   |
2 | SHC060  | Serial-Over-LAN is not | ...output omitted | High
2 | SHC012a | Not enough disk space  | rack1.host1.fs[/] | Medium
  |         | on host                | rack1.host1 (...) |
+- Rule ---+-------- Issue ---------+---- Component ----+- Severity --
Report stored in
/nz/kit.6.0.8.15-P1/log/nzhealthcheck/Netezza_System_Health_Check_Report_20131219_100040.txt
All done
What to do next
Note: If any INT001 issues are included in the report, the report might be
incomplete. For more information, see “Policy rules” on page 17-2
Remember: Only automatic rules are evaluated in the monitoring mode. Refer to
the rules-doc.pdf document for information about which of the rules are
evaluated automatically and which are evaluated manually.
Enabling automatic notifications
The nzevent tool can be configured to send notifications about the system state.
You create and manage event rules, which then trigger the notifications.
Procedure
1. Log in to your system as the nz user.
2. Run nzhealthcheck -I to make sure that the daemon is running in monitoring
mode:
nzhealthcheck -I
Daemon status: active
Daemon mode: monitoring
3. To enable notifications from the System Health Check tool, you must create a
custom event rule related to ShcReport:
nzevent add -eventType custom1 -name nzhealthcheck_event -notifyType email -dst <your@address>
-bodyText 'Nzhealthcheck rule triggered on $HOST $body' -msg 'Nzhealthcheck found problem on $HOST'
-eventArgsExpr '$originated==ShcReport'
For the notifications to be sent, the daemon must run for at least ten minutes.
By this time, a sufficient amount of data is collected and processed.
4. Optional: Edit the /nz/kit/share/nzhealthcheck/nzhealthcheck.cfg file to
configure the notification settings. Set the EVENTING_MODE parameter to one of
the following values:
EVENT
This is the default value. The notification is sent to the specified address
immediately after a problem is detected by the daemon. The email contains
a list of all failed components, excluding the ones that were already
reported in previous emails. It also includes a newly generated event
report. If the system is unable to send the notification immediately after the
problem is detected, it will reattempt to send it within an hour.
TIME
The notification is sent once a day, at the time that is specified in the
EVENTING_REPORT_TIME parameter. The time format must be specified as
hh:mm and the default value is 15:00. The notification contains a list of all
failed components.
Results
The notifications are now enabled and will be sent to the specified email address.
Refer to the “Enable the callhome service” on page 9-8 section for information
about how to set up the callhome service in your system.
When callhome is enabled, three of the System Health Check rules are
automatically integrated with it. When problems related to these rules are detected,
IBM Netezza Support will be notified. The rules are:
v DM012 : Multiple SCSI Log page 0x15 events on disk head
v DM015 : Multiple SCSI Log page 0x15 events on disk
v DM030 : Bad sectors on disk
The callhome functionality requires the System Health Check daemon to work in
the monitoring mode. It is not necessary to enable automatic email notifications in
System Health Check for callhome to work properly.
Procedure
1. Log on as the nz user.
2. Run the command nzhealthcheck sysinfo.
Results
The output of the command is displayed on screen and includes the path to the
full text report.
Sample output:
Netezza System Health Check 2.3
All done
Note: The daemon is integrated with the HA cluster and will start only if the
cluster is active. This means that it will not start in the maintenance mode. The
start and stop commands are called by the nzinit command. The daemon migrates
as the cluster does.
Procedure
v To check the status of the daemon, use the following command:
service nzhealthcheck status
v If the daemon was stopped for some reason, you can start it with the following
command:
service nzhealthcheck start
v If you must stop the daemon manually, run the following command:
service nzhealthcheck stop
Syntax
Inputs
The nzhealthcheck command takes the following input options. The input options
have two forms for the option names.
Table 17-2. The nzhealthcheck input options. The table provides information about the
options that the nzhealthcheck command takes and their explanations.
Option Description
sysinfo Generates the Sysinfo Report.
-a, --detail Shows all rule results.
-d, --daemon Starts the System Health Check daemon.
Description
You use the nzhealthcheck command to generate the System Health Check Report.
You can also generate the Sysinfo Report with additional data about the system
that might be of use when troubleshooting problems. When running the command,
the Netezza system can be in any operational state.
Usage
The following provides some of the command uses and sample syntax:
v To generate the System Health Check Report with all the evaluated rules listed:
nzhealthcheck -a
v To generate the Sysinfo Report:
nzhealthcheck sysinfo
Netezza Support might also ask you to run specific low-level diagnostic commands
to investigate problems. This section provides information about the Netezza user
commands and the Netezza Customer Service commands.
To run these commands from the Netezza system, you must be able to log in as a
valid Linux user on the Netezza system. Most users typically log in as the nz user.
In addition, many commands require that you specify a valid database user
account and password; the database user might require special privileges, as
described in “Command privileges” on page A-3. Throughout this section, some
command examples show the database user and password options on the
command line, and some examples omit them with the assumption that the user
and password were cached, for example, by using the nzpassword command.
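For example, a command of the following form caches the credentials on the client; the user name, password, and host name shown here are placeholder values (see the nzpassword command for the full syntax):
nzpassword add -u admin -pw password -host nzhost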
Table A-1. Command-line summary
nzbackup
Backs up an existing database. For command syntax and more information, see
“The nzbackup command” on page 13-11.
nzcontents
Displays the revision and build number of all the executable files, plus the
checksum of Netezza binaries. For command syntax, see “The nzcontents
command” on page A-9. For more information, see “Software revision levels” on
page 7-1.
nzconvert
Converts character encodings for loading with the nzload command or external
tables. For command syntax, see “The nzconvert command” on page A-10. For
more information, see the IBM Netezza Database User’s Guide.
nzds
Manages and displays information about the data slices on the system. For
command syntax, see “The nzds command” on page A-10.
nzevent
Displays and manages event rules. For command syntax, see “The nzevent
command” on page A-14. For more information, see Chapter 8, “Event rules,” on
page 8-1.
nzhistcleanupdb
Deletes old history information from a history database. For command syntax, see
“The nzhistcleanupdb command” on page A-19. For more information, see
Chapter 14, “History data collection,” on page 14-1.
Command privileges
Some commands require administrative privileges. The database user account may
require one or more of the privileges described in Table 11-1 on page 11-10 for the
command to complete successfully. In addition, operations on objects
such as databases, tables, views, and others may require object privileges as
described in Table 11-2 on page 11-11. As with administrator privileges, specifying
the WITH GRANT option allows a user to grant the privilege to others.
Exit codes
An nz command that completes successfully returns an exit code of 0 (or no exit
code). A non-zero exit code indicates that the command failed
due to an error that affected either the command itself or a subcommand. If an nz
command fails, refer to the messages displayed in the command shell window for
more information about the cause of the failure.
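For example, in a bash shell you can check the exit code of the most recent nz command by examining the $? shell variable; the nzstats invocation here is only an illustration:
nzstats show -type system
echo $?
A value of 0 indicates success; any other value indicates that the command failed.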
For many Netezza CLI commands you can specify a timeout. This is the amount of
time the system waits before it abandons the execution of the command. If you
specify a timeout without a value, the system waits 300 seconds. The maximum
timeout value is 100,000,000 seconds.
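For example, assuming a command that accepts the -timeout option, such as nzstop, an invocation of the following form waits up to 600 seconds instead of the default:
nzstop -timeout 600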
Syntax
Inputs
Description
You use the nzcallhome command to enable and disable the callhome service,
configure and remove callhome-related event rules, and as the primary command
for managing the automated notifications to IBM Support for problem conditions
on the IBM Netezza appliance. You can also generate event conditions for testing
the callhome features and operations, and you can use the command to request
software or firmware upgrades of your appliance.
The following provides some of the command uses and sample syntax:
v To enable the callhome service:
nzcallhome -on
v To test that call home can reach the IBM servers:
nzcallhome -verifyConnectivity
Connectivity Verification: Alias: Edge_Gateway_1,
Host: stg-edge-cdt.eventsgslb.ibm.com, IP: 9.11.13.208 : SUCCESS
The IP address is one example of the IBM server address. The IP you use could
be different depending upon your location.
v To disable or stop the callhome service during service or upgrades:
nzcallhome -off
v To generate some test SPU event conditions:
nzcallhome -generatePmr hwSpu hwAttnSpu
Syntax
Inputs
Error Messages
The nzconfigcrypto command can return the following errors for invalid
arguments or settings.
Table A-4. The nzconfigcrypto error messages
ERROR: LookupCryptoKey: object "key" not found
The error message indicates that the supplied host key was not found. Check the
keystore and key name that you entered to make sure that you specified them
correctly and retry the command with the correct values. You can use the SHOW
KEYSTORE keystore VERBOSE command to display the names of the keys and
their types in the keystore.
ERROR: New Hostkey can't be retrieved keystore.keyname
The error message indicates that the key name values are for a key that is not of
type AES_256. An AES_256 type key is required for the nzconfigcrypto command.
This command takes several seconds to run and results in multiple lines of output.
Programs with no revisions are either scripts or special binaries.
Syntax
Description
Usage
Syntax
Options
For information about the nzconvert options, see the IBM Netezza Database User’s
Guide.
Description
Syntax
Options
Description
Usage
Syntax
Inputs
Description
Usage
Syntax
Inputs
The nzhealthcheck command takes the following input options. The input options
have two forms for the option names.
Table A-9. The nzhealthcheck input options. The table provides information about the
options that the nzhealthcheck command takes and their explanations.
Option Description
sysinfo Generates the Sysinfo Report.
-a, --detail Shows all rule results.
Description
You use the nzhealthcheck command to generate the System Health Check Report.
You can also generate the Sysinfo Report with additional data about the system
that might be of use when troubleshooting problems. When running the command,
the Netezza system can be in any operational state.
Usage
The following provides some of the command uses and sample syntax:
v To generate the System Health Check Report with all the evaluated rules listed:
nzhealthcheck -a
v To generate the Sysinfo Report:
nzhealthcheck sysinfo
Syntax
Inputs
{-d | --db} database
The name of the history database from which to delete history data. For example:
-d "MyDbName"
{-n | --host} NZ_HOST
The host name of the system where the database resides. The default and only
value for this option is NZ_HOST.
{-u | --user} user
The user account that permits access to the database. The default is NZ_USER. The
user must be able to access the history database and must have the Delete
privilege for the history database tables. For example: -u "MyUserName"
{-p | --pw} password
The password for the user account that permits access to the database. The default
is NZ_PASSWORD.
{-t | --time} "<yyyy-mm-dd[,hh:mm[:ss]]>"
The date and time threshold. All data recorded before this date and time is deleted.
The date (year, month, and day) is required. The time (hours, minutes, and
seconds) is optional. The default time is 12:00 AM (midnight, start of day).
-f | --force
Do not prompt for confirmation.
-g | --groom
Run a groom operation (that is, automatically issue the GROOM TABLE command)
after the cleanup operation completes. This updates the zone maps for the history
data tables, which improves the performance of queries against those tables.
-h | --help
Display the usage and syntax for the command.
Description
Usage
The following command removes from the history database with the name histdb
any history data that was collected before October 31, 2013, and automatically
grooms the history tables afterward:
[nz@nzhost ~]$ nzhistcleanupdb -d histdb -u smith -pw password -t
"2009-10-31" -g
About to DELETE all history entries older than 2013-10-31 00:00:00
(GMT) from histdb.
Proceed (yes/no)? :yes
BEGIN
DELETE 0
DELETE 98
DELETE 34
DELETE 0
DELETE 0
DELETE 188
DELETE 188
DELETE 62
DELETE 65
DELETE 0
DELETE 0
DELETE 0
DELETE 503
COMMIT
Related reference:
“The nzhistcreatedb command”
Use this command to create a history database including all the tables, views, and
other objects needed to collect history data.
Syntax
Inputs
{-d | --db} database
The name of the history database to create. For example: -d "MyDbName"
{-n | --host} NZ_HOST
The host name of the system on which the database resides. The default and only
possible value for this option is NZ_HOST.
{-o | --owner} owner
The user account that owns the history database. For example: -o "MyUserName"
{-p | --pw} password
The password for the owner user account. The default is NZ_PASSWORD.
{-u | --user} user
The load user, that is, the user account that is to be used to load history data into
the database. The load user is automatically granted the privileges that are needed
to perform the corresponding insert operations. The default load user is the
database owner. You cannot specify the admin user to be the load user. For
example: -u "MyUserName"
Outputs
ERROR: GrantRevokeCommand: group/user "name" not found
ERROR: History database dev not created:
The command failed because the specified database name exists on the system.
nzsql: Password authentication failed for user 'name'
ERROR: History database hist1 not created:
The command failed because the specified owner does not have Create Database
privileges on the system.
Description
Usage
The following command creates a history database with the name histdb:
[nz@nzhost ~]$ nzhistcreatedb -d histdb -t query -v 1 -u jones
-o smith -p password123
This operation may take a few minutes. Please wait...
Creating tables .................done
Creating views .......done
Granting privileges ....done
History database histdb created successfully !
The command can take several minutes to complete, depending on how busy the
IBM Netezza system is.
Related concepts:
“Creating a history database” on page 14-5
You can create any number of history databases, but the system can write to only
one at a time.
“Maintaining a history database” on page 14-13
A history database grows as it accumulates history information. Plan for routine
maintenance of the history database. Determine the amount of time that you need
to keep the history data before you can archive or delete it (for example, keep only
data for the current and previous month, or the current and previous quarter, or
for the current year only).
Related reference:
“The nzhistcleanupdb command” on page A-19
Issue the nzhistcleanupdb command to delete outdated history information from a
history database.
In the rare situations when a Netezza host server or disk fails, but the SPUs and
their data are still intact, you can restore the /nz/data directory (or whatever
directory you use for the Netezza data directory) from the host backup without the
additional time to restore all of the databases. For more information, see “Host
backup and restore” on page 13-9.
Before you run the nzhostbackup command, you must do one of the following:
v Pause the system.
v Set the NZ_USER and NZ_PASSWORD environment variables to a user who has
permission to pause the system.
v Set NZ_USER to a user who has permission to pause the system, and cache that
user's password.
If you run the nzhostbackup command, change a user's password, and then run
the nzhostrestore command, the password change is lost because the restore
returns the catalog, including the old password, to its state at the time of the
backup.
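For example, the following sketch uses the environment-variable approach; the admin account and the backup location are illustrative values only:
export NZ_USER=admin
export NZ_PASSWORD=password
nzhostbackup /nz/hostbackup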
Inputs
Description
The nzhostbackup command uses the same logic as the nzstart command to
determine the data directory. The command uses the following settings in order:
In addition, if you unset NZ_DIR and NZ_KIT_DIR and then run the nzhostbackup
backup_dir command, the command works because it internally determines the
location of NZ_KIT_DIR, NZ_DIR and NZ_DATA_DIR.
Usage
The nzbackup and nzrestore commands also back up the system catalog and host
data, but in situations where a Netezza host server fails but the SPUs and their
data are still intact, you can use the nzhostrestore command to quickly restore the
catalog data without reinitializing the system and restoring all of the databases.
For more information, see “Host backup and restore” on page 13-9.
Note: After you run nzhostrestore, the system reverts to the mirroring roles (that
is, topology) it had when it was last online.
After you use the nzhostrestore command, you cannot run an incremental backup
on the database; you must run a full backup first.
Syntax
Options
Use caution with this switch; if you are not sure that the catalog versions
are the same, do not bypass the checks. Contact Netezza Support for
assistance.
-D data_dir Specifies the data directory to restore (default /nz/data).
-f Specifies force, which causes the command to accept the defaults for
prompts and confirmation requests. The prompts are displayed at the
beginning and end of the program.
Restore host data archived Thu May 25 11:24:58 EDT 2006?
(y/n) [n]
Warning: The restore will now rollback spu data to Thu
May 25 11:24:58 EDT 2006. This operation cannot be
undone. Ok to proceed? (y/n) [n]
Description
Notes
If tables are created after the host backup, the nzhostrestore command marks
these tables as “orphaned” on the SPUs. They are inaccessible and consume disk
space. The nzhostrestore command checks for these orphan tables and creates a
script that you can use to drop orphaned user tables.
For example, if you ran the nzhostrestore command and it found orphaned tables,
you would see the following message:
Checking for orphaned SPU tables...
WARNING: found 2 orphaned SPU table(s).
Run 'sh /tmp/nz_spu_orphans.18662.sh' after the restore has completed
and the system is Online to remove the orphaned table(s).
Usage
The nzhw command replaces the nzspu and nzsfi commands. Use this command to
show information about the system hardware and activate or deactivate
components, locate components, or delete them from the system.
Syntax
The -off option turns off the indicator LED for the specified
component or all SPUs and disks.
Note: If the hardware type specified for the command does
not have an LED, the command only displays the location
string for that component.
Description
Failover information
When you use the nzhw command to fail over a component, the command checks
the system and the affected component to make sure that the command is
appropriate before proceeding. Currently, the command operates only on SPUs and
disks.
For example, if you try to fail over an active component that does not have an
available secondary component (such as SPUs that can take ownership of the data
slices that are managed by the SPU that you want to fail over, or an active mirror
for the disk that you want to fail over), the command returns an error. Similarly, if
you try to fail over a component that is not highly available, the command returns
an error.
Usage
Syntax
Options
Description
You use the nzkey command to create and manage the authentication keys (AEKs)
for the SED drives in the host and in the storage arrays of the IBM PureData
System for Analytics N3001 appliances. The command logs information when it
runs to the /nz/kit/log/keydb/keydb.log.
The following provides some of the command uses and sample syntax:
v To generate a hostkey:
[root@nzhost-h1 ~]# /nz/kit/bin/adm/nzkey generate -hostkey -file /tmp/my_hostkey
Host key written to file
v To list the key labels:
[root@nzhost-h1 adm]# /nz/kit/bin/adm/nzkey list
hostkey1
hostkey1Old
hostkey2
hostkey2Old
spuaek
spuaekOld
Syntax
Inputs
If you include the -sklm option, the command does not take
any action since keystore passwords are not used when the
system is configured to use IBM Security Key Lifecycle
Manager (ISKLM) for managing AEKs.
Options
Description
You use the nzkeydb command to create a keystore that stores and manages the
current and previous host and SPU AEKs for the IBM PureData System for
Analytics N3001 appliances. The command logs information when it runs to
/nz/kit/log/keydb/keydb.log.
Usage
The following provides some of the command uses and sample syntax:
v To create a new keystore:
Syntax
Options
Description
You use the nzkeybackup command to create a compressed tar file backup of the
key store. The command validates the key store before it creates the backup to
alert you to any problems. You should create a backup of the key store after you
change the AEKs. As a best practice, you should store the backup tar file in a safe
location that is not on the NPS system as a precaution in the event of a disk
problem on your appliance. The command logs information when it runs to
/nz/kit/log/keydb/keydb.log.
Important: Make sure that you control access to the nzkeybackup and the
nzhostbackup compressed tar files because they contain the key store. If access is
not restricted, the contents of the key store could be read by an authorized Netezza
operating system user. Although the key store is encrypted, users who have access
to the backup files could read the key store with the nzkey command.
Usage
Syntax
Options
Description
You use the nzkeyrestore command to replace or restore a key store from a
backup file. Typically you would use this command if your key store was
corrupted or deleted from the NPS host. The NPS system must be in the Stopped
state before you can restore the key store. The command logs information when it
runs to the /nz/kit/log/keydb/keydb.log.
Usage
Syntax
Inputs
Options
Description
You use the nzkmip command to extract, populate, and test the ISKLM server
connection and management for the authentication keys (AEKs) that are used for
the SED drives in the host and in the storage arrays of the IBM PureData System
for Analytics N3001 appliances.
Usage
The following provides some of the command uses and sample syntax:
v To extract a key:
[root@nzhost ~]# /nz/kit/bin/adm/nzkmip get
-uuid KEY-70a07fcc-1a01-4628-979c-bd75fe5e4557
Key Value : t7Nº×nq¦CÃ<"*"ºìýGse»¤;|%
v To test a key:
[root@nzhost ~]# /nz/kit/bin/adm/nzkmip test -label spuaek
-file /tmp/new_spukey.pem
Connecting to SKLM server at tls://1.2.3.4:5696
Success: Connection to SKLM store succeeded
For a complete description of the nzload command and how to load data into the
IBM Netezza system, see the IBM Netezza Data Loading Guide.
Related reference:
“The nzconvert command” on page A-10
Use the nzconvert command to convert between any two encodings, between these
encodings and UTF-8, and from UTF-32, UTF-16, or UTF-8 to NFC, for loading
with the nzload command or external tables.
Syntax
Inputs
The nznpssysrevs command has only a -h input option to display the usage for
the command.
Description
Usage
Syntax
Inputs
Options
Description
Related commands
Usage
Note: Starting in release 6.0, the SQL GROOM TABLE command replaces the
nzreclaim command. The nzreclaim command is now a “wrapper” that calls the
GROOM TABLE command to reclaim space. If you have existing scripts that use
the nzreclaim command, those scripts continue to run, although some of the
options might be deprecated since they are not used by GROOM TABLE. You
should use the GROOM TABLE command in your scripts.
Syntax
Inputs
Options
Description
Related commands
Use the TRUNCATE command to quickly delete all rows in a table without
requiring a GROOM TABLE afterwards.
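For example, assuming the nzsql -c option to run a single command, an invocation of the following form empties a table; the database and table names are placeholders:
nzsql -d sales -c "TRUNCATE TABLE orders"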
Usage
On Linux systems, you can use the nzcontents command to display the revision
and build number of all the executable files, plus the checksum of binaries.
Syntax
Description
Usage
Syntax
Inputs
Description
When you run the nzsession abort command, the client manager uses the session
ID to abort the process.
For example, to abort an nzload session with the ID 2001, the system does the following:
1. The system sends the nzsession abort command to the client manager.
2. The client manager identifies which nzload session to abort.
3. The loadmgr sends the abort signal to the loadsvr and starts the timer.
4. The loadmgr waits the specified timeout value for the loadsvr to abort the
session. The command uses either the default value, or the timeout you specify
on the command line.
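For example, assuming cached credentials, a command of the following form aborts that session:
nzsession abort -id 2001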
The following table describes the nzsession show output information. The admin
user can see all the data for sessions; other users can see all the sessions, but data
for user, database, client PID, and SQL command are hidden unless the user has
privileges to see that data.
Table A-33. Session information
Column Description
ID The ID of the session.
Type The type of session, which can be one of the following:
Client Client or UI session
SQL Database SQL session
Bnr Back up or restore session
Reclaim
Disk reclamation session.
User The name of the session owner.
Start Time The time the session was started.
PID The process identification number of the command you are running.
Database The name of the database.
Schema The name of the schema.
State The state of the session, which can be one of the following:
Idle The session is connected but it is idle and waiting for a SQL
command to be entered.
Active The session is running a command (usually applies to a SQL
session that is running a query).
Connect
The session is connected, but no commands have been issued.
Tx-Idle The session is inside an open transaction block (BEGIN
command) but it is idle and waiting for a SQL command to be
entered within the transaction.
Priority Name The priority of the session, which can be one of the following:
Critical The highest priority for user jobs.
High The session jobs are running on the high priority job queue.
Normal
The session jobs are running on the large or small job queue.
Low The lowest priority for user jobs.
Client IP The IP address of the client system.
Client PID The process identification number of the client system.
Command The last command executed.
Usage
Syntax
Inputs
Options
Description
Usage
Note: You must run nzstart on the host. You cannot run it remotely.
Syntax
Description
Notes
The nzstart script has a default timeout of 120 seconds plus 3 seconds for each
SPU. This default is subject to change in subsequent releases.
If the system is not started by this time, the nzstart command returns and prints a
warning message that indicates that the system failed to start in xxx seconds. The
system, however, continues to try to start. You can override the default time-out by
specifying a timeout.
Syntax
Inputs
Options
Description
Usage
Syntax
Inputs
Options
Description
Usage
The nzstop command is a script that initiates a system stop by halting all
processing. Any queries in process are aborted and rolled back. Queries and
processes typically stop very quickly.
Note: You must run nzstop while logged in as a valid Linux user such as nz on
the host. You cannot run the command remotely.
Inputs
Options
Description
Usage
Syntax
Inputs
Options
Description
Usage
Dataslice Issues :
Syntax
Inputs
Usage
CAUTION:
Do not run these commands unless explicitly directed to do so by Netezza
Customer Service. Running these commands without supervision can result in
system crashes, data loss, or data corruption.
The following table describes some of the more common commands in the bin/adm
directory. These commands are divided into the following categories:
Safe Running the command causes no damage, crashes, or unpredictable
behavior.
Unsafe
Running the command with some switches can cause no harm, but with
other switches can cause damage.
Dangerous
Running the command can cause data corruption or a crash.
These commands are unsupported and have not been as rigorously tested as the
user commands.
Table A-46. Diagnostic commands
Command Usage Description
nzconvertsyscase Unsafe Converts the Netezza system to the opposite case,
for example, from upper to lowercase. For more
information, see “The nzconvertsyscase command”
on page A-70.
nzdumpschema Safe Dumps a database schema and some statistics
information. This command is useful when you are
attempting to understand a class of query
optimization issues. For more information, see “The
nzdumpschema command” on page A-71.
nzlogmerge Safe Merges multiple system log files in to a
chronological sequence. For more information, see
“The nzlogmerge command” on page A-73.
nzdbg Unsafe Enables system diagnostic messages. Although many
invocations of this command are safe, some
invocations can cause your system to crash.
nzdumpcat Unsafe Dumps the system catalog information. This
command can damage the system catalog if used
carelessly.
nzdumpmem Unsafe Dumps various database shared-memory data
structures. Although many invocations of this
command are safe, some invocations can cause your
system to crash.
Your database must be offline when you use this command (that is, use nzstop
first to stop the system).
Syntax
Important: You must specify either -l or -u. If you do not specify either option,
the command displays an error. After you convert your system, you must rebuild
all views and synonyms in every database.
Description
Note: If you want to convert the identifier case within a database to the
opposite of the default system case, contact Netezza Support.
Usage
Note: Because no actual data is dumped, you cannot use this command to back up
a database.
Description
Privileges required
You must be the admin user to run the nzdumpschema command.
Common tasks
Use the nzdumpschema command to dump the table and view definitions,
the database statistical information, and optionally, any UDXs that are
registered within the database. It is a diagnostic tool that you can use to
troubleshoot various problems that relate to a query. The nzdumpschema
command is a troubleshooting tool and should not be used on a regular
basis in a production system. The command requires significant memory
resources to run, and if used at the same time as other memory-consuming
commands such as nzbackup, the host could run out of memory and
restart.
v You must run it from the host IBM Netezza system.
v You cannot use -u, -pw, -host, or other nz CLI options.
v You must set the NZ_USER and NZ_PASSWORD environment variables.
However, the command ignores the NZ_SCHEMA setting.
v You must specify a database.
v If the database includes registered user-defined objects (UDXs), you can
also dump copies of the object files that were registered for use with
those routines.
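For example, assuming that the command takes the database name as its argument and writes to standard output, an invocation might look like the following; the database name and output file are placeholders:
nzdumpschema sales > /tmp/sales_schema.out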
Usage
To merge all the log files, the nzlogmerge command syntax is:
nzlogmerge list of files to merge
Syntax
The host manages the other Netezza components and serves as an administration
point that you can use to manage Netezza functions. The host also converts
queries into optimized execution plans that take advantage of the strengths of the
Netezza architecture.
Most Netezza models are high availability (HA) systems and have two hosts; one
host serves as the active host and one as the standby host, which takes over when
the active host encounters problems or is manually shut down. Any changes that
you make to the Linux configuration of one host, such as adding Linux users or
groups, managing crontab schedules, or changing NTP settings, you must also
make to the second host to ensure that the hosts have identical configurations. It is
important to make these changes correctly on both hosts to ensure that the
environments are the same.
This section describes some of the common Linux procedures. For more
information, see the Red Hat documentation.
Related concepts:
Chapter 11, “Security and access control,” on page 11-1
This section describes how to manage IBM Netezza database user accounts, and
how to apply administrative and object permissions that allow users access to
databases and capabilities. This section also describes user session controls such as
row limits and priority that help to control database user impacts on system
performance.
Linux accounts
You can create Linux user accounts to manage user access to the IBM Netezza
system.
Accounts can refer to people (accounts that are associated with a physical person)
or logical users (accounts that exist for an application so that it can perform a
specific task). The system assigns a user ID to every file that a user account creates.
Associated with each file are read, write, and execute permissions.
Note: A Linux user or group does not have Netezza database access.
Related concepts:
“Netezza database users and user groups” on page 11-1
To access the IBM Netezza database, users must have Netezza database user
accounts.
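For example, a useradd invocation of the following form creates a Linux account with supplementary groups; the group list shown is illustrative:
useradd -G staff,admin,dev kilroy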
This useradd command creates a user that is called kilroy and also a user private
group called kilroy. It also adds user kilroy to the staff, admin, and dev Linux
groups.
For Netezza HA systems, after you add the Linux account on one host, you must
create exactly the same user account with the same userid value on the other host.
To create Linux users in a Netezza HA environment, use the following procedure:
1. Log in to Netezza host 1 as the root or super user.
2. Use the useradd command to create your new Linux user:
[root@nzhost-h1 ~]# useradd -p usr_test usr_test
3. Confirm the userid and groupid for the new user account:
[root@nzhost-h1 ~]# grep usr_test /etc/passwd
usr_test:x:501:501::/home/usr_test:/bin/bash
The first number after the x is the userid and the second number is the
groupid.
4. Connect to Netezza host 2 as the root or super user.
5. Use the useradd command to create your new Linux user with the same userid:
[root@nzhost-h2 ~]# useradd -p usr_test -u 501 usr_test
6. Confirm that the account on host 2 has the same user name, userid, and
groupid:
[root@nzhost-h2 ~]# grep usr_test /etc/passwd
usr_test:x:501:501::/home/usr_test:/bin/bash
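The following usermod and userdel invocations are illustrative sketches of the commands that are described next; the group list is an example:
usermod -G staff,dev kilroy
userdel kilroy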
This usermod command changes the groups to which the kilroy user belongs.
This userdel command deletes the kilroy account, but does not delete kilroy's
home directory.
Linux groups
Groups are logical expressions of an organization. Groups relate certain users and
give them the ability to read, write, and execute files, which they might not
directly own. You can create and use Linux groups to associate certain users that
have similar permissions.
For Netezza HA systems, after you add the Linux group on one host, you must
create exactly the same group with the same groupid value on the other host. To
create Linux groups in a Netezza HA environment, use the following procedure:
1. Log in to Netezza host 1 as the root or super user.
2. Use the groupadd command to create your new Linux group:
[root@nzhost-h1 ~]# groupadd staff
3. Confirm the groupid for the new group:
[root@nzhost-h1 ~]# grep staff /etc/group
staff:x:501:
The first number after the x is the groupid.
4. Connect to Netezza host 2 as the root or super user.
5. Use the groupadd command to create your new Linux group with the same
groupid:
[root@nzhost-h2 ~]# groupadd -g 501 staff
6. Confirm that the group on host 2 has the same groupid:
[root@nzhost-h2 ~]# grep staff /etc/group
staff:x:501:
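For example, the following command removes the group that was created in the previous procedure:
groupdel staff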
This groupdel command deletes the staff group. You cannot remove a user's
primary group. You must first remove the user or change the user's primary group.
The following sections explain some of the tasks you might perform.
CAUTION:
Do not follow the general Linux steps to change the host name or IP address
because the changes can result in split-brain or similar HA problems and system
downtime.
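For example, a shutdown invocation such as the following reboots the host immediately; it illustrates the switches that are described next:
shutdown -r now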
The -r switch causes a reboot. You can specify either the word now or any time
value. You can use the -h switch to halt the system. In that case, Linux also powers
down the host if it can.
If you have a IBM Netezza HA system, use caution when you are shutting down a
host. Shutting down the active host causes the HA software to fail over to the
standby host to continue Netezza operations, which might not be what you
intended.
CAUTION:
Never reformat the host disks. Doing so results in loss of data and corruption of
the file system. If you are experiencing errors, see the following section and
contact Netezza Support.
To run the fsck command manually, enter the following command, where x
specifies the disk partition number:
fsck /dev/hdax
Answer yes to all prompts. The goal is to recover metadata, but some data might
be lost. The fsck command should return your system to a consistent state; if not,
contact IBM Netezza Support.
Restriction: Do not use fsck to repair mounted partitions. If you are trying to
repair a mounted partition, you must first unmount the partition by using the
umount command, or you must boot the host from the emergency repair CD (the
installation has a repair mode) to fix a partition such as the root partition (/).
The top command displays real-time information about CPU activity and lists the
most CPU intensive tasks on the system. The system updates the display every 5
seconds by default.
v To display system CPU utilizations, enter: top
v To update the display every 10 seconds, enter: top -d 10
The difference between these two commands is that you run the kill command
with a process number, and you run the killall command with a process name.
CAUTION:
Never use the Linux kill commands to stop an IBM Netezza database user
session or an nz command process. Killing sessions or Netezza processes can
cause unwanted results such as loss of data or Netezza software restarts. Instead,
use the nzsession abort command to abort sessions, and use the documented
commands such as nzstop to stop Netezza services.
With both commands, you can specify which type of signal to send to stop the
task. An application can intercept various types of signals and keep running,
except for the kill signal (signal number 9, mnemonic SIGKILL). Any UNIX system
that receives a SIGKILL for a process must stop that process without any further
action to meet POSIX compliance (if you own the task or you are root). Both the
kill and killall commands accept the signal number as a hyphen argument.
To stop the loadmgr process (number 2146), you can use any of the following
commands:
v kill -9 2146
v killall -KILL loadmgr
v kill -SIGKILL 2146
v killall -9 loadmgr
Note: When you kill a process with the kill signal, you lose any unsaved data for
that process.
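For example, the following date invocation uses the MMDDhhmmCCYY format and is shown here as an illustrative sketch:
date 060101302009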
The sample date command sets the date to June 1, 2009, 1:30 AM EST.
Find files
You can use several commands to locate files, commands, and packages:
locate string
Locates any file on the system that includes the string within the name.
The search is fast because it uses a cache, but it might not show recently
added files.
find . -name "*string*"
Finds any file in the current directory, or below the current directory, that
includes string within the name.
which command
Displays the full path for a command or executable program.
rpm -qa
Lists all the packages that are installed on the host.
Common commands include more and editors such as vi. The less command
offers a powerful set of features for viewing file content, and can even display
non-text or compressed files such as the compressed upgrade logs. The view
command is a read-only form of the vi command and has many features for file
viewing.
As a best practice, do not use a file editor such as vi to view active log files such
as /var/log/messages or the pg.log file. Since vi opens the file for viewing or
editing, the locking process can block processes that are writing to the log file. Use
commands such as more or less instead.
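For example, to page through the system messages log that is mentioned above:
less /var/log/messages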
Miscellaneous commands
You can use the following commands for system administration:
nohup command
Runs a command immune to hangups and creates a log file. Use this
command when you want a command to run no matter what happens
with the system. For instance, use this command if you want to avoid
having a dialup, VPN timeout, or a disconnected network cable cancel
your job (see the example after this list).
unbuffer command
Disables the output buffering that occurs when the program's output is
redirected. Use this command when you want to see output immediately.
UNIX systems buffer output to a file so that a command can seem hung
until the buffer is dumped.
colrm [startcol [endcol]]
Removes selected columns from a file or stdin.
split Splits a file into pieces.
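For example, the following sketch runs a database backup that survives a disconnected session; the database name and backup directory are placeholders:
nohup nzbackup -db sales -dir /backups &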
User views
The following table describes the views that display user information. To see a
view, users must have the privilege to list the object.
Table C-1. User views
_v_aggregate
Returns a list of all defined aggregates. Output: objid, Aggregate, Owner,
CreateDate. Ordered by: Aggregate. nzsql: \da
_v_database
Returns a list of all databases. Output: objid, Database, Owner, CreateDate.
Ordered by: Database. nzsql: \l
_v_datatype
Returns a list of all system data types. Output: objid, DataType, Owner,
Description, Size. Ordered by: DataType. nzsql: \dT
_v_function
Returns a list of all defined functions. Output: objid, Function, Owner, CreateDate,
Description, Result, Arguments. Ordered by: Function. nzsql: \df
_v_group
Returns a list of all groups. Output: objid, GroupName, Owner, CreateDate.
Ordered by: GroupName. nzsql: \dg
_v_groupusers
Returns a list of all users of a group. Output: objid, GroupName, Owner,
UserName. Ordered by: GroupName, UserName. nzsql: \dG
_v_index
Returns a list of all user indexes. Output: objid, IndexName, TableName, Owner,
CreateDate. Ordered by: TableName, IndexName. nzsql: \di
_v_operator
Returns a list of all defined operators. Output: objid, Operator, Owner, CreateDate,
Description, oprname, oprleft, oprright, oprresult, oprcode, and oprkind. Ordered
by: Operator. nzsql: \do
_v_procedure
Returns a list of all the stored procedures and their attributes. Output: objid,
procedure, owner, createdate, objtype, description, result, numargs, arguments,
proceduresignature, builtin, proceduresource, sproc, and executedasowner.
Ordered by: Procedure.
_v_relation_column
Returns a list of all attributes of a relation (table, view, index). Output: objid,
ObjectName, Owner, CreateDate, ObjectType, attnum, attname,
format_type(attypid,attypmod), and attnotnull. Ordered by: ObjectName and
attnum.
_v_relation_column_def
Returns a list of all attributes of a relation that have defined defaults. Output:
objid, ObjectName, Owner, CreateDate, Objecttype, attnum, attname, and adsrc.
Ordered by: ObjectName and attnum.
_v_sequence
Returns a list of all defined sequences. Output: objid, SeqName, Owner, and
CreateDate. Ordered by: SeqName. nzsql: \ds
_v_session
Returns a list of all active sessions. Output: ID, PID, UserName, Database,
ConnectTime, ConnStatus, and LastCommand. Ordered by: ID. nzsql: \act
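For example, assuming cached credentials and the nzsql -c option, the following command lists the active sessions by querying one of these views; inside an nzsql session, the \act slash command returns the same information:
nzsql -c "SELECT * FROM _v_session;"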
System views
The following table describes the views that display system information. You must
have administrator privileges to display these views.
Table C-2. System views
_v_sys_group_priv
Returns a list of all defined group privileges. Output: GroupName, ObjectName,
DatabaseName, Objecttype, gopobjpriv, gopadmpriv, gopgobjpriv, and
gopgadmpriv. Ordered by: DatabaseName, GroupName, and ObjectName. nzsql:
\dpg <group>
_v_sys_index
Returns a list of all system indexes. Output: objid, SysIndexName, TableName, and
Owner. Ordered by: TableName and SysIndexName. nzsql: \dSi
_v_sys_priv
Returns a list of all user privileges. This list is a cumulative list of all groups and
user-specific privileges. Output: UserName, ObjectName, DatabaseName,
aclobjpriv, acladmpriv, aclgobjpriv, and aclgadmpriv. Ordered by: DatabaseName
and ObjectName. nzsql: \dp <user>
_v_sys_table
Returns a list of all system tables. Output: objid, SysTableName, and Owner.
Ordered by: SysTableName. nzsql: \dSt
_v_sys_user_priv
Returns a list of all defined user privileges. Output: UserName, ObjectName,
DatabaseName, ObjectType, uopobjpriv, uopadmpriv, uopgobjpriv, and
uopgadmpriv. Ordered by: DatabaseName, UserName, and ObjectName. nzsql:
\dpu <user>
_v_sys_view
Returns a list of all system views. Output: objid, SysViewName, and Owner.
Ordered by: SysViewName. nzsql: \dSv
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the web at "Copyright and
trademark information" at www.ibm.com/legal/copytrade.shtml.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other
names may be trademarks of their respective owners.
Red Hat is a trademark or registered trademark of Red Hat, Inc. in the United
States and/or other countries.
D-CC, D-C++, Diab+, FastJ, pSOS+, SingleStep, Tornado, VxWorks, Wind River,
and the Wind River logo are trademarks, registered trademarks, or service marks
of Wind River Systems, Inc. Tornado patent pending.
Applicability
These terms and conditions are in addition to any terms of use for the IBM
website.
Personal use
You may reproduce these publications for your personal, noncommercial use
provided that all proprietary notices are preserved. You may not distribute, display
or make derivative work of these publications, or any portion thereof, without the
express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make
derivative works of these publications, or reproduce, distribute or display these
publications or any portion thereof outside your enterprise, without the express
consent of IBM.
Rights
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
Index X-3
history (continued) host (continued) Kerberos tickets
FORMAT_TABLE_ACCESS() host table 16-1 cache location 11-26
function 14-32 installation directory 1-2 keystore for SEDs 6-4
helper functions 14-31 Host CPU Table, nzstats 16-1, 16-3 kill command B-5
load settings 14-7 Host Filesystem Table, nzstats 16-1, 16-3 killall command B-5
loading history data 14-7 Host Interfaces Table, nzstats 16-1, 16-4 kit
LOADINTERVAL 14-7 Host Mgmt Channel Table, nzstats 16-1, link 1-2
LOADMAXTHRESHOLD 14-7 16-4 optimized 1-2
LOADMINTHRESHOLD 14-7 Host Net Table, nzstats 16-1, 16-5 rev 1-2
plan data 14-5 Host Table, nzstats 16-1, 16-6 krb5.conf file 11-24
query data 14-5 HTTPS port 443, opening 9-8 krb5.keytab file 11-25
service data 14-5 HTTPS verification 9-17
state data 14-5 HW Mgmt Channel Table, nzstats 16-1
table data 14-5
tables 14-14
hwDiskFull event type 8-7
hwFailed event type 8-7
L
LD_LIBRARY_PATH 2-6
CLI usage 14-27 hwHeatThreshold event type 8-7
LDAP authentication 11-4, 11-18
column access history 14-19 hwNeedsAttention event type 8-7
about 11-19
end of query table 14-24 hwPathDown event type 8-7
commands 11-27
failed authentication hwRestarted event type 8-7
failures 11-19
attempts 14-20 hwServiceRequested event type 8-7
returning to local
log entries for operations 14-21 hwThermalFault event type 8-7
authentication 11-20
overflow of query string 14-25 hwVoltageFault event type 8-7
LDAP server
plan history at beginning of
managing 11-19
plan 14-23
required information for
plan history at end of plan
execution 14-22
I Netezza 11-19
IBM Security Key Lifecycle Manager, LDAP, configuring SSL security 11-21
schema version 14-31
setup 6-4 ldap.conf file, editing for SSL
session termination 14-28
IBM Support servers, accessing 9-8 configuration 11-21
source Netezza system 14-21
identifiers, in CLI 3-4 ldap.conf.orig file 11-20
start of query data 14-25
inactive hardware 5-8 less command B-7
state changes 14-30
incompatible hardware 5-8 limit clause 11-36
table access history 14-30
incremental backup restore 13-32 Linux
views 14-14
indirect object privileges 11-17 accounts B-1
history configuration
inherited privileges 11-17 adding groups B-3
altering 14-11
initial configuration 1-1 boot directories 1-2
changing 14-11
initialized, system state 7-4 changing passwords B-3
display settings 14-12
initializing, system state 7-4, 7-10 command line editing B-8
show settings 14-12
insert privilege 11-11 common procedures B-1
history data 14-1
installation, Netezza 1-1 deleting accounts B-2
database types 14-1
integer deleting groups B-4
database versions 14-2
nzDbosSpill file 7-17 directories, displaying B-7
files 14-3
transaction id 12-5 file content, displaying B-7
loading process 14-2
interfaces 1-8 files, finding B-7
log directories 14-4
invalid state 5-10 groups B-3
managing collection of 14-9
inventory reporting log files, viewing B-7
processes 14-2
callhome 9-11 miscellaneous commands B-8
setup 14-4
IP addresses, for HA system 4-15 modifying accounts B-2
staging process 14-2
ISKLM, setup 6-4 modifying groups B-4
starting collection of 14-11
ismainplan 14-23 passwords 11-19
stopping collection of 14-11
rebooting B-4
history database 14-1
release level B-6
changing ownership 14-10
creating 14-5 J remote access 1-8
setting up accounts B-1
dropping 14-10 job
statistics B-5
maintaining 14-13 examples 15-29
stopping processes B-5
ownership, changing 14-10
string matching B-7
history database types 14-1
system errors B-5
history database versions 14-2
history-data events 8-30
K system time B-6
Kerberos authentication 11-4, 11-18, timing commands B-8
history, sql sessions 3-9
11-27 user 1-1
home directory 1-2
about 11-22 viewing statistics B-5
host
commands 11-27 Linux users, adding B-1
host CPU table 16-1
configuring 11-23 Linux-HA
host filesystem table 16-1
testing 11-26 about 4-1
host interfaces table 16-1
updating 11-26 active host, identifying 4-5
host management channel table 16-1
Kerberos configuration file 11-24 administration 4-2
host net table 16-1
Kerberos keytab file 11-25 failover 4-1
Index X-5
nzlogmerge command
   description A-69
   syntax A-73
nzlogmerge.info command A-69
nzmakedatakit command A-69
nznpssysrevs command
   description A-43
nzpassword command 2-13, 3-1, A-1
nzpassword command, storing passwords 2-14
nzpassword, command A-43
nzpush command, description A-69
nzreclaim command A-1
   description 3-1
nzresetxlog command A-69
nzresolv service 1-5
nzrestore command
   description 3-1, A-1, A-47
   overview 13-22
   syntax 13-23
nzrev command 7-1
   description 3-1, A-1, A-47
   rev 7-1
nzscratch directory 1-2
nzsession command
   arguments 12-23
   changing priority 15-29
   description 3-1, A-1, A-49
   examples 12-25
   viewing 12-23
nzspupart command, description A-54
nzsqa command A-69
nzsql command
   description 3-1, A-1, A-56
   managing database 3-5
   managing transactions 12-25
   ON_ERROR_STOP 3-9
   resource control file 3-9
   session history 3-9
   sessions 12-23
   slash commands 3-10
nzstart command
   arguments 7-6
   description 3-1, A-1, A-56
nzstate command
   arguments 7-2
   description 3-1, A-1
   errors
      offline system 7-2
nzstats command
   _v_qryhist 12-30
   _v_qrystat 12-30
   Database Table 16-2
   DBMS Group 16-2
   description 3-1, A-1
   Hardware Management Channel Table 16-7
   Host CPU Table 16-3
   Host Filesystem Table 16-3
   Host Interfaces Table 16-4
   Host Mgmt Channel Table 16-4
   Host Network Table 16-5
   Host Table 16-6
   overview 16-1
   Per Table Per Data Slice Table 16-8
   Query History Table 16-9
   Query Table 16-8
   SPU Partition Table 16-10
   SPU Table 16-10
   System Group 16-10
   Table Table 16-11
nzstop command
   arguments 7-7
   description 3-1, A-1, A-63
   example 7-7
nzsystem command
   description 3-1, A-1
   system configuration file A-65
nzvacuumcat, description 7-8
nzzonemapformat command A-68

O
object privileges
   description of 11-11
   security model 11-9
ODBC
   setting logs 11-39
offlining, system state 7-4
ok state 5-10
ON_ERROR_STOP 3-9
online state 5-10
online, system state 7-4, 7-10
operators, runaway query 8-24
opid 14-23
OrExpr event rule 8-11
organization percentage 12-22
organizing key 12-12
overserved group 15-16

P
pages, definition 12-3
pam_cracklib dictionary 11-6
pam_cracklib utilities 11-5
password
   admin user 1-1
   authentication, local 11-19
   clear-text 2-13
   encrypted 2-13
   history 11-7
   nz user 1-1
   NZ_PASSWORD 2-14
   nzpassword command 2-13
   reuse 11-7
   specifying length 11-28
   storing for Netezza users 2-14
password content controls 11-5
password expiration 11-35
PASSWORDEXPIRY setting 11-35
patch release 7-1
paused, system state 7-4
Per Table Per Data Slice Table, nzstats 16-1, 16-8
permissions, backup 13-21
pingd command 4-2
planid 14-23
plans, directory 1-2
Pluggable Authentication Module (PAM), for LDAP 11-4, 11-18
PMRs
   enabling with callhome 9-10
policy, configuring NetBackup 13-35
ports, numbers 2-10
postgres, description 7-8
postmaster, description 7-8
preonline, system state 7-4
preonlining, system states 7-10
preptime 14-23
prioritized query execution 15-24, 15-25
priority
   assigning to jobs 15-24
   example 15-29
   levels 15-25
   nzadmin tool A-49
   plans 15-25
privileges
   about 11-9
   backup 13-15
   client session 11-17
   create table 13-21
   database statistic 11-17
   displaying 11-12
   indirect 11-17
   log on 11-17
   nzcontents A-4
   nzrev A-4
   nzstart A-4
   nzstop A-4
   object privileges 11-11
   restore 13-28
   transaction 11-17
procedure, privilege 11-15
processes
   displaying B-5
   stopping on Linux B-5
ps command B-5
public group 1-1, 11-3
public views, system 11-40

Q
qcrestart 14-23
Query History Table
   _v_qryhist 12-30
   nzstats 16-1, 16-9
Query Table
   _v_qrystat 12-30
   nzstats 16-1, 16-8
queuetime 14-23

R
random distribution, benefits 12-10
rebooting Linux B-4
recoverable internal error 7-11
Red Hat 1-8
redirecting restore 13-40
regen events 8-26
regeneration
   manual 5-27
regenError event type 8-7
regenFault event type 8-7
registry settings
   changing 7-19
   displaying 7-18
registry, configuration 7-18
release descriptions 7-1
system states (continued)
   online 7-4, 7-10
   paused 7-4
   pausing 7-4
   preonline 7-4
   preonlining 7-10
   resuming 7-10
   stopped 7-4
   types 7-4
system temperature, events 8-29
system time, changing B-6
System view 3-14
systemStuckInState event type 8-7

T
table storage, about 12-3
Table Table, nzstats 16-1, 16-11
table-oriented zone maps 12-19
tables
   base tables 12-10, 12-11
   grooming 12-20
   intra-session tables 12-10, 12-11
   privilege 11-15
   record header 12-4
   special fields 12-4
   table table 16-1
   tuning 12-5
Telnet, remote access 1-8
temperature events
   hardware 8-28
   system 8-29
template event rules 8-1
TFTP
   bootsvr 7-8
   power up 7-10
threshold
   disk space 8-22
   example 8-22
time command B-8
time with time zone, data type 12-4
time, disk usage 12-4
timestamp, temporal type 12-4
Tivoli Storage Manager
   encrypted backups 13-43
tls_cacertfile option 11-21
tls_cert option 11-21
tls_key option 11-21
tmp, directory 1-2
top command B-5
topology balance
   monitoring 8-35
Topology Imbalance event 8-36
TopologyImbalance event 8-35
toporegen command, description A-69
totalsnippets 14-23
transaction ID, overview 12-5
transaction objects
   monitoring 8-33
TransactionLimitEvent event 8-33
transactionLimitEvent event type 8-7
transactions
   examples 12-25
   managing 12-25
   nzsql 12-25
   privilege 11-17
   system limit 15-29
troubleshooting
   enhanced cryptography support 10-7
truncate
   privilege 11-11
tzoffset 14-23

U
uname command B-6
underserved group 15-16
unfence privileges 11-10
uninstalling Windows tools 2-8
UNIX Netezza clients
   installing 2-2
   removing 2-6
unreachable state 5-10
update privilege 11-11
user accounts
   encrypting passwords for 2-13
   passwords, storing 2-14
user names, matching Netezza and LDAP 11-19
user privilege 11-15
useradd command B-1
users
   methods for managing 11-1
   Netezza database 11-1
   rowset limits 11-36
   superuser, Netezza 11-3
   unlocking 11-28

V
vacuum analyze, see generate statistics 12-15
varchar, data type 12-4
variables, environment 2-8
variant release 7-1
Veritas NetBackup 13-34
version, software 7-1
versions of history database 14-2
view command B-7
View, privilege 11-15
viewing
   sessions A-49
   system logs 7-12
views
   _v_aggregate C-1
   _v_database C-1
   _v_datatype C-1
   _v_function C-1
   _v_group C-1
   _v_groupusers C-1
   _v_index C-1
   _v_operator C-1
   _v_qryhist 12-30
   _v_qrystat 12-30
   _v_relation_column C-1
   _v_relation_column_def C-1
   _v_sequence C-1
   _v_session C-1
   _v_sys_group_priv C-2
   _v_sys_index C-2
   _v_sys_priv C-2
   _v_sys_table C-2
   _v_sys_user_priv C-2
   _v_sys_view C-2
   _v_table C-1
   _v_table_dist_map C-1
   _v_table_index C-1
   _v_user C-1
   _v_usergroups C-1
   _v_view C-1
   system 11-40, C-2
voltage fault events 8-33

W
WildcardExpr event rule 8-11
Windows tools 2-6
WLM
   See workload management
workload management 15-1
   compliance 15-16
   compliance reports 15-17
   GRA 15-8
   overserved and underserved groups 15-16
   PQE 15-24
   prioritized query execution 15-24
   priority 15-25
   priority levels 15-25
   resource
      allocation 15-10
      maximum 15-10
      minimum 15-10
      percentage 15-10
      settings 15-10
   short query bias 15-22
   system resource allocation 15-10
   techniques 15-1

X
xid 14-23
xinetd, remote access 1-8

Z
zone map format 12-19
zone maps
   automatic statistics 12-16
   changing formats A-68