O2O workshop documentation

Oracle Support Services


SC SAP Support Services
Oracle Solution Center SAP

Documentation for Workshop 'O2O migration services'


Version 1

© Copyright 2010 Oracle Corporation All rights Reserved

Primary Author: Dr. Stephan Bühne

Contributors:

The following documentation is the official handout for our workshop

'O2O migration services'

The documentation is written from a practical point of view. If you find any errors
or if you have any additional comments, feel free to send an email to:

stephan.buehne@oracle.com

All products and company names are used for identification purposes only and may be
trademarks of their respective owners.
Table Of Contents
1 Introduction......................................................................................................................................5
2 Why using O2O................................................................................................................................6
3 O2O overview..................................................................................................................................7
4 O2O architecture overview...............................................................................................................7
4.1 Used Oracle methods................................................................................................................7
5 O2O Migration Steps........................................................................................................................9
5.1 Choose the correct package version..........................................................................................9
5.2 Preparation of the source system..............................................................................................9
5.2.1 Oracle network configuration.........................................................................................10
5.2.2 File system requirements ...............................................................................................10
6 Installing and executing the PL/SQL package................................................................................13
6.1 Installing the package software...............................................................................................13
6.2 Defining PL/SQL package parameters...................................................................................14
7 Setting up the target system............................................................................................................16
7.1 Setting up tnsnames................................................................................................................16
7.2 Setting up the scheduler..........................................................................................................17
7.3 Database parameters for the migration...................................................................................17
7.4 Migration scenario: Using remote export ..............................................................................18
7.5 Analysis and migration script generation ...............................................................................18
7.6 Migration architecture.............................................................................................................23
7.7 Scheduling software................................................................................................................24
7.8 Migration script conversion....................................................................................................26
7.9 Creation of the target database...............................................................................................28
7.9.1 Common scheduler checks..............................................................................................31
7.9.2 Scheduler status: Preparation phase................................................................................32
7.10 Starting the perform migration phase...................................................................................34
7.11 Monitoring the migration......................................................................................................35
7.11.1 Database monitoring.....................................................................................................35
7.11.2 Scheduler monitoring....................................................................................................36
7.12 Changing the scheduler run-time parameters.......................................................................39
7.12.1 Scheduler sleep time.....................................................................................................39
7.12.2 Parameters which are related to export/ import............................................................39
7.12.3 Parameters related to tables...........................................................................................40
7.12.3.1 Hints for setting table related scheduler parameters.............................................41
7.12.4 Parameters related to index creation.............................................................................42
7.12.4.1 Hints for setting index related scheduler parameters............................................42
7.12.5 Parameters related to table checks................................................................................43
7.12.5.1 Hints for setting verify and statistics related scheduler parameters......................43
7.13 Check phase of the migration...............................................................................................43
7.14 Post migration steps..............................................................................................................45
7.15 Final migration steps.............................................................................................................46
7.16 Creating migration evidence.................................................................................................47
7.17 Preparing customer documentation......................................................................................48
8 Other scheduler parameters............................................................................................................49
9 Different migration scenario...........................................................................................................49
9.1 Using local export...................................................................................................................50
9.2 Changing the Oracle release...................................................................................................51
9.3 Changing the database <SID>................................................................................................52
9.4 Changing schema names.........................................................................................................52
9.5 Excluding users from the migration .......................................................................................52
9.6 Changing the tablespace layout..............................................................................................53
9.6.1 Specifying a different default tablespace name .............................................................54
9.7 Moving single tables to a specific tablespace.........................................................................54
9.8 Enabling index compression...................................................................................................55
9.9 Enabling table compression....................................................................................................55
9.10 Automatic interval range partitioning...................................................................................56
9.11 Secure files in 11g.................................................................................................................57
9.11.1 Secure files compression...............................................................................................57
9.11.2 Long To Lob conversion...............................................................................................57
9.12 Enabling tablespace encryption............................................................................................58
9.13 Using Oracle DataPump.......................................................................................................58
9.13.1 Creating a full datapump export....................................................................................60
9.13.1.1 Errors when using NFS for DataPump on AIX (Metalink# 420582.1).................61
9.13.1.1.1 Cause.............................................................................................................61
9.14 Optimizing target database size............................................................................................62
9.14.1 Using the autoextend feature.........................................................................................62
9.14.2 Setting the maximum data file size...............................................................................63
9.14.3 Specifying a minimum data file size.............................................................................63
9.15 Optimizing network usage....................................................................................................63
9.16 Configuring Session Data Unit.............................................................................................64
9.16.1 Clientside Configuration...............................................................................................65
9.16.2 Serverside Configuration..............................................................................................65
9.17 Configuring I/O Buffer Space...............................................................................................66
9.17.1.1 Determining Bandwidth Delay Product................................................................67
9.17.1.2 Clientside Configuration.......................................................................................67
9.17.1.3 Serverside Configuration.......................................................................................67
10 Enhanced O2O functionality........................................................................................................68
10.1 Using MCP transfer method for large tables........................................................................68
10.1.1 Running MCP migration...............................................................................................70
10.1.2 Recommendation for handling large objects.................................................................70
10.1.3 Known limitations.........................................................................................................71
10.2 Running a migration on the same host..................................................................................71
10.2.1 Approach 1: Modify package script files......................................................................72
10.2.2 Approach 2: Modify server side files............................................................................72
10.3 Running the scheduler on a remote host...............................................................................73
10.4 Running scheduler in Demo-Mode.......................................................................................74
11 Known limitations........................................................................................................................74
11.1 Problems with SAP transaction DB02..................................................................................74
11.2 Database links.......................................................................................................................75
11.3 Unsupported objects..............................................................................................................75
11.4 Oracle 11g.............................................................................................................................75
11.5 Scheduler support for Datapump..........................................................................................75
11.6 Lob Segments........................................................................................................................75
11.7 Excluded Usernames.............................................................................................................76
11.7.1 Defined users to exclude...............................................................................................76
11.7.2 Defined roles to exclude................................................................................................77
12 Appendix A: Package parameters.................................................................................................79
12.1 Package parameters for setting up and running the package:...............................................79
12.2 Parameters influencing the set up of the target database......................................................80
12.3 Parameters influencing the migration process......................................................................87
12.4 Parameters used for splitting very large tables.....................................................................93
12.5 Parameters for Non-SAP systems.........................................................................................94


1 Introduction
The O2O migration package is intended to offer Oracle customers a fast, inexpensive and reliable method to migrate their Oracle databases, whether to switch to another operating system, to combine an Oracle software upgrade with a complete database reorganization, or simply to reorganize and shrink the database after extensive archiving activities.
Although O2O was originally designed for the special needs of SAP systems, it can meanwhile also be used for non-SAP systems. Within an SAP migration, O2O replaces the part of the R3Load tool that performs the database migration. Once the O2O migration is completed, you have to perform the same post-migration steps as for an R3Load migration.
The O2O package includes the following key features, among others not listed in this summary:
• Runs on all UNIX, Linux and Windows operating systems. Cross-platform migration between UNIX, Linux and Windows, 32-bit and 64-bit, is possible
• Direct database upgrade (8.1.7.4 and higher => 10g, 9.2.0.8 and higher => 11g) is possible as part of the migration
• Automated scheduling software to ease the migration
• Changing database user names incl. SAP schema name
• Migration to SAP single tablespace layout
• Creating an individual tablespace layout, also for legacy databases
• Automatic calculation of tablespace sizes
• Automatic generation of sub-directory structure for database data files
• Scripts for creating tablespaces, optional with autoextend feature to optimize database size
• Moving selected tables and indexes to individual tablespaces (defined in an Input-File)
• Excluding selected database users from migration (defined in an Input-File)
• Copying the table structure only for selected tables, e.g. to create size-reduced database copies
• Resumable transactions by default (job freeze 24h)
• All options can be combined in any order. The needed tablespace sizes on the target system
are calculated automatically
• Parallel copy method for large tables
• Import of external scripts to create customized migration scripts (partitioning)
• Simple LIST partitioning based on the column “MANDT” possible
• IOT-Support
• Enhanced partitioning support (sub partitioning, Global, Global partitioned and local
indexes)
• Support for DataPump export/import for tables having LOB or Long Raw datatypes
• Supports table sorting during migration
• Index compression on target system
• Table compression for 10g (BW systems only) and advanced compression for Oracle 11g on
target systems
• Transparent tablespace encryption (TDE)
• Enhanced migration checks based on row counts for each table on source and target system
All available features of the O2O migration package are controlled by parameters passed to a PL/SQL package executed on the source system.

2 Why using O2O


Very often the discussion starts with why to use O2O rather than transportable tablespaces, RMAN or R3Load for cross-platform migrations, or why not simply copy the files when you only change hardware (e.g. storage). From a purely technical point of view a lot of different solutions are possible, but O2O offers you the following advantages:
• When used for an SAP migration, you don't need to run any SAP pre-migration tasks on the
source system. There is no need to remove temporary SAP tables or to run special BW
reports to prepare the system for the migration.
• A complete database reorganization of tables and indexes. From past migrations we know that you can save up to 30% of space just by reorganizing your database.
• O2O is fast. With the appropriate hardware, more than 1 TB/h can be copied. Compared to a file system copy, O2O will often even be faster, because copying data files over a network is not particularly fast.
• You can use all the features a current database release offers, without the hazard of a complex reorganization of tablespaces
• You can easily introduce the newest Oracle database technologies, like index and table
compression or transparent data encryption for sensitive data.
• O2O only copies the table data and is therefore much faster than a normal copy, which also
has to copy UNDO and TEMP tablespace as well as the index data.
• You can optimize your new database to the current technology, reducing the number of
mount points and data files significantly.
• O2O is 100% risk free. Your source system is only read, so if the migration fails, you can
simply proceed with the old system.
• When copying only the data files (e.g. when using transportable tablespaces or a file system copy), your database arrives on the new system exactly as it is on the old system. You keep your database as it is now (e.g. small data files, a high number of mount points, fragmented tablespaces). If you ever want to clean up, you have to reorganize your database anyway.


3 O2O overview
The O2O method is designed to be executed in the following steps:
1. Choose the appropriate package version for your migration.
2. Prepare the source system to generate the migration scripts and get the activation key for the package
3. Define the package parameters for the first run.
4. Analysis and migration script generation on the source system. This includes adaptation and fine-tuning of the package parameters in further script generations.
5. Setup of the scheduling software on the target system
6. Creation of an empty target database with the scripts generated in step 4.
7. Execution of the migration either manually or with the provided migration scheduler
8. Post migration steps of the O2O package
9. Post migration steps within SAP (e.g. Printer setup, RFC destinations)
While steps 1 through 6 do not require downtime, the SAP or legacy application on the source system must be stopped for the real migration process (steps 7 - 9). Depending on the available hardware resources, a migration speed of more than 1 TB/h (measured in database growth on the target system) is possible. CPU and memory on the target system carry most of the load during the migration. For high-speed migrations you also need sufficient network bandwidth. Most important for a fast migration is the I/O subsystem: if it is inadequately configured or is a low-cost configuration with slow disks, the migration will take much longer to complete. In general, on a normally sized target system (4 CPUs, 16 GB RAM, SAN or NAS storage) the typical migration throughput is ~300 GB/h using the standard settings for the package.
The package can only be used if you have a valid activation key. This activation key must be created for the system on which you want to run the package.

4 O2O architecture overview

4.1 Used Oracle methods


The O2O method uses only standard Oracle methods that are part of the database engine. The methods used are also referred to as the "job classes" used for the migration.
• CTAS => (“Create Table As Select” command) for normal (SAP = transparent
tables)
• CTASLOB => (“Create Table As Select” command) for tables having a LOB column
(SAP = Cluster and Pool tables using a LOB data type)
• PL/SQL => Tables having a LONG or LONG RAW column (SAP = Cluster and
Pool tables using a LONG data type)
• EXP/ IMP => Tables below the size setting, regardless of the table type
• Datapump => Alternative method for CTAS and/or PL/SQL
A PL/SQL program loaded on the source system first separates the tables by size. Tables smaller than a predefined threshold are included in export/import scripts, regardless of the table definition. Tables larger than this threshold are migrated by separate scripts. For each table type found in the database, the PL/SQL program chooses the appropriate method to migrate that table. The basic architecture is shown in the next picture.

Picture 1: O2O basic architecture

The migration itself is executed by a scheduling program (called the scheduler), which runs all generated scripts in the correct order. The scheduler normally runs on the target system (it is also possible to execute it on a different host) and executes all scripts against the target system. Large tables are copied directly over the network, whereas the smaller tables are included in export/import files. Normally approx. 1% – 3% of the database size is transported with export/import. The threshold for choosing export/import or the direct path can be specified as a parameter for the package.
For all objects that have separate scripts, the package also generates the scripts to create the dependent objects (indexes, primary key constraints, foreign key constraints). The indexes are created using the parallel query (PQ) option of the database; the degree of parallelism is computed automatically to speed up index creation. Before executing scripts, the scheduler checks the available resources on the target system.
As part of the migration, the databases are compared at object level (existence of all migrated objects) and, in more detail, a row count is performed on the source and target database for each table.

5 O2O Migration Steps

5.1 Choose the correct package version


Normally you will use the latest package version to create your migration scripts. But as development goes on, newer Oracle database versions have been released that use new features within the database. For example, an Oracle 8i database neither has locally managed (LMTS) tablespaces nor knows about the enhanced partitioning features available in Oracle 9i or 10g. With the growing functionality, the Oracle data dictionary structures have also changed. So not all package versions are capable of running on older Oracle versions.
When installing the package on the source system, all tables and views accessed in the package are checked against the database at compile time. If a view or column is referenced that is unknown to the current database release, the package will fail to compile. In addition, many PL/SQL features needed for the script generation are not available in older Oracle releases. As a result, support for older Oracle releases is limited to the former package versions, but these older package versions still support the higher Oracle version on the target. For the direct migration path, a direct migration between two Oracle major releases is always supported (e.g. 8i => 10g, 9i => 11g). The older package versions will, however, lack the new functionality available in the higher versions.
Direct migration paths are supported for the following Oracle versions. In brackets you will find the package version supporting the Oracle source release:
8.1.7.4 => 10g (Package version 0.61x)
9.0.x => 10g (Package version 0.92x or version 1.x including Online capabilities)
9.2.0.8 => 11g (Package version 0.92x or version 1.x including Online capabilities)
10.2.0.4 => 11g (Package version 0.92x or version 1.x including Online capabilities)
Please note that the direct migration path is limited by the Oracle export/import utilities. The source version is distinguished by the Oracle release-specific data dictionary, which prevents a successful installation of some O2O package versions on the source system. Enhanced features (e.g. tablespace transformation, TDE and others) are only available as of package version 0.9x. The versions 0.92x and 1.x differ only in the additional Online capabilities (Triple-O) of the 1.0 package. Unless specified differently in your organization, you should use the latest package version, which not only includes the latest fixes for recognized problems but also new functionality to support new Oracle features (e.g. index compression, table compression, transparent data encryption), which are added to the package on demand.

5.2 Preparation of the source system


Before starting with the preparation of the source system, you have to clarify a couple of questions:
• How are files shared between the source and target system? Is it possible to use an NFS file system, or is it necessary to copy the files?
• Which methods can be used to connect between the source and target system?
• Is the customer planning to run the same Oracle version on the source and target system?


The package setup and the later migration depend on these decisions made at the beginning.

5.2.1 Oracle network configuration


To access the database from a different host it might be necessary to configure the listener process for an additional IP address.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.50)(PORT = 1521))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 32767)
      (ORACLE_HOME = /home/ora102/102_32)
      (SID_NAME = V102)
    )
  )

This is an example of a second IP address in the listener.ora file.


IMPORTANT HINT!!!!
When using multiple IP addresses, ensure that you have the same name resolution on all hosts. Keep in mind that when a process requests a connection, it sends the name for the desired connection to the listener. The listener performs a name resolution for the received name/IP address and sends this name back to the client software. If this name differs from the configured listener entry, the connection request will ultimately fail. Please use the utilities ping and tnsping for testing the connection. If both are successful, try to connect from the target system to the source system with the command
"sqlplus system@SID_OLD". If this connection works you can start with the migration; do not start before it works.
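A minimal connectivity check from the target host could look like the following sketch (the host name and the TNS alias V102_OLD are example values matching the tnsnames.ora example later in this document; adjust them to your environment):

ping sourcehost            # OS-level reachability of the source host (example name)
tnsping V102_OLD           # listener reachable via the TNS alias from tnsnames.ora
sqlplus system@V102_OLD    # full database logon over Oracle Net (prompts for the password)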

5.2.2 File system requirements


The O2O package uses the Oracle UTL_FILE capabilities to create and format the migration scripts. The location in the file system is specified with database-defined "directories" (Oracle version >= 9i), which map a logical Oracle directory name to a physical location in the file system. The directories in the file system must be readable and writable by the user under which the Oracle database software runs. For the O2O basic functionality I recommend creating a dedicated directory tree for all the directories needed (a shell sketch follows the list below). The recommended standard definition is:
• O2O => the O2O basic directory
• O2O/SQL => contains the package software, start scripts, key selection script
• O2O/LOG => is used for log information written by the O2O package
• O2O/OUTPUT => is used for the migration scripts
• O2O/MCP_OUT => is used for the migration scripts for MCP method
• O2O/INPUT => is used for input files, e.g. tablespace translation, user mapping
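As a sketch, assuming the O2O tree is placed under /oracle/P01 (the path used in the directory examples below; adjust it to your installation), the tree can be created as the oracle user like this:

cd /oracle/P01
mkdir -p O2O/SQL O2O/LOG O2O/OUTPUT O2O/MCP_OUT O2O/INPUT   # MCP_OUT is only needed for the MCP method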
By default the package uses the following Oracle directory names. You can use the listed package parameters to define your own Oracle directory names, if you want to use different ones.
Package parameter   Default Oracle directory name   File system destination
LOG_DIR             O2O_LOG                         LOG directory
OUTPUT_DIR          O2O_OUTPUT                      OUTPUT directory
INPUT_DIR           O2O_INPUT                       INPUT directory
MCP_DIR             O2O_MCP                         MCP output directory
DUMPFILE_DIR        O2O_OUTPUT                      Dumpfile directory

The directory “MCP_DIR” is optional. For a normal migration this directory doesn't need to be
specified. We will discuss this directory in detail, when the MCP method is described.
When creating the directories you have to calculate the amount of space needed for the
migration scripts as well as for the dumpfiles generated during the migration. You can calculate the
following space requirements for the migration:
• LOG directory: < 1 MB
• OUTPUT directory < 100 MB
• INPUT directory < 1 MB
• DUMPFILE_DIR > 10GB (depends on migration setup)
Note: You must create the directories for the LOG, OUTPUT and INPUT paths, otherwise the package run will abort with a run-time error.
Depending on the migration setup and the migration path, the Oracle export utility can run either on the target machine (preferred setup) or on the source machine. Independent of the setup, a significant amount of space is needed for the dumpfiles created by the export jobs. The amount of space needed for dumpfiles depends on the actual package parameters used. Normally you can expect up to 5% of the database size for the dumpfiles; for a 1 TB database, approx. 10 - 30 GB of dumpfiles (1% - 3% of the database size) will typically be created. In a standard installation the $ORACLE_HOME directory has only a few GB of free space, and it is also very dangerous to create the dumpfiles in the $ORACLE_HOME directory: very often the alert.log file of the database is created in the same file system, and if the file system becomes full, the database would crash.
To overcome this situation there are two possible solutions:
1. Create the O2O directory tree at a different location in the file system, e.g. an external file system on an NFS directory. Important note: Don't create the O2O_OUTPUT directory on an NFS share. We have seen major problems with the scheduling software when running on NFS shares.
2. To separate the OUTPUT directory, which contains all migration scripts, from the dumpfiles, you can use the package parameter "DUMPFILE_DIR" to store the dumpfiles on a different file system. Because only DataPump can make use of Oracle directories, you have to set up a symbolic link for the normal export jobs. Create the following link at the O2O directory level: ln -s <path to new dump directory> DUMP
Once the parameter "DUMPFILE_DIR" is specified, the export expects this directory or link at this location.
Note: The link name "../DUMP" is hardcoded in the package and can't be changed. So either a directory or a link with this name must be in place.
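As a sketch, assuming the dumpfiles should go to a hypothetical file system /dumparea (the path is only an example; the link name DUMP is fixed):

cd /oracle/P01/O2O
mkdir -p /dumparea/P01_dumps      # example target file system for the dumpfiles
ln -s /dumparea/P01_dumps DUMP    # the hardcoded link name expected by the export jobs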
To query existing directories run the following command:
set linesize 132
column owner heading "Owner" format a15
column directory_name heading "Directory" format a20
column directory_path heading "Path" format a95
select owner, directory_name, directory_path
from dba_directories order by owner, directory_name;
You should always query the existing directories before creating a new one.
To create a new directory run the command:
CREATE DIRECTORY O2O_OUTPUT AS '/oracle/P01/O2O/OUTPUT';
If you want to overwrite an existing directory, e.g. to specify a different file system path use:
CREATE OR REPLACE DIRECTORY O2O_OUTPUT AS '/oracle/P01/O2O/OUTPUT';
You have to create all Oracle directories before you can run the package. For a successful package run you need at least the OUTPUT, LOG and INPUT directories.
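For example, a minimal set of directory definitions matching the recommended file system tree (the paths assume /oracle/P01/O2O as used in the examples above; adjust them to your installation) could look like this:

CREATE OR REPLACE DIRECTORY O2O_LOG    AS '/oracle/P01/O2O/LOG';
CREATE OR REPLACE DIRECTORY O2O_OUTPUT AS '/oracle/P01/O2O/OUTPUT';
CREATE OR REPLACE DIRECTORY O2O_INPUT  AS '/oracle/P01/O2O/INPUT';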
The package will create the migration scripts in the OUTPUT directory, whereas the LOG directory
contains helpful information about the analysis process and findings during the package run. Each
package run will create its own logfile in the LOG directory. Please have a look into this logfile
first, if there are questions regarding the last package run. If there are problems with the package,
we always need the logfile generated at the package run.
The INPUT directory is used to configure the package run, if the enhanced package functionality is
used. This directory must exist, but can be empty. We will discuss the structure and meaning of the
different configuration files, when introducing enhanced package functionalities.
To identify a migration script set uniquely, each script has a time stamp in its name. This time stamp is used to identify the members of a migration script set. It is not recommended to mix scripts from different script sets (the scheduling software will also not run if there is more than one migration script set in one location). Data consistency is only guaranteed for scripts created within the same package run. When mixing different migration script sets, tables might either be missing from the migration scripts (because they were created later) or occur multiple times (because the table size has changed in the meantime).
Once the Oracle directories are in place, you can now create a file (sel_key.sql) containing the select commands to fetch the information needed to create a valid activation key for your system. Please note that the output of this script is case sensitive. The script must run on the database and the host you plan to use for the migration script generation.


set serveroutput on
set linesize 250
set heading off
column HEAD1 fold_after
column DBID fold_after
column NAME fold_after
column HEAD2 fold_before
column HEAD3 fold_before
select '>> Data needed to generate the ActivationKey:' as HEAD1,
'>> DBID = ', dbid,
'>> NAME = ', name,
'>> CreationDate = ', to_char(created, 'DD-MM-YYYY HH24:Mi:SS'),
'>> VersionTime = ' as HEAD2, to_char(version_time, 'DD-MM-YYYY HH24:Mi:SS'),
'>> Hostname = ' as HEAD3, host_name
from v$database, v$instance;
Note:
If you are using a cluster system and your scripts may be created on another cluster node, request a key for each node; the O2O fee includes the keys within a clustered environment. You can use the output from the SQL statement but replace the hostname with the appropriate one. Please note that Oracle always uses the physical hostname, not the virtual cluster node name.
If you are running a RAC environment, make sure the script runs on the same RAC instance and host on which you want to create the scripts. The activation key is only valid for a specific instance name.
If you create new control files for the database, the migration key also changes.
Because the keys must be created first, allow us some time (2 working days) for creating and sending the keys. Normally it will be faster, but we can't guarantee it.
For internal use there is a web interface available which allows you to generate migration keys:
http://wallapex.de.oracle.com:7777/pls/marvel/f?p=O2O
To use this link you need a valid account and you must have access to the internal Oracle network.
External customers can use the established e-mail method to request keys automatically. This access is granted on a project basis and allows you to request keys outside the Oracle network, around the clock, automatically.

6 Installing and executing the PL/SQL package

6.1 Installing the package software


The package software is shipped as a compressed tar file, e.g. "package_0948.tar.gz". Go to the O2O root directory. Decompress the file with the gunzip utility, then run the tar command to extract the archive (e.g. tar -xvf package_0948.tar). Move the files *.plb and changelog_*.txt to the SQL directory.
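As a sketch, using the example file name from above and the recommended O2O tree (the path /oracle/P01/O2O is an example):

cd /oracle/P01/O2O
gunzip package_0948.tar.gz        # decompress the shipped archive
tar -xvf package_0948.tar         # extracts *.plb, changelog_*.txt and INPUT.tar
mv *.plb changelog_*.txt SQL/     # move the package sources to the SQL directory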


Extract the file INPUT.tar with the tar utility. The files will be created automatically in the INPUT
directory. These files are templates for configuring the package as well as some pre-defined index
compression values.
The file changelog_xxx.txt contains a list of fixes and improvements available in the different package versions.
Note: On some UNIX systems (e.g. HP-UX) the gunzip command is not installed by default. In that case, extract the file on your laptop first, before copying the software.
Within the shipped tar-file you will find the following files:
• pack_reorg_xxx.plb
• pack_body_reorg_xxx.plb
• changelog_xxx.txt
• INPUT.tar
The PL/SQL package is shipped in a wrapped format and needs an activation key to run. The software is wrapped with Oracle 9i and therefore you need Oracle >= 9i to successfully load it into the database. To load the package into the database, execute both package files from sqlplus (connect / as sysdba):
• @pack_reorg_xxx.plb (Expected result: Package created.)
• @pack_body_reorg_xxx.plb (Expected result: Package body created.)
If there is any other message, especially something like
Warning: Package Body created with compilation errors.
it will not be possible to execute the package, because it could not be loaded into the database. In this case execute the command "show errors" and save the output for further clarification. Once the package is successfully loaded into the database, you are ready to create the migration scripts (if you have a valid activation key).

6.2 Defining PL/SQL package parameters


As soon as you have received the activation key, you can run the database analysis on the source
system and create the first set of migration scripts. To ease adapting the package parameters, I recommend creating a file in the SQL sub-directory (e.g. start_o2o.sql) containing the package execution command and the package parameters. The O2O functionality is stored in a PL/SQL package, which is called with the appropriate parameters to activate or deactivate specific functionality. There are three different parameter types:
1. Variables defined as characters or strings, which must be enclosed in single quotes <'>
2. Variables defined as numbers, which must not be delimited
3. Variables defined as Boolean type, which must be specified as TRUE or FALSE.


The different variables are separated by a <,> character. A variable value is assigned with "=>". Upper or lower case in the parameter names doesn't matter, but note that variable values can be case sensitive (e.g. when specifying path names). If the parameters are specified by name, the order of the parameters doesn't matter. All parameters have defined default values, so if you don't specify a variable yourself the predefined setting becomes active.
So correct variable definitions would be:
• ACTIVATION_KEY => '1234567890' (String)
• MAX_SIZE => 200 (Number)
• SAP => TRUE (Boolean)
Normally all variables must be specified within a single line (no CR/LF). To overcome this limitation you can use the following syntax, listing each variable on a separate line, which makes it easier to modify the package parameters:
Begin
reorg.reorg_start(act_key => 'Your Activation-Key'
, schema => 'saprac'
, pw => 'sap'
, system_pw => 'manager'
, check_lr => true
, sap => true
, sapdata => 4
, db_file_size => 20000
, opt => true
, max_size => 200
, db_link => 'V102_OLD'
, use_autoextend => true
, rexp => true
, index_compression => false);
End;
/
The <Begin – End> clause delimits the PL/SQL block. The command "reorg.reorg_start" calls the start procedure of the package. All parameters are then listed on separate lines within the brackets, separated by a <,> sign. If you don't want to use a variable any more, you can comment the line out with two dashes "--". The last parameter must be followed by the ");" string.
To execute the package with the parameters listed above, run the script from the sqlplus command line. You must be connected as a "sysdba" user (sqlplus / as sysdba) to run the package.
SQL> @start_o2o.sql

Note: You will find a complete list of the package parameters and their default values at the end of
this document. You can also get an overview of the current package parameters by calling the package with the command:
set linesize 160
set serveroutput on
exec reorg.reorg_start (show_help => true)
This functionality will list all package parameters, their default values and a short description
ordered by their functionality. This feature doesn't require an activation key.


7 Setting up the target system


Setting up the target system is much the same as setting up the source system. We recommend using the same directory setup as on the source system. Please remember the recommendations regarding the needed file system sizes: when running the migration, a lot of logfiles will be created, so at least 2 GB of free space is needed in the file system where the scheduler is running. The directories must belong to the oracle user running the database. Besides the file system setup, you need a working Oracle installation on that host. For SAP systems the recommended method is to perform a normal SAP installation, to ensure the SAP environment is set up correctly.
IMPORTANT HINT!!!!!
Please verify the setting of the environment variable NLS_LANG on the target system, which must match the character set of the source database. In SAP, all new databases are created with the character set UTF8. This character set is used for Unicode systems, whereas the default setting for non-Unicode systems is either ASCII (7-bit character set, very old SAP systems installed before 4.0) or WE8DEC (all non-Unicode systems installed with SAP >= 4.0).
If NLS_LANG is not specified correctly, Oracle will perform a character set conversion during the migration, which can lead to data corruption or data loss. The scheduler checks the NLS_LANG settings of the system and will not run as long as the NLS_LANG settings are not corrected.
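A quick check could look like the following sketch (WE8DEC is only an example value for a non-Unicode system; take the actual character set from the source database):

-- on the source database: determine the database character set
select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

# on the target host, in the shell of the oracle user, set NLS_LANG accordingly, e.g.:
export NLS_LANG=AMERICAN_AMERICA.WE8DEC
echo $NLS_LANG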

7.1 Setting up tnsnames


For O2O we use the Oracle Net functionality to connect to the source system and read tables and data from the database. To enable this connectivity you have to specify the database connection string in the file tnsnames.ora. The value of the package parameter "DB_LINK" must be used as the name of the connection in tnsnames.ora. You can use the utility "tnsping" to check the availability of the Oracle listener on the source side.
V112.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = localhost)(Port = 1521))
    )
    (CONNECT_DATA = (SID = V112)(SERVER = DEDICATED))
  )

V102_OLD.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = 192.168.0.50)(Port = 1521))
    )
    (CONNECT_DATA = (SID = V102)(SERVER = DEDICATED))
  )

Note: tnsping doesn't perform a database connect, but only checks the availability of the listener process on the source machine. So even if tnsping works, a subsequent connection attempt with sqlplus may still fail because of a wrong host name resolution.


7.2 Setting up the scheduler


Setting up the scheduling software is pretty simple. The scheduler is shipped as a compressed self-extracting file, e.g. "V2.8_scheduler_26Oct2010.run.gz". Log on as the oracle user running the migration and copy the scheduler file to the O2O root directory.
First decompress the file with the command gunzip V2.8_scheduler_26Oct2010.run.gz
Note: On some UNIX systems (e.g. HP-UX) the gunzip command is not installed by default. In that case, extract the file on a different machine first, before copying it.
After this, run the resulting file with "sh V2.8_scheduler_26Oct2010.run". The extraction will create all needed sub-directories and copy the scheduler files to the proper locations.

7.3 Database parameters for the migration


For the migration we need different database parameters than for normal database usage. Modifying these parameters will increase the migration throughput significantly, so you should create a dedicated init.ora file for the migration (a sample parameter file is sketched after the list below). The sizing of all parameters depends on the available system resources, so the correct setting is system-dependent. In general you can use the following rules:
• Use only a small database cache "DB_CACHE_SIZE" (2 – 5 GB). Most of the migration methods use direct I/O operations on the file system, bypassing the buffer cache. The only operations using the database buffer cache are import and PL/SQL scripts. For these operations you don't need a large buffer cache, because during the migration only insert operations on completely empty tables are performed, so no read operations are needed on the file system.
• Set the shared pool to a size of 1 GB. We don't have to cache SQL statements.
• Set the PGA area as large as possible (“PGA_AGGREGATE_TARGET”). Set the value
approx. 3-4 times higher than you really can use.
• Set “WORKAREA_SIZE_POLICY = AUTO”
• Set “PARALLEL_MAX_SERVERS” to a reasonable value (80 – 512). These processes are
used for parallel index creation and MCP method
• Set “PARALLEL_EXECUTION_MESSAGE_SIZE” to 16384
• Specify a LOG_BUFFER of 3 MB – 20 MB. This can be helpful when running a large number of PL/SQL jobs: these jobs perform bulk inserts, which need a lot of space in the log buffer.
• Specify a higher number of “DB_WRITER_PROCESSES” (4)
• Increase the parameter “PROCESSES” and “SESSIONS” by the number of
PARALLEL_MAX_SERVERS
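As an illustration, a migration-only parameter file following these rules might look like the sketch below. All values are examples for a mid-sized target host and must be adapted to the available resources; this is not a tuned recommendation:

# init<SID>.ora - example settings for the migration only (illustrative values)
db_cache_size                   = 4G
shared_pool_size                = 1G
pga_aggregate_target            = 20G
workarea_size_policy            = AUTO
parallel_max_servers            = 128
parallel_execution_message_size = 16384
log_buffer                      = 10485760   # ~10 MB
db_writer_processes             = 4
processes                       = 500        # increased by parallel_max_servers
sessions                        = 600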


7.4 Migration scenario: Using remote export


In the following chapters we explain the usage of the scheduler. The general scheduler handling does not differ between the migration scenarios, so you can use the commands explained in the next chapters also for the other migration approaches described later.

7.5 Analysis and migration script generation


We will use the example from the previous section to create a very first set of scripts for a migration. In this example we assume that the source and target database have the same version, so that we can run the export utility on the target host.
The intention is to get scripts very quickly, so that at least the creation of the target database can be started. Within O2O, the database preparation phase (creation of the target database, tablespaces, users etc.) can be separated from the real migration, so you don't need the same script set for the database setup and the migration. You can even use your own database scripts for creating the target database.
Note: When migrating SAP systems you don't need to run the typical SAP pre-migration tasks like deleting temporary SAP tables. The O2O method accesses only the Oracle data dictionary; therefore any SAP DD inconsistencies are copied as they are to the new system.
IMPORTANT HINT!!!!!
Do NOT run SAP pre-migration tasks after generating the migration scripts, because these tasks will delete tables, causing migration scripts to fail!
If you are using table or tablespace mapping during the migration, you must use the same configuration (e.g. same tablespace and/or table mapping) for each run. Otherwise the migration scripts might try to assign tables to non-existing tablespaces, or the calculated tablespace sizes might be too small.
So for the very first run we will use a parameter setting which allows us to run the analysis very fast. This is helpful if you have to evaluate some key parameters (max_size). First, some explanation of the script parameters used; for a full list of all parameters refer to the list at the end of this document. We will use the following parameter settings for our first migration:
Begin
reorg.reorg_start(act_key => 'Your Activation-Key'
, schema => 'saprac'
, pw => 'sap'
, login_user => 'system'
, login_pw=> 'manager'
, check_lr => true
, sap => true
, sapdata => 4
, db_file_size => 20000
, opt => true
, max_size => 200
, db_link => 'V102_OLD'
, use_autoextend => true
, rexp => true
, index_compression => false);
End;
/
• ACT_KEY The needed activation key
• SCHEMA The SAP schema user. This name must be specified to allow the
package to use the correct assumptions needed for some functionality
• PW The password for the SAP user on the source system.
• PW_SYSTEM The password for the system account on the source system. This password is used for the database link to connect to the source system using a user account with enough permissions to access all tables on the system.
• CHECK_LR Is used to force a length check on the SAP cluster tables. Although it must be specified with "TRUE" at least once, the check is not helpful for a fast run, so we skip it here.
Note: The LONG type used in the PL/SQL programs to migrate SAP cluster and pool tables can only store up to 32760 bytes. If there is more data, it will be truncated. Whereas in SAP systems we have the possibility to perform a special length check on the LONG or LONG RAW field, we don't have this for other applications. Inside the Oracle database there is no method to estimate the amount of data stored in a LONG or LONG RAW field, so when migrating a non-SAP system that uses tables with this data type, these tables are migrated with export/import by default. With the package parameter "SPECIFY_PLSQL_TABLE" you can force the package to create PL/SQL scripts for the listed tables instead. For SAP systems this check is executed automatically if the parameter "CHECK_LR => TRUE" is specified.
IMPORTANT HINT!!!!
For non-SAP systems you must be absolutely sure that the data stored in the LONG or LONG RAW field does not exceed a length of 32760 bytes. Copying data longer than 32760 bytes will result in a corrupted target system, because within PL/SQL data beyond the 32760-byte boundary is truncated silently.
• SAP Specifies whether the package is used for an SAP system or not. You must set this parameter according to the system you are working with.
• SAPDATA The number of sapdata directories or mount points used. The package assumes that all underlying file systems have the same size. All data files will be distributed evenly across all available sapdata directories. The package doesn't distinguish between table, index, undo or temporary data; the algorithm within the package is designed to store the same amount of data in all sub-directories.
Note: A difference of the size of one data file between the sapdata directories is possible.
• DB_FILE_SIZE The maximum size of the data files used for the database. If a tablespace is smaller than the data file size, its data files are created smaller; otherwise the package uses the maximum size to create the tablespaces. Please note: data file sizes are rounded up internally, so it might happen that a well-reorganized system becomes larger on the target than it is on the source.
• OPT If set to "TRUE", the file system layout of the database is optimized with respect to the number of data files and mount points. The tablespace layout itself remains unchanged. If set to "FALSE", the original layout is copied without any modification.
• MAX_SIZE This parameter is the most critical with respect to the migration process. It determines the size (in MiB) a table must have to be migrated with separate reorganization scripts. If the table is smaller, it will be migrated with export/import.
• DB_LINK This parameter specifies the database link name which is used to
connect from the target system to the source system. The link will be
created with this name as part of the migration setup. Even if there is
a database link already, you can't omit this parameter.
• USE_AUTOEXTEND The data files of the target database are created with an initial size of
1 GB, having a maximum size as specified in the parameter
DB_FILE_SIZE. Using the autoextend feature offers two advantages:
The target database is created very fast, because each data file is
created with 1GB only. Secondly, the target database will be created
in the optimal size, without wasting space on the file system.
• REXP Where the export utility can run depends on the Oracle versions installed on the source and target system. If source and target are running different releases, you must use the export of the lower release to export the data from the source system. This parameter specifies where the export utility runs from a database perspective: if REXP is set to true, the export is started on the target system, connecting remotely to the source system; if it is set to false, the export is started on the source system, running locally against the source database.
• INDEX_COMPRESSION
By default this parameter is set to "TRUE". For the first runs, when estimating the appropriate package parameters, I recommend setting it to "FALSE". Calculating the correct index compression rate increases the first package runtime significantly, because all indexes of the big tables must be analyzed. You should activate this feature later, when all the other key parameters have been determined, running that first analysis overnight.
Note: The package tries to read from the INPUT directory two
different files: The 1st file is “sap_indexes.conf”. This file contains
SAP predefined compression definitions for indexes ( as described in

SAP note #1109743). For these indexes no checks are executed on
the database, which saves time when the index analysis is running.
The file has the structure INDEX_NAME, COMPRESSION. The
index is automatically assigned to the SAP schema user. This applies
only if the parameter SAP => TRUE is specified.
The 2nd file is “indexes.conf”, which allows one to specify index
compression definitions for any index in the database. The structure
of this file is OWNER, INDEX_NAME, COMPRESSION.
If a file is not found, this is logged in the package log file.
The calculation results are stored in the table O2O_INDEX, so
consecutive runs will read this table instead of recalculating the
indexes once again, shortening all further runs significantly. If you
stop a package run while the index calculation is active, the package
will restart at the index where it was interrupted. Previously executed
index calculations can therefore easily be transported between systems:
once an index calculation has been executed, export the table O2O_INDEX,
re-import it on the new system and update the owner of the indexes (if the
index owners differ between the systems) with the command
update o2o_index set owner = 'NEW' where owner = 'OLD'
(a small transport sketch follows below).
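To illustrate this transport, the following sketch uses the classic exp/imp utilities. The schema owning O2O_INDEX, the passwords and the old/new owner names are placeholders and must be adapted to your systems:

# on the system where the index calculation was executed
exp system/<password> tables=<o2o_schema>.O2O_INDEX file=o2o_index.dmp log=o2o_index_exp.log

# on the new system
imp system/<password> fromuser=<o2o_schema> touser=<o2o_schema> file=o2o_index.dmp log=o2o_index_imp.log

# adjust the index owner if it differs between the systems
sqlplus <o2o_schema>/<password> <<EOF
update o2o_index set owner = 'NEW' where owner = 'OLD';
commit;
exit
EOF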
We can now start to create the very first migration script set. The goal of the first runs should be to
get the best parameter setting for the package and therefore for the migration. A package run takes
normally between 15 minutes and 1 hour. The runtime depends heavily on the current system load,
the IO performance of the system and the current CPU load. The package will run faster on ERP
systems than on BW systems; the number of partitioned indexes and the checks needed for these
objects increase the package runtime.
So let's start with the space calculation needed for the target system. The package calculates the
space for the target system by looking, for each tablespace, at the space allocated for database
objects and subtracting the free (not allocated) space of that tablespace. This is a fast but rough
estimation of the needed space. It will be misleading if you have archived or deleted a large amount of
data, because this is not covered by the method described above. For a more exact calculation you
can set the package parameter “CHK_DBA_TABLES”, which will calculate the space requirements
for each table based on the table statistics. Another possibility is to use the autoextend feature of the
database: creating each data file with only a minimum size will end up with a perfectly fitting
database size after the migration. Use the first test migration to estimate the final resulting database
size. This is the easiest way to get this information.
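If you want to cross-check the package's rough estimation yourself, a similar figure can be obtained from the standard dictionary views (a sketch; run it as a DBA user on the source database):

-- allocated space minus free space per tablespace, in MB
select df.tablespace_name,
       round(df.bytes / 1024 / 1024) as allocated_mb,
       round(nvl(fs.bytes, 0) / 1024 / 1024) as free_mb,
       round((df.bytes - nvl(fs.bytes, 0)) / 1024 / 1024) as used_mb
from  (select tablespace_name, sum(bytes) bytes from dba_data_files group by tablespace_name) df,
      (select tablespace_name, sum(bytes) bytes from dba_free_space group by tablespace_name) fs
where df.tablespace_name = fs.tablespace_name (+)
order by df.tablespace_name;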
Next we have to find the right setting for the parameter MAX_SIZE. This parameter determines how
many tables (and therefore how much data) are copied by single scripts. Maybe you are
wondering why we don't copy all tables with single scripts but still use export/import, when
everybody knows that export/import is very slow. It's the number of tables which is the problem. A
SAP ERP 6 system has more than 80.000 tables, most of them containing only a few rows. For each
table we would create at least five scripts, each having a shell and a SQL part, which doubles the
number once again. So for 80.000 tables we would end up with more than 800.000 files. Beside the
problems with the file system (can you store 800.000 files in a sub-directory?), there is a problem
with the scheduling: how do you handle 800.000 files in a scheduling software? Furthermore, when you
have to start a couple of scripts per table, you need a minimum table size to compensate for the scheduling

overhead. So we end up with an architecture that handles larger tables with separate scripts and the
smaller ones with export/import.
Last but not least, the export/import scripts are normally not the time-critical part of a migration. They
run smoothly in the background and normally finish in time without a problem.
To find a good balance between export/import and single table scripts we can use the logfile
generated by the package. The file is located in the package LOG directory and is named
<SID>_trans_<timestamp>.log
Beside the information about the package run you will find at the end of the file a summary with the
number of tables copied by separate scripts, the amount of data handled by export/import and the
percentage of this data in relation to the space allocated by tables in the database.
As a rule of thumb you can use two key figures:
1. The amount of data handled by export/import should not be larger than 5% of the total table size.
So for a 1 TB system not more than 50 GB should be migrated by export/import. (You can
cross-check a MAX_SIZE candidate on the source database with the query sketch shown after
this list.)
2. The number of tables handled by separate scripts should not be more than 1000. This
is not a hard limit, but if the number of scripts is too large, the scheduling of
these scripts becomes inefficient. The number of scripts which can be handled also
depends on the system resources available to the scheduling software. From
our experience 1000 – 1500 scripts are a good starting point.
3. You will find recommendations about the parameter setting for MAX_SIZE at the end of the
transition logfile.
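The following sketch sums up, directly on the source database, what a MAX_SIZE candidate of 100 MiB would mean. The schema name SAPR3 is only an example, and partitioned and LOB segments are ignored here, so the numbers are approximate:

-- data (MB) that would be handled by export/import with MAX_SIZE => 100
select count(*) as small_tables,
       round(sum(bytes) / 1024 / 1024) as exp_imp_mb
from dba_segments
where owner = 'SAPR3'
  and segment_type = 'TABLE'
  and bytes < 100 * 1024 * 1024;

-- number of tables that would get separate single table scripts
select count(*) as single_table_scripts
from dba_segments
where owner = 'SAPR3'
  and segment_type = 'TABLE'
  and bytes >= 100 * 1024 * 1024;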
For larger databases it's very often not possible to follow both recommendations: either the number
of scripts is too high, or too much data is included in the export/import. A guideline in this situation
can be:
• If the source system is a slow machine (old CPUs), increase the number of single table
scripts and reduce the amount of data in the export/import scripts. Export is a very CPU
intensive operation, so slow source machines can significantly slow down the exports.
• If your system is running on fast CPUs you can also increase the number of single table
scripts.
A good starting point is a parameter setting of 100 for MAX_SIZE. On larger systems values
of 25 are also not uncommon. When drilling down, you should halve the previous setting in each
run until you have found the right setting for this database.
After the end of a package run please check the completeness of the script generation. If the
package fails you will normally see an error message like:
ERROR at line 1:
ORA-29282: invalid file ID
ORA-06512: at "SYS.UTL_FILE", line 154
ORA-06512: at "SYS.UTL_FILE", line 790
ORA-06512: at "SYS.REORG", line 1541
ORA-06512: at line 2
If the package ends with an error message, please send the complete message, as well as the
package parameters used and the resulting log file, to the O2O development.

Before sending the log file make sure that you have left the sqlplus session from which you started the
package. If the script generation was successful, you will see at the end of the log file the message:

#################################################################
Summary for package run:
Space allocated by tables: 15410MB
Transferred by exp/imp: 3406MB
Number of table scripts: 137
More than 5% of the table data (23%) is copied with export/ import
You should reduce the amount of data copied with export
to < 5% (~ 771MB) by reducing the value of the MAX_SIZE (10) parameter
You should reduce the amount of data copied with export to < 5% by reducing the value of the MAX_SIZE
(10) parameter
Recommendation: Generate a new script set with a setting of MAX_SIZE => 2

Finished on:...................................................................................13:44:34 02-Nov-2010

7.6 Migration architecture


Once you have generated a complete set of migration scripts successfully, you can use these scripts
to start the migration. For this you have to copy the complete contents of the O2O/OUTPUT
directory to the target O2O/OUTPUT directory. Make sure that the target directory is empty before
copying the scripts. Only one script set is allowed in this directory when the scheduling
software is started.
If you have an NFS share available you can run the scheduler directly in this directory. Make
sure that the user used for the migration has write permissions on this directory.
At the beginning some remarks about planning and running a migration. A migration does not
always have to be as fast as possible! It's more important to use the migration time effectively and to
finish your migration in time, that is, within the time frame you've got from the customer. You can save
a lot of time and test migrations if you refrain from “optimizing” a migration which would already
finish in time.
The next big beginner mistake is to start the export/import scripts as soon as possible; most
people will run as many exports as possible at the beginning. Please remember that it's sufficient if
ALL jobs finish in time. So it makes no difference for the success of the migration if the export
scripts complete only at the end of the migration. You also have to take into account that export jobs
put a very high CPU and IO load on the source system. Running too many exports in
parallel will reduce the overall migration throughput.
Beside the single migration scripts the PL/SQL package creates a “migration map” file for the
current migration. This migration map file is named <SID>_trans_<date>.ctl (referred to in this
document as trans.ctl). It contains a metadata description of the current migration. The structure
for reading the file is:
1. The migration is divided into different [PHASE] partitions. A new phase can only be started if
all scripts within the previous [PHASE] partition have completed successfully
2. Within a partition single scripts must be executed in the listed order
3. If there are [SECTION] subpartitions within a [PHASE] partition, these [SECTION] subpartitions can
be executed in parallel.

4. Within a subpartition scripts must be executed in the given order. Scripts from a single
[SECTION] subpartition can't be executed in parallel
So for our test migration the transition control file looks like:
Transition Control File
Created for transition with timestamp 101102_152047 0 stephan 1.000 V102 SAP V112 OFFLINE WE8DEC

[PHASE] PREPARE TRANSITION


"Create sub-directories" V102_cr_dir_101102_152047.sh V102_cr_dir_101102_152047.log ToDo 000000
V102_cr_dir_101102_152047_undo.sh
"Create database" V102_cr_db_101102_152047.sh V102_cr_db_101102_152047.log ToDo 000000 NONE
"Create database DD" V102_cr_dd_101102_152047.sh V102_cr_dd_101102_152047.log ToDo 000000 NONE
"Create system user environment" V102_pupbld_101102_152047.sh V102_pupbld_101102_152047.log ToDo
000000 NONE
"Create Tablespacsid_targetes" V102_cr_tbs_101102_152047.sh V102_cr_tbs_101102_152047.log ToDo
000000 NONE
"Create database migration link" V102_cr_link_101102_152047.sh V102_cr_link_101102_152047.log ToDo
000000 V102_cr_link_101102_152047_undo.sh
"Create users and roles" V102_cr_usr_101102_152047.sh V102_cr_usr_101102_152047.log ToDo 00000
V102_cr_usr_101102_152047_undo.sh
"Create transition privileges" V102_cr_pmig_101102_152047.sh V102_cr_pmig_101102_152047.log ToDo
00000 NONE
"Create database links" V102_cr_dblink_101102_152047.sh V102_cr_dblink_101102_152047.log ToDo 00000
V102_cr_dblink_101102_152047_undo.sh
[PHASE] PERFORM TRANSITION
[SECTION] EXPORT CONTROL
"Running exports" V102_exp_101102_152047.ctl ToDo 000000 NONE
[SECTION] IMPORT CONTROL
"Running imports" V102_imp_101102_152047.ctl ToDo 000000 NONE
[SECTION] CTASPART_/PE1/DB_ITEM 1867M
"Create table" SAPRAC_ctas_part_1PE11DB....sh SAPRAC_ctas_part_1PE11D....log ToDo 000000 NONE 1867M
"Create index" SAPRAC_ind_1....sh SAPRAC_ind_1PE11DBITEM30_1....log ToDo 000000 NONE 93 M

In the file header you find information about the current settings of the source system the
scripts were created on. This information is used by the scheduler to determine the runtime
environment.
The migration starts with the [PHASE] PREPARE TRANSITION partition. Because there are
no [SECTION] subpartitions listed, all scripts must be executed in the listed order.
The [PHASE] PERFORM TRANSITION can only be started if all scripts from the [PHASE] PREPARE
TRANSITION have completed successfully. Within the [PHASE] PERFORM TRANSITION you find
[SECTION] subpartitions for export, import and single tables. (The table and index names are
truncated in the example to fit into the document.)
For each migration script a logfile is listed, to which the output of the script execution is
written. For some scripts a special “UNDO” script is generated as well (PL/SQL scripts and import).
If there is no undo script you will find “NONE” at this position, otherwise a native script name. For
a given object all scripts have the same name, differing only in the file extension.
So theoretically it is possible to run a migration completely manually, using the transition control
file as the “road map” for the migration.
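For illustration only, a minimal sketch of such a manual run of the preparation phase could execute the shell scripts listed in the [PHASE] PREPARE TRANSITION partition one after the other, stopping at the first failure. The script names are taken from the example above; in practice the scheduler should do this for you:

for script in V102_cr_dir_101102_152047.sh V102_cr_db_101102_152047.sh \
              V102_cr_dd_101102_152047.sh V102_pupbld_101102_152047.sh \
              V102_cr_tbs_101102_152047.sh V102_cr_link_101102_152047.sh \
              V102_cr_usr_101102_152047.sh V102_cr_pmig_101102_152047.sh \
              V102_cr_dblink_101102_152047.sh
do
    # each script writes its own logfile with the same name and the extension .log
    sh ./$script || { echo "$script failed - check ${script%.sh}.log"; break; }
done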

7.7 Scheduling software


The O2O package will create a large number of scripts. Even for a small SAP system (25.000
tables, 32.000 indexes) approx. 4000 scripts will be generated. This number differs between
systems and also depends on the current package parameters. Approximately half of the created
scripts must actually be executed. Because it's simply not possible to run all these scripts error-free
manually, an automated program for running the migration is

shipped as part of the O2O migration solution. This program, named “scheduler”, is written in ksh shell
syntax. The ksh shell is available on all UNIX/Linux systems. For Windows there is an emulation
available named “Cygwin”. From our experience this emulation is not very stable under some
circumstances (high system load, IO intensive operations), so it's not a recommended solution.
Using a ksh shell program offers the advantage of running the migration within a normal terminal
window, without the need for a GUI interface. There are also no other software dependencies
(e.g. installing a special Java version) for running the migration.
The scheduler will:
• Perform health checks in respect to the existence of all scripts and the correct setup of the
environment
• Ensure that there is only one migration script set
• Run all scripts in the correct order
• Consider dependencies between scripts (e.g. an index can only be created after the table has been
created; an import can only be started once the corresponding export is finished)
• Execute checks on all scripts and log files to verify error-free execution
• Execute checks on the target system for possible database problems (e.g. suspended jobs,
number of available PQ server processes)
The scheduler process itself runs in the background. So once the scheduler has been started, you can
disconnect from the system while the scheduling is still running in the background. You can
reconnect to a running migration at any time, with a new session to the server using the same user
account. Once you are in the O2O/OUTPUT directory you can use the scheduler command to
control the running migration.
The scheduler contains functionality to restart or skip failed jobs, if necessary. Although the
scheduling software ensures that all migration scripts are executed correctly, it's up to you to
make the correct decisions on how to proceed if there are errors during the script execution.

Scheduler runs the scripts, but the migration is your responsibility!


The scheduler will always recognize the current migration phase and will start only the appropriate
scripts. But with wrong decisions and commands it's possible to “finish” even an unsuccessful
migration, which ends up in an unusable target system.
IMPORTANT HINT!!!!!
The scheduler controls a running migration through a set of files ending in “.ctl” in
the O2O/OUTPUT directory. Don't modify these files if you don't know what you are
doing!
If you have to modify the scheduler specific control files, ensure that no
scheduler is running anymore. Otherwise you can run into the situation that the migration control
files become inconsistent. This could end up in an incomplete migration and a lot of manual
work (running all scripts manually). If there is the need to manually change the scheduler
control files, execute the following steps:

1. Stop the scheduler with the command “../bin/scheduler -k”


2. Ensure that the scheduler is really stopped
3. Make a safe copy of the file
4. Modify the file
5. Restart the scheduler with the command “../bin/scheduler”
Normally there is no need to change the scheduler control files manually. We strongly
recommend changing these files only with advice from O2O development.
To give you an overview of the migration we use the predefined scripts in our environment for this
very first run. We will discuss the different scheduler parameters and their meaning in detail
later.
Note: During the migration the scheduler is always started from the O2O/OUTPUT directory with
the command ../bin/scheduler [options]
You can get a complete list of the available scheduler options with the command ../bin/scheduler -h

7.8 Migration script conversion


The first step is to convert the migration scripts into a format which can be used by the scheduler for
the migration. For this the migration control files created by the package must be converted into a
scheduler specific format. To start the conversion invoke the command “../bin/scheduler -C” within
the O2O/OUTPUT directory.
The conversion can take some time, depending on the number of migration scripts. On slow
systems the conversion can take up to 20 minutes. The scheduler prints out feedback about the
progress of the conversion. Once the conversion is completed you are ready to start with the
migration.
As part of the conversion the scheduler asks you two questions:
ora112@stephan:~/O2O/OUTPUT> ../bin/scheduler -C

02Nov2010 scheduler PID: 10711 running on host: stephan


16:28:24 info: Start of scheduler - commandline used: ../bin/scheduler -C
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

16:28:24 read: Existing transition controlfile will be converted to scheduler specific format.
Is your "../etc/scheduler_conf.txt" correctly setup for
- remote/local export and/or file copy operations?
- reasonable values for parameter RAW_TAB_BOOST and LOB_TAB_BOOST? {Yes|No} yes

16:28:32 info: Reading ../etc/scheduler_conf.txt...

16:28:32 info: Raw table priority will be boosted by factor 3 (RAW_TAB_BOOST=3)


Lob table priority will be boosted by factor 6 (LOB_TAB_BOOST=6)

Answer these questions for our first migration with “Yes”


Then you will be prompted for the usage of “brconnect” (SAP only) for calculating new statistics
after the migration:
16:28:32 read: Do you plan to run brconnect within POST phase instead of importing whole schema
statistics? {Yes|No} no

16:28:41 info: Your choice: NO --> no brconnect run during POST phase.
stat jobs during PERFORM phase will be disabled and
statschema job will be run during POST phase.

Answer this question with “no”


Note: On databases having a large number of tables and indexes it is faster to use the SAP
tool “brconnect” to create statistics only for those tables having none, instead of running a
complete import of the statistics for the whole schema. This option is available only for SAP
systems.
After this the scheduler will start to convert the different migration files and to create the migration
specific control files.
16:28:41 info: Creating V102_prep_101102_152047.ctl...

16:28:41 info: Creating V102_tab_101102_152047.ctl...

16:28:41 info: Creating V102_idx_101102_152047.ctl...

16:28:41 info: Creating V102_stat_101102_152047.ctl...

16:28:41 info: Creating V102_verisrc_101102_152047.ctl...

16:28:41 info: Creating V102_veritrg_101102_152047.ctl...

16:28:42 info: ...working on table number 50 / 136, elapsed sofar 00:00:01, remaining 00:00:01

16:28:43 info: ...working on table number 100 / 136, elapsed sofar 00:00:02, remaining 00:00:00

16:28:44 info: Creating V102_check_101102_152047.ctl...

16:28:44 info: Creating V102_post_101102_152047.ctl...

16:28:44 info: Creating V102_final_101102_152047.ctl...

16:28:44 info: Working on V102_tab_101102_152047.ctl...

16:28:45 info: Working on V102_idx_101102_152047.ctl...

16:28:46 info: Working on V102_verisrc_101102_152047.ctl...

16:28:46 info: Working on V102_veritrg_101102_152047.ctl...

16:28:47 info: Working on V102_stat_101102_152047.ctl...

16:28:47 info: Working on V102_exp_101102_152047.ctl...

16:28:47 info: Working on V102_imp_101102_152047.ctl...

16:28:48 info: Scheduler not configured to copy remote export dumpfiles (see
../etc/scheduler_conf.txt)
or O2O parameter REXP=>TRUE was used on O2O script generation
==> no V102_cop_101102_152047.ctl needed/created!

16:28:48 info: Creating scheduler monitoring base...

16:28:48 info: New transition controlfile plus dependant files are generated.

16:28:48 info: Now changing permissions for all shell scripts to 755...

16:28:48 info: You may perform consistency checks on controlfiles and scripts through
../bin/scheduler -c
(found ~1935 files, this can take some time...)

16:28:48 info: You may perform connectivity tests (tnsping, rsh, rcp, ...) through
../bin/scheduler -t

16:28:48 info: Conversion phase completed!


[scheduler exiting]

Note: As part of the conversion the original trans.ctl file is renamed to trans.ctl.original. The
new trans.ctl file is in scheduler specific format and no longer lists the single tasks but
only the main steps of the migration.
The migration script conversion is a one-time task and must be run at the beginning of each migration.
If you try to start a migration without valid scheduler control files, you will get an error message
from the scheduler:
ora112@stephan:~/O2O/OUTPUT> ../bin/scheduler

02Nov2010 scheduler PID: 10098 running on host: stephan


15:28:10 info: Start of scheduler - commandline used: ../bin/scheduler
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

15:28:10 warning: Transition controlfile V102_trans_101102_152047.ctl is in O2O format,


we will need scheduler specific format.
Please, convert the trans ctlfile via "../bin/scheduler -C"
[scheduler exiting]
ora112@stephan:~/O2O/OUTPUT>

In this case run the scheduler with the option -C to start the conversion step first.
If there is more than one transition control file in the O2O/OUTPUT directory the scheduler will
stop processing with the message:
../bin/scheduler -C

04Nov2010 scheduler PID: 24574 running on host: stephan


11:29:06 info: Start of scheduler - commandline used: ../bin/scheduler -C
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

11:29:06 fatal: transition controlfile not unique:


V102_trans_101103_161947.ctl
V102_trans_101104_104456.ctl
[scheduler exiting]

You have to delete one migration script set from the O2O/OUTPUT directory.

7.9 Creation of the target database


The migration is split into four major steps: preparation, migration, check and post-migration tasks.
The O2O package will create all scripts needed for these steps. Normally all scripts used for a
migration must be from the same package run. This applies to migration, check and post-migration
scripts, but not to the scripts needed for the preparation of the target system. The target database can
be created with self-made scripts, with scripts generated by a different package run or with
modified package scripts. As part of the preparation the following steps will be executed:
• "Create sub-directories" => V102_cr_dir_101102_134100.sh
Creates all sub-directories needed for control files, data files and redo log files. Please make
sure that the control_files parameter of the database init.ora points to the correct directory
names. The package will use the following default directories for the
control files:
For SAP systems:
• /oracle/V112/sapdata1/cntrl
• /oracle/V112/origlogB/cntrl
• /oracle/V112/origlogA/cntrl

For NON-SAP systems


• “/database/oracle/V112/db_1/cntrl”
• “/database/oracle/V112/db_2/cntrl”
• “/database/oracle/V112/db_2/cntrl ”
• "Create database" => V102_cr_db_101102_134100.sh
Creates the database, with the “create database command”
• "Create database DD" => V102_cr_dd_101102_134100.sh
Creates the database data dictionary
• "Create system user environment" => V102_pupbld_101102_134100.sh
Runs the “pupbld” script for the system user for assigning the default role
• "Create Tablespaces " => V102_cr_tbs_101102_134100.sh
Creates all needed tablespaces
• "Create database migration link" => V102_cr_link_101102_134100.sh
Creates the database link needed for the migration
• "Create users and roles" => V102_cr_usr_101102_134100.sh
Creates the database users, the migration is executed for
• "Create transition privileges" => V102_cr_pmig_101102_134100.sh
Grants sufficient privileges for the migration to the database users
• "Creates database links" => V102_cr_dblink_101102_134100.sh
Creates the database links, if any.

IMPORTANT HINT!!!!!
In former releases it was possible to retrieve, beside the user name used for a database
link, also the corresponding password. For security reasons this is no longer possible. It is
therefore a known limitation that the passwords are missing in the generated database
link definitions. You have to modify this script manually, otherwise it will fail.
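Because of this limitation you typically edit the generated *_cr_dblink_*.sql script and add the passwords by hand. The statement itself is standard Oracle SQL; link name, user, password and TNS alias below are only placeholders:

CREATE DATABASE LINK my_app_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'REMOTE_TNS_ALIAS';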
So you can either use the package prepared scripts for the migration or your own ones. If you are
using self-created scripts, ensure that the target database uses the same character set as the
source database. If you are in doubt, look into the package generated create database script: it
always uses the same character set for the target database as is used on the source database.
It's also possible to reuse an existing database for consecutive migrations instead of creating a
new database each time. If you want to reuse an existing database you must first drop all users which
are migrated. You then start the preparation phase at the step “Create users”.
It's also possible to use a customized set of scripts for creating the target database.
Note: The package will always execute a fresh calculation of the needed tablespace sizes on the
target system. So if you are using older scripts, the tablespace sizes may have become too small.

If you are using the scheduler for executing the scripts, start the preparation phase now with the
command “../bin/scheduler”. Once again you are prompted to confirm that you want to start the
preparation phase now:
ora112@stephan:~/O2O/OUTPUT> ../bin/scheduler

03Nov2010 scheduler PID: 12771 running on host: stephan


08:16:58 info: Start of scheduler - commandline used: ../bin/scheduler
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

08:16:58 info: You are about to start the PREPARE TRANSITION phase.

08:16:58 read: Are you ready to create a new db now (instances stopped / old DB files deleted...)?
{Yes|No|Skip} yes

There are three possibilities at this point:


• No: Stop the migration at this point. The scheduler will stop and nothing is started
• Yes: Start the preparation phase now
• Skip: Useful when you already have an existing database. Skip the preparation phase
and proceed with the migration instead.
For our test answer this question with “Yes”.
The scheduler then requests the final confirmation for starting the migration. Answer this question
once again with “Yes”. (There is also an option to run the scheduler in a “Demonstration mode”.
Please refer to chapter (9) for a detailed explanation of this option.)
08:17:17 read: !! Real processing, no DemoMode !!
Do you really want to start with the current setup? {Yes|No} yes

08:17:28 info: Trying to start background scheduler:


nohup /home/ora112/O2O/bin/bscheduler prep V102_prep_101102_152047.ctl >>
../LOG/bscheduler.log 2>&1 < /dev/null &

08:17:28 info: background scheduler started, PID=00013079

08:17:28 info: Scheduling will be done in background.


Have a look at ../LOG/bscheduler.log for details.
[scheduler exiting]

From now on the scheduler runs in the background. There is no output anymore to the current terminal
window. This allows you to disconnect from the current session without stopping the scheduling or
losing information about the current migration progress. The process responsible for running the
migration scripts in the background is called “background scheduler”. This process will write all
information to the O2O/LOG directory. This directory contains the whole migration history,
therefore be careful before deleting it. Especially if you have encountered problems
during the migration, the contents of this directory are essential for the error analysis. So
it's best practice, even after a successful migration, not to delete this directory but only to rename it.
During the migration the scheduler will:
• copy all logfiles from all scripts to this directory
• if a job is executed several times, create a logfile history from all runs in
this directory (the older logfiles are numbered)
• log detailed data about script execution, the current migration status, the number of errors
and so on to the file “bscheduler.log”. To monitor the migration process run a
“tail -f bscheduler.log” command on this file to see the output.
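Besides following the file with tail -f, a quick way to check whether any script has reported a problem so far is to filter the log for the status lines or the failure message used in the examples in this document (a simple sketch):

cd ~/O2O/LOG
tail -f bscheduler.log | grep StAtUs      # follow only the status summary lines
grep "failed due to" bscheduler.log       # list all failure messages logged so far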

7.9.1 Common scheduler checks


When the background scheduler is started, the process performs a number of checks to ensure the
correct environment for the migration. First of all the current hostname and the database SID
(environment variable ORACLE_SID) are checked. If the hostname and the SID are the same as
the ones used for the script generation, the scheduler assumes that it's running against the source
system. To protect the source database from damage, the scheduler will stop processing. No script is
started and an error message is logged.
Then the NLS_LANG setting for the migration is checked against the database character set. If
there is a mismatch the scheduler will stop processing and an error message is logged.
After this the current migration parameters are read from the file O2O/etc/scheduler_conf.txt. The
current parameter settings as well as some internal adjustments are logged to the bscheduler.log file.
These checks are performed every time the scheduler is started, to ensure that all scripts are
executed in the correct environment. Especially when logging in to the machine with a different user
name or running multiple terminal sessions in parallel, it is easy to become confused.
For the first test migration we take the predefined default values. We will discuss the scheduler
parameters, and how to use them to control the migration, in detail later.
********************************************************************************

03Nov2010 bscheduler PID: 13079 running on host: stephan


08:17:28 info: Start of background scheduler - commandline used: /home/ora112/O2O/bin/bscheduler
prep V102_prep_101102_152047.ctl
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

08:17:28 info: Parameter change: new: RAW_TAB_BOOST=3 old: 1

08:17:28 info: Parameter change: new: LOB_TAB_BOOST=6 old: 1

08:17:28 info: Parameter change: new: VST_SIZE=1000 old: 0

08:17:28 info: Parameter change: new: DMP_COPY_METHOD=scp old: NONE

08:17:28 info: Parameter change: new: REMOTE_SHELL_CMD=ssh old: NONE

08:17:28 info: Parameter change: new: SOURCE_HOST=192.168.0.50 old: NONE

08:17:28 info: Parameter change: new: SOURCE_USER=ora102 old: NONE

08:17:28 info: Parameter change: new: SOURCE_DIR=/home/ora102/O2O/OUTPUT old: NONE

08:17:28 info: Parameter change: new: SLEEPTIME=15 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_EXPTAB=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_EXP=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_COP=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_IMP=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_SMALLTAB=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_TAB=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_RAWTAB=50 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_LOBTAB=50 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_IDP=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_SMALLIDX=3 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_IDX=4 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_VERISRC=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_VERITRG=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_STAT=1 old: 0

08:17:29 info: Parameter change: new: MCP_DIR=/oradump/CCR/MCP_OUTPUT old: NONE

08:17:29 info: Parameter change: new: NUM_JOBS_XCHG=1 old: 0

08:17:29 info: Parameter change: new: NUM_JOBS_MCP=100 old: 0

08:17:29 info: Automatic parameter adjustment: new: NUM_JOBS_CHECK=50 old: 5

7.9.2 Scheduler status: Preparation phase


Once the parameter check is finished, the scheduler will start the pending migration scripts. Each
started script is listed with its script name and the current process ID (PID) on the host. Below that
you will find a summary with the number of jobs running (Run), waiting for execution (Wait) and
waiting to be released once their preconditions are fulfilled (ToDo).
At the end you will see a summary of the number of jobs that ended with an error or warning
message (failed). This counter should always be zero. The “suspended jobs” section lists sessions
on the database having the status “suspended”. The O2O method uses the Oracle feature
“resumable transaction” by default; the timeout is set to 86400 seconds (one day).
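If the suspended counter is not zero, you can look at the suspended sessions directly on the target database using the standard DBA_RESUMABLE view (a small sketch, run as a DBA user):

select session_id, status, name, error_msg
from dba_resumable
where status = 'SUSPENDED';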
08:17:29 info: (1) V102_cr_dir_101102_152047.sh started, PID=00013855

08:17:34 stephan StAtUs: Run/Wait/ToDo: prep=1/8


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

08:17:36 info: (2) V102_cr_dir_101102_152047.sh finished successfully.

08:17:36 info: (2) V102_cr_db_101102_152047.sh started, PID=00014164

08:17:36 stephan StAtUs: Run/Wait/ToDo: prep=1/7


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

The output is refreshed each time a new job is started or finished.


08:19:46 info: (54) V102_cr_db_101102_152047.sh finished successfully.

08:19:46 info: (54) V102_cr_dd_101102_152047.sh started, PID=00018192

08:19:46 stephan StAtUs: Run/Wait/ToDo: prep=1/6


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

If an error occurs, the failed job and the Oracle error message are logged:
08:25:14 info: (209) V102_cr_dd_101102_152047.sh failed due to the following error:
ORA-01432: public synonym to be dropped does not exist
ORA-01432: public synonym to be dropped does not exist
ORA-00942: table or view does not exist

08:25:15 stephan StAtUs: Run/Wait/ToDo: prep=0/6


stephan StAtUs: current number of failed/suspended jobs: 1 / 0

Because in the preparation phase the next script can only be executed if the previous one finished
successfully (refer to chapter 7.6), the scheduler has stopped for now. You see in the output:
• Run/Wait/ToDo: prep=0/6
• current number of failed/suspended jobs: 1 / 0

So there is no job running at the moment and there is one failed job. From the error message we see the
Oracle error (ORA-01432: public synonym to be dropped does not exist) and that the executed job was
V102_cr_dd_101102_152047.sh. You will find a logfile with the name
V102_cr_dd_101102_152047.log in the O2O/OUTPUT directory. So we have a look into the
logfile to check the current status of this step. The question we have to answer is very simple:
Is this error message a critical error, or is it possible to safely ignore it?
The answer to this question determines the needed actions:
1. If this is a critical error, we must rerun the script without any error to get an error-free
migration.
2. If we can safely ignore this error, we proceed with the next step of the migration.
So in the case of an error the person responsible for the migration has to decide how to proceed.
For the error analysis always have a look into the logfile created by the failed job. The logfile name
is the same as for the script, but it uses the extension .log. From the logfile you can estimate the
severity of the error. In our example the error occurs during the creation of the Oracle database data
dictionary. The error message reports that a public synonym doesn't exist when it should be
dropped. So in this case it's possible to safely ignore the error message.
The scheduler has an option to reset the status of either a single job or a job list. We want to set this
particular job to the status completed. So we start the scheduler with the command:
../bin/scheduler -d V102_cr_dd_101102_152047
This forces the scheduler to set this job to the status completed. Once the status is changed the
scheduler will execute the next script in the chain.
../bin/scheduler -d V102_cr_dd_101102_152047

03Nov2010 scheduler PID: 30443 running on host: stephan


13:43:41 info: Start of scheduler - commandline used: ../bin/scheduler -d V102_cr_dd_101102_152047
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

13:43:41 info: Setting status for entry V102_cr_dd_101102_152047.sh in V102_prep_101102_152047.ctl


to Done

13:43:41 info: Bye, bye...


[scheduler exiting]

In the bscheduler.log file you can monitor the progress of the migration:
03Nov2010 bscheduler PID: 13079 running on host: stephan
13:43:43 info: (9194) V102_pupbld_101102_152047.sh started, PID=00030771

13:43:43 stephan StAtUs: Run/Wait/ToDo: prep=1/5


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

13:43:45 info: (9195) V102_pupbld_101102_152047.sh finished successfully.

13:43:45 info: (9195) V102_cr_tbs_101102_152047.sh started, PID=00031110

13:43:45 stephan StAtUs: Run/Wait/ToDo: prep=1/4


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

As long as the background scheduler is running, any change to the job status is picked up
automatically. If the status of a failed job is set manually to completed (by invoking the scheduler with
the -d option), the execution automatically proceeds to the next step.

14:57:11 stephan StAtUs: Run/Wait/ToDo: prep=1/1


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

14:57:13 info: (11224) V102_cr_pmig_101102_152047.sh finished successfully.

14:57:14 info: (11224) V102_cr_dblink_101102_152047.sh started, PID=00017226

14:57:14 stephan StAtUs: Run/Wait/ToDo: prep=1/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

14:57:16 info: (11225) V102_cr_dblink_101102_152047.sh failed due to the following error:


ORA-01017: invalid username/password; logon denied

14:57:16 stephan StAtUs: Run/Wait/ToDo: prep=0/0


stephan StAtUs: current number of failed/suspended jobs: 1 / 0

You see there are errors for the script V102_cr_dblink_101102_152047.sh. It's the previously
described known limitation when creating existing database links (refer to chapter 7.9, database
links). So you have to modify the SQL script manually (the SQL script has the same name as the
shell script, but with the ending “.sql”). Once the SQL syntax is corrected you can restart the failed
script with the command
../bin/scheduler -r V102_cr_dblink_101102_152047
../bin/scheduler -r V102_cr_dblink_101102_152047
03Nov2010 scheduler PID: 4613 running on host: stephan
15:06:55 info: Start of scheduler - commandline used: ../bin/scheduler -r
V102_cr_dblink_101102_152047
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

15:06:56 info: Resetting entry V102_cr_dblink_101102_152047.sh in V102_prep_101102_152047.ctl to


ToDo

15:06:56 info: Bye, bye...


[scheduler exiting]

The scheduler now restarts the failed script. If the execution is successful you will see:
15:09:00 info: (11556) V102_cr_dblink_101102_152047.sh started, PID=00009786

15:09:00 stephan StAtUs: Run/Wait/ToDo: prep=1/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

15:09:02 info: (11557) V102_cr_dblink_101102_152047.sh finished successfully.

15:09:02 stephan StAtUs: Run/Wait/ToDo: prep=0/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

15:09:02 info: Transition preparation phase completed!


Perform transition phase can be started at any time.
[bscheduler exiting]

The scheduler has now recognized that the prepare phase is completed. The background scheduler
is stopped and there is no further script execution.
You now have the choice either to proceed with the migration or to start the migration later with a
different script set.

7.10 Starting the perform migration phase


You start the real migration (defined as transfer of data from source to target) with the command
../bin/scheduler

The scheduler will automatically recognize the current status of the migration. If you start the
perform migration step for the very first time, you will be prompted by the scheduler:
./bin/scheduler

03Nov2010 scheduler PID: 10318 running on host: stephan


15:18:23 info: Start of scheduler - commandline used: ../bin/scheduler
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

15:18:24 info: You are about to start the PERFORM TRANSITION phase.

15:18:24 read: Do you really want to start the actual transition now? {Yes|No} yes

15:23:14 read: Prerequisites: Application down? Parallel degree on? {Yes|No} yes

15:23:16 read: !! Real processing, no DemoMode !!


Do you really want to start with the current setup? {Yes|No} yes

15:23:18 info: Trying to start background scheduler:


nohup /home/ora112/O2O/bin/bscheduler trans V102_exp_101102_152047.ctl
V102_imp_101102_152047.ctl V102_tab_101102_152047.ctl V102_idx_101102_152047.ctl
V102_verisrc_101102_152047.ctl V102_veritrg_101102_152047.ctl >> ../LOG/bscheduler.log 2>&1 <
/dev/null &

15:23:18 info: background scheduler started, PID=00010919

15:23:18 info: Scheduling will be done in background.


Have a look at ../LOG/bscheduler.log for details.
[scheduler exiting]

You have to answer the different questions with “Yes”. This questionnaire might look a little bit
annoying, but from our experience it's better to ask these questions at the beginning of the real
migration. Once again, with the above command the background scheduler is started, doing its
work silently in the background.

7.11 Monitoring the migration

7.11.1 Database monitoring


The best way to monitor the progress of a migration is to observe the growth of the target database.
If you select the size of the database from the view DBA_SEGMENTS it's very easy to get a good
feeling for the current progress as well as to make a prediction of the estimated time to completion.
More importantly, you will recognize any delay in time. Only two scripts are needed to run the
database growth monitoring: one shell script which calls, at regular intervals, a sqlplus script
selecting the allocated bytes from DBA_SEGMENTS. Here is a simple example:
The shell script executes the SQL script “get_size.sql” every five minutes as long as it's not
interrupted.
while true
do
sqlplus "/ as sysdba" @get_size.sql
sleep 300
done

The SQL script “get_size.sql” is also pretty simple. The output of the statement is the size (in MB)
allocated by database objects (tables and indexes)
spool get_size.log append
select sum(bytes) / 1024 / 1024 from dba_segments;
set time on
exit

The script will write to the log file get_size.log, appending the new output. Beside the current size a
timestamp is written to the log file, which allows you to extrapolate the current
database growth per hour (a small sketch for such an extrapolation follows below).
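If you prefer a direct figure without parsing the log file, a sketch like the following takes two samples from DBA_SEGMENTS and prints the extrapolated growth in MB per hour (the 10-minute sampling interval is arbitrary):

get_mb() {
  sqlplus -s "/ as sysdba" <<EOF | tr -d ' '
set heading off feedback off pagesize 0
select trunc(sum(bytes) / 1024 / 1024) from dba_segments;
exit
EOF
}
S1=$(get_mb)          # first sample
sleep 600             # wait 10 minutes
S2=$(get_mb)          # second sample
echo "growth per hour: $(( (S2 - S1) * 6 )) MB"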
Beside this simple script you should always run a tail -f command on the alert.log of the
database, especially to spot any space related errors.
It is also helpful to analyze the current wait events of the Oracle sessions at regular intervals to get a
feeling for possible performance bottlenecks. The scripts are:
while true
do
sqlplus "/ as sysdba" @get_session_wait.sql
sleep 60
done

The SQL-script "get_session_wait.sql" is:


spool get_session_wait.log append
select sid,event
from v$session_wait where event not like 'rdbms%' and event not like 'SQL*Net%'
order by event,sid;
set time on
exit

You will need some Oracle background knowledge to interpret the output and to draw the correct
conclusions from it. But it could also be very helpful, if you have to perform a problem analysis
after the migration.

7.11.2 Scheduler monitoring


The best way to monitor the script scheduling permanently is to run the tail -f command on the
bscheduler.log file in the O2O/LOG directory. Looking at this file you will be aware of the
current status (number of failed/suspended jobs). To run a migration smoothly and fast, the number
of failed jobs must stay near zero. If there is a major problem either with the system configuration or
with the used scripts, you will see this number increasing permanently. As soon as the background
scheduler is running, the output looks similar to this:
12:53:58 info: Automatic parameter adjustment: new: NUM_JOBS_CHECK=50 old: 5

12:53:58 info: (1) O2O7OGG_exp_101104_121841.sh (size 236) started, PID=00025035

12:53:59 info: (1) SAPRAC_iot_ZISOBJ_101104_121841.sh (size 0) started, PID=00025322

12:53:59 stephan StAtUs: Run/Wait/ToDo: exp=1/26 cop=0/0/27 imp=0/0/26 tab=1/12 idx=0/0/16


verisrc=0/0/13 veritrg=0/0/13
stephan StAtUs: current number of failed/suspended jobs: 0 / 0

In the output of the file you will see the last started/finished jobs. The number in brackets behind
the job name is the size of the table or the amount of data handled by this job. So the job
O2O7OGG_exp_101104_121841.sh (size 236) transfers 236 MB.
The status bar now contains much more information than during the preparation phase. In
detail, the status line lists, for the different main job types, the number of running and waiting
jobs. There are two different types of jobs. The first type are export and table jobs, which have no
dependencies on other scripts; these jobs can be executed at any time.
• exp=1/26 => There are 26 export jobs to run. Currently 1 export job is being executed
• tab=1/12 => There are 12 table jobs to run. Currently 1 table job is being executed

The second type are jobs having dependencies on other scripts, which must be executed before them.
These jobs have to wait until their predecessor has finished. These jobs have three counters: the number
of jobs running, the number of jobs queued for running (the predecessor job has finished) and the
number of jobs waiting on the completion of the predecessor job.
• Imp=0/0/26 => No import is running, none is queued , 26 jobs are waiting on the
predecessor job (export jobs) to complete
• idx=0/0/16 => No index creation is running, none is queued (i.e. released because its table
was created successfully), 16 indexes are waiting on the predecessor job (create table) to complete
• verisrc=0/0/13 => No row count verification is running on the source, none is
queued, 13 count jobs for the source system are waiting on the predecessor job (create table)
to complete
• veritrg=0/0/13 => No row count verification is running on the target, none is queued,
13 count jobs for the target system are waiting on the predecessor job (create table) to
complete
At the end you find a summary of the number of failed or suspended jobs (refer to chapter 7.9.2 for
an explanation).
After some runtime the output looks like this:
13:27:02 info: (124) SAPRAC_exp0009_101104_121841.sh (size 238) finished successfully.

13:27:02 info: (124) SAPRAC_exp0010_101104_121841.sh (size 138) started, PID=00001366

13:27:02 info: (124) SAPRAC_cop0009_101104_121841 (size 238) started, PID=00001506

13:27:03 stephan StAtUs: Run/Wait/ToDo: exp=1/7 cop=1/0/8 imp=1/3/9 tab=1/6 idx=1/0/13


verisrc=1/0/7 veritrg=1/0/7
stephan StAtUs: current number of failed/suspended jobs: 2 / 0

Once again you can see when jobs are started or completed. In the status line you can now see a
different display for the current migration status:
• There is still one export running and 7 are waiting to get started (exp=1/7).
• From the import jobs 1 is running, 3 are queued and 9 are waiting for the export to complete
(imp=1/3/9).
• There is 1 table job running and 6 are waiting to get started (tab=1/6).
• There is one index creation running, none is queued and 13 are waiting for the table creation
to complete (idx=1/0/13).
• There is 1 verification job running on the source, none is queued, and 7 are waiting for the
table creation to complete (verisrc=1/0/7).
• There is 1 verification job running on the target, none is queued, and 7 are waiting for the
table creation to complete (veritrg=1/0/7).
At the end 2 jobs are listed as failed. No job is currently suspended. So we have to find out which
jobs are in the status “Failed”. The scheduler has an option to get a list of the failed jobs. Run the
command ../bin/scheduler -f to get the list of failed jobs:

ora112@stephan:~/O2O/OUTPUT> ../bin/scheduler -f

04Nov2010 scheduler PID: 21457 running on host: stephan


14:14:46 info: Start of scheduler - commandline used: ../bin/scheduler -f
The following scheduler kit is in place: V2.8_scheduler_26Oct2010.run

List of failed
entries:_________________________________Status______PID_Size___FailureTime__Done/Total
OPS2SAPRACADM_imp_101104_121841 FAIL 00000000 1 04Nov 12:55
SAPRAC_imp0000_101104_121841 FAIL 00000000 1 04Nov 13:08
SAPRAC_imp0012_101104_121841 FAIL 00000000 168 04Nov 13:48

14:14:46 info: Bye, bye...


[scheduler exiting]

For analyzing the error we will have a look into the logfile of the particular job. So we can run the
”more” command on the logfiles:
ora112@stephan:~/O2O/OUTPUT> more OPS2SAPRACADM_imp_101104_121841.log

Import: Release 11.2.0.1.0 - Production on Thu Nov 4 12:55:04 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V10.02.01 via conventional path

Warning: the objects were exported by SYSTEM, not by you

import done in WE8DEC character set and AL16UTF16 NCHAR character set
IMP-00017: following statement failed with ORACLE error 24087:
"BEGIN "
"DECLARE"
"AGENT_EXISTS exception; "
"PRAGMA EXCEPTION_INIT(AGENT_EXISTS, -24089);"
"BEGIN "
"dbms_aqadm.create_aq_agent(agent_name => 'OPS$SAPRACADM', enable_http => F"
"ALSE, enable_smtp => FALSE, enable_anyp => FALSE); "
"dbms_aqadm.enable_db_access(agent_name => 'OPS$SAPRACADM', db_username => '"
"OPS$SAPRACADM'); "
"EXCEPTION WHEN AGENT_EXISTS THEN NULL; END; "
""
"COMMIT; END;"
IMP-00003: ORACLE error 24087 encountered
ORA-24087: Invalid database user OPS$SAPRACADM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 10564
ORA-06512: at "SYS.DBMS_AQADM", line 1342
ORA-06512: at line 7
. . importing table "SAPUSER" 0 rows imported
. . importing table "TEST_SEL" 0 rows imported
Import terminated successfully with warnings.

From the error message we can decide that this is a non-critical error which can safely be ignored.
So we set this job manually to the status finished with the command
../bin/scheduler -d OPS2SAPRACADM_imp_101104_121841
The scheduler will update the current status of the migration.
If you have to run a job once again you can use the command ../bin/scheduler -r. So if you use the
command
../bin/scheduler -r OPS2SAPRACADM_imp_101104_121841
the scheduler will execute any defined undo operation and will then run the failed script once again.

7.12 Changing the scheduler run-time parameters


We have started our migration with a set of predefined scheduler parameters. When running an O2O
migration it will be necessary to adjust the current migration settings dynamically during the
runtime of the migration. The configuration of the scheduler parameters is done in the file
“O2O/etc/scheduler_conf.txt”. This file is read by the scheduler regularly (by default every 15
seconds). In this file you can define, for the different script types, the number of jobs which should
be executed in parallel.
If a new parameter setting is recognized, the scheduler will log this to the bscheduler.log file.
Depending on the change the scheduler will perform the following action:
• If the parameter value is increased, the scheduler will start as many jobs as necessary to
fulfill the new parameter setting
• If the parameter value is decreased, the scheduler will not start any new job of this type
until there are fewer running jobs than defined
To change a parameter it's sufficient to change the value in the configuration file while the
scheduler is up and running. The scheduler will pick up the change and, if a parameter is increased,
the corresponding number of additional jobs is started, as shown in the next example. If a parameter
is decreased, you will see the change in the bscheduler.log file, but no running job is stopped.
14:22:53 stephan StAtUs: Run/Wait/ToDo: imp=0/0/0 tab=1/7 idx=0/0/14 verisrc=0/0/8 veritrg=0/0/8
stephan StAtUs: current number of failed/suspended jobs: 2 / 0

04Nov2010 bscheduler PID: 23781 running on host: stephan


15:45:44 info: Parameter change: new: NUM_JOBS_TAB=2 old: 1

15:45:44 info: (664) V102_tbl_VST0006_101104_121841.sh (size 1412) started, PID=00009151

15:45:44 stephan StAtUs: Run/Wait/ToDo: imp=0/0/0 tab=2/6 idx=0/0/14 verisrc=0/0/8 veritrg=0/0/8


stephan StAtUs: current number of failed/suspended jobs: 2 / 0
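For example, to raise the number of parallel table jobs from 1 to 2 (as in the log output above), it is enough to edit the value in the configuration file while the migration is running. The KEY=VALUE layout shown in the comments is an assumption based on the parameter names logged by the scheduler:

cd ~/O2O/etc
cp scheduler_conf.txt scheduler_conf.txt.bak   # keep a safe copy first
vi scheduler_conf.txt                          # change NUM_JOBS_TAB=1 to NUM_JOBS_TAB=2
grep NUM_JOBS_TAB scheduler_conf.txt           # verify the new value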

7.12.1 Scheduler sleep time


For now the following scheduler parameters are of interest. There are more, which we will explain
later when discussing advanced migration scenarios.
• SLEEPTIME=15 time in seconds after which the scheduler wakes up
This parameter is the sleep time of the scheduler. After each check the scheduler waits this amount
of time before it checks the running jobs and possible parameter changes once again. Normally 15
seconds is a good choice. If you run a migration with a large number of scripts, it might be useful to
use a smaller value (e.g. 5 seconds) for this parameter. The smallest possible value is 0.

7.12.2 Parameters which are related to export/ import


• NUM_JOBS_EXPTAB=1 number of export tables (..._exp_tab_D010_... +
OPS..._exp_)
• NUM_JOBS_EXP=1 number of parallel exports in total (_exp0nnn_ packages +
_exp_tab_D010_ tables)

• NUM_JOBS_COP=1 number of parallel copy jobs from source to target for


copying export/ datapump dumpfiles
• NUM_JOBS_IMP=1 number of parallel imports in total
These parameters influence the number of jobs used to run export/import related scripts. The
calculation is done with the following approach:
The number of parallel package exports = NUM_JOBS_EXP – NUM_JOBS_EXPTAB
There is a special handling for tables that are exported separately because they can become very
large. The reason for this special handling is the limitation of the "Long" variable type in Oracle
PL/SQL (please refer to chapter 9.11).
So at the beginning of a migration the number of running exports can be smaller than the value
defined for the parameter NUM_JOBS_EXP. As soon as the table exports are completed, the number
of normal exports will increase. In contrast to the export, the import doesn't differentiate between
table imports and normal imports. Both types are handled in the same way.
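For example, assuming NUM_JOBS_EXP=2 and NUM_JOBS_EXPTAB=1, the scheduler runs at most one
table export (an _exp_tab_... script) and one package export (an _exp0nnn_ script) at the same time;
once all table exports are finished, both slots are available for package exports.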
Note: We have already discussed (chapter 7.6) that it's not very helpful to run too many export jobs
in parallel at the beginning of a migration. Please remember that only a very small amount of the
data is processed by export/import. So you should run no more than 2 export jobs in parallel. For
the imports, even half of the number of export jobs will be sufficient. Don't try to finish the exports
as fast as possible.

7.12.3 Parameters related to tables


• NUM_JOBS_SMALLTAB=1 number of "small" table jobs (excluding tblcopy)
• NUM_JOBS_TAB=1 number of all table methods in total (includes ctas,
ctaslob, ctas_part, tblcopy, pl/sql, small tables)
• NUM_JOBS_RAWTAB=50 number of parallel tblcopy (pl/sql) jobs
• NUM_JOBS_LOBTAB=50 number of ctaslob jobs
• NUM_JOBS_IDP=1 number of import datapump jobs
Before changing the number of the different table jobs, you have to understand the architecture of
the scheduling process for table and index jobs.
The most important aspect of a migration is to run it as fast as possible. To achieve this goal, we
have to find a good mixture of the differently sized migration jobs. Normally we will find a few very
large tables, a larger number of mid-sized tables and a huge number of very small tables. On the
one hand, we need to run the largest tables first to avoid running out of migration time. But if we
waited for the largest objects to complete before starting the small tables, we would have completed
only a few tables (jobs) after a few hours and would have to run all the small scripts in the remaining time.
Please keep in mind that the table creation is only the first step of an object migration chain. After
the table is copied we have to create the indexes, count the rows on source and target and import
the object statistics. So we have to use our migration time in an optimal way. The scheduler is


therefore working with a "top-down" AND a "bottom-up" approach in parallel on tables and indexes.
The numbers of the different table jobs are calculated as follows:
• The parameter “NUM_JOBS_TAB” defines the maximum number of all table related jobs
• The parameter “NUM_JOBS_SMALLTAB” defines the number of jobs handling small
tables. For small tables the scheduler will start at the end of our table list, which contains all
names of tables to migrate ordered by their size. So the scheduler will start here with the
smallest table, going up to larger tables step by step.
• The number of jobs for large tables is the result of the calculation:

number of large table jobs = NUM_JOBS_TAB – NUM_JOBS_SMALLTAB


• The parameters NUM_JOBS_RAWTAB (for PL/SQL scripts) and NUM_JOBS_LOBTAB
(for LOB tables) exist only for historical reasons. In the past they limited the maximum number
of PL/SQL and/or LOB table jobs running in parallel. We recommend keeping the values shown
above for both parameters.
Note: Please remember that the scheduler prompts you for a "Boost" factor for LOB
and LONG objects (RAW_TAB_BOOST and LOB_TAB_BOOST). These "Boost" factors
move the affected tables towards the large-table end of the size-ordered list. We have
introduced these "Boost" factors because LOB and PL/SQL scripts can run significantly
longer than normal CTAS jobs. So the "Boost" factor allows you to influence the virtual
table size, forcing the scheduler to execute PL/SQL and LOB scripts earlier in the
migration.
• The parameter NUM_JOBS_IDP controls the number of datapump jobs, if there are any.
Datapump can be used in special migration scenarios instead of CTAS or PL/SQL scripts.
Note: The needed package parameters are explained in chapter 9.13
This approach allows us to run a lot of small table scripts while the large tables are in
transfer. It reduces the overall migration time significantly, because most of the jobs are
already finished when the largest tables are completed. Please keep in mind that once a large table is
completed, the consecutive jobs (index creation) will allocate a significant amount of resources
on the target system.
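As a worked example, with NUM_JOBS_TAB=15 and NUM_JOBS_SMALLTAB=10 at most 15 table jobs
run at the same time: 10 of them work from the small end of the size-ordered table list upwards,
while the remaining 5 (15 - 10) work from the large end downwards.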

7.12.3.1 Hints for setting table related scheduler parameters

We can give only a few rules about the parameter settings. Discussions about the setting of these
parameters will be an important part of the training. So here are some rules of thumb:
• Always start the migration with a setting of "1" for NUM_JOBS_TAB and
NUM_JOBS_SMALLTAB. This avoids a high number of failed jobs at the beginning of
the migration if something isn't set up correctly.
• Increase the number of jobs carefully. Increase NUM_JOBS_TAB only in steps of 5.
Increase NUM_JOBS_SMALLTAB in parallel.
• Use approx. 1/3 of the available jobs for large tables, the other ones for small tables.


• If you have increased the number of jobs, wait for at least 10 minutes before increasing the job
number again. Be patient. The system takes some time to reach a stable state.
• When monitoring the load, take into account that a single CPU can handle up to 5 CTAS
jobs but only 2 PL/SQL jobs.
• Monitor the network throughput and the growth of the target database. If the network
throughput doesn't increase any more when starting new jobs, you have either reached the
maximum network throughput or the I/O capacities of the source system are exhausted. In
this case it makes no sense to start more and more scripts; it's more likely that the
overall throughput of the migration will decrease.

7.12.4 Parameters related to index creation


• SMALLIDX=3 number of small indexes
• NUM_JOBS_IDX=4 number of parallel index creations in total
For the jobs controlling the index creation the scheduler uses the same approach for starting jobs
as for the tables. The parameter NUM_JOBS_IDX controls the maximum number of indexes
which can be created in parallel. The parameter SMALLIDX is the number of small index jobs
running in parallel. So the number of index jobs for the larger tables is calculated by:
Number of large index jobs = NUM_JOBS_IDX – SMALLIDX
In contrast to the table jobs, where we have a fixed list of objects to process, the index jobs
depend on the successful completion of their table jobs. So when checking the number of
available jobs, the scheduler first estimates the number of executable index scripts. From these
available scripts the largest and the smallest index are determined.

7.12.4.1 Hints for setting index related scheduler parameters

Here are the rules of thumb for the index related parameters:
• Start at the beginning of the migration with a setting of “1” for both parameters
• Increase the settings of both parameters in steps of 2
• Use a ratio of 4 / 1 (small indexes vs. large indexes) for the settings
• At the beginning of a migration only a few indexes will be created (the indexes of the
small tables finish rapidly). This picture will change once larger tables are finished and the
index creation takes much more time. So you have to watch the index related scheduler
information in the status bar (idx=1/0/13). If the index queue increases (2nd value), you
should consider increasing the number of jobs that create indexes.
• Keep an eye on the system resources (especially memory and CPU consumption). Don't
overload the system with too many index jobs. Swapping and paging activity must be
avoided.
• The scheduler will not execute an index creation script if the database is short on parallel
query (PQ) processes. Especially when indexes on large tables are created, the database can


run out of these processes. Before starting an index creation script the scheduler will check
the number of available PQ processes. If there are not enough available, the
scheduler will delay the execution of this script and will start an index creation script
that needs fewer PQ processes instead. If there is no index script that fits within the available PQ
servers, the scheduler will not start any of these scripts. So if you recognize that there
are indexes waiting in the queue but none is started, it may be that your system is
exhausted with respect to PQ processes. In this case increase the number of available PQ
processes by setting the database parameter PARALLEL_MAX_SERVERS to a higher
value.
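A quick way to check the PQ server situation on the target database is shown below; the view and
the ALTER SYSTEM command are standard Oracle functionality, the value 128 is only an example:

SQL> select statistic, value from v$px_process_sysstat where statistic like 'Servers%';
SQL> alter system set parallel_max_servers = 128;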

7.12.5 Parameters related to table checks


• NUM_JOBS_VERISRC=1 number of parallel jobs for "select count(*)..." on source
• NUM_JOBS_VERITRG=1 number of parallel jobs for "select count(*)..." on target
• NUM_JOBS_STAT=1 number of parallel jobs for importing table statistics
The above listed parameters control the jobs verifying the migration by counting the rows in the
source and the target tables.
You can easily change the above listed parameters during the migration without stopping/restarting
the scheduler. In the example at the beginning of this chapter the number of table scripts was
increased from 1 to 2; the scheduler printed the new value and immediately started a new table job.
Normally these parameters are not very critical. The verify jobs as well as the statistics import
normally finish very fast. The verify jobs are needed at the very end of the migration, when all
scripts are finished, to verify the correctness of the migration. The statistics are needed at the very
end of all operations. So there is no need to run a high number of these script types during the
migration. There are also no dependencies between these parameters.

7.12.5.1 Hints for setting verify and statistics related scheduler parameters

Please keep in mind that the verify jobs perform only read operations on source and target (on
the table indexes). These read operations apply additional read load on the systems, which
should be avoided at the beginning of a migration. So our rules of thumb for these parameters are:
• Start at the beginning of the migration with a setting of “1” for these parameters
• Leave this job number unchanged until you notice a significant queue for these job types.
It's more likely that this will happen on the source database than on the target database
• Increase the parameters in steps of “1” for each parameter independently.

7.13 Check phase of the migration


Before the scheduler can proceed to the post-migration steps, all scripts of the perform-migration
phase must be completed, either because they were executed successfully or because they were set
to completed manually with the scheduler -d option.
The first step is now to check the completeness of the migration with respect to tables and indexes.
We expect that ALL tables and ALL indexes are now present in the target database. So the


scheduler now runs a separate script for each user, which compares the table and index names present
in this particular schema on the source and the target database.
14:28:04 stephan StAtUs: Run/Wait/ToDo: imp=0/0/0
stephan StAtUs: current number of failed/suspended jobs: 0 / 0

14:28:04 info: Perform transition phase completed.

14:28:04 info: Going ahead to phase: check

14:28:05 info: (1) O2O7OGG_check_tab_ind_101109_092120.sh started, PID=00010860

...

14:28:05 info: (1) TST_check_tab_ind_101109_092120.sh started, PID=00011133

14:28:05 stephan StAtUs: Run/Wait/ToDo: check=7/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

14:28:14 info: (4) O2O7OGG_check_tab_ind_101109_092120.sh finished successfully.

..

14:28:54 info: (23) SAPRAC_check_tab_ind_101109_092120.sh finished successfully.

14:28:54 stephan StAtUs: Run/Wait/ToDo: check=0/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

14:28:54 info: Transition check phase completed.


Please review the result logfile(s):
O2O7OGG_check_tab_ind_101109_092120.log
OPS2SAPRACADM_check_tab_ind_101109_092120.log
OPS7V902ADM_check_tab_ind_101109_092120.log
PUBLIC_check_tab_ind_101109_092120.log
SAPR3_check_tab_ind_101109_092120.log
SAPRAC_check_tab_ind_101109_092120.log
TST_check_tab_ind_101109_092120.log

14:28:54 info: Post transition phase can be started if check results are 100% OK.

14:28:54 info: Going ahead to phase: post


To release all post jobs, please enter:
../bin/scheduler
and it will recognize that post phase has to be done right now!

14:28:55 stephan StAtUs: Run/Wait/ToDo: post=0/0/15


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

Now you have to check the files listed in the scheduler output for possible errors or
deviations. The scheduler will stop any activity and end the background processing. You have to
restart the scheduler once you have completed the checks.
Note: With Oracle 11g a new feature named "deferred segment creation" was introduced. This
means that a table or index will only allocate its first extent when at least one row is stored in the
table. If the table is completely empty, only the Oracle data dictionary information is in place; you
will not find the object in DBA_SEGMENTS. For the migration this implies the following
behavior when migrating between Oracle 9i/10g and Oracle 11g:
• When migrating from 9i or 10g to Oracle 11g, all tables and indexes are picked up from
"DBA_SEGMENTS". But on the target (11g) you will only find a minor number of them in
"DBA_SEGMENTS". The majority of the tables are empty, will not allocate an extent and
therefore can't be found in "DBA_SEGMENTS". The package will therefore run the check in
this combination against "DBA_TABLES" instead of "DBA_SEGMENTS".


• When migrating between two Oracle 11g databases, you will notice a significantly lower number
of export/import scripts. The export will only include those tables that have allocated at least one
extent in "DBA_SEGMENTS". In this case the check runs once again against
"DBA_SEGMENTS".
Important hint!!!!
This is the last chance to fix any migration error. At this point we haven't reached the point of
no return yet. There would still be time to re-run failed jobs if needed. If you proceed and there
are missing objects, you will be in big trouble....

7.14 Post migration steps


Once you have decided that all objects were created successfully, you can restart the scheduler as
mentioned in the output above.
09Nov2010 bscheduler PID: 6639 running on host: stephan
16:45:47 info: (3843) SAPRAC_addfkc_101109_092120.sh started, PID=00004746

16:45:47 stephan StAtUs: Run/Wait/ToDo: post=1/14/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

16:45:51 info: (3845) SAPRAC_addfkc_101109_092120.sh finished successfully.

16:45:51 info: (3845) V102_statschema_101109_092120.sh started, PID=00005137

16:45:51 info: (3845) SAPRAC_impstruct1_101109_092120.sh started, PID=00005156

16:45:51 stephan StAtUs: Run/Wait/ToDo: post=2/12/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

16:45:59 stephan StAtUs: Run/Wait/ToDo: post=1/12/0


stephan StAtUs: current number of failed/suspended jobs: 1 / 0

16:50:52 info: (3983) SAPRAC_impstruct1_101109_092120.sh failed.


Could not find "terminated successfully without warnings."
take a look into ../LOG/SAPRAC_impstruct1_101109_092120.out
or ../LOG/SAPRAC_impstruct1_101109_092120.err

Now we will start to create all missing objects on the target database by performing a structural
import for all migrated users. This is the point of no return. Any missing object will now be
created. If a table is missing, it will be created empty. If an index of a large table is missing, the
structural import can take hours. Once this step is completed, there is no way anymore to verify the
correctness of the migration on object level.
Sometimes it can happen that there are errors during the structural import, especially during
the creation of view definitions:
IMP-00041: Warning: object created with compilation warnings
"CREATE FORCE VIEW "SAPRAC"."/SAPTRX/V_DD03" ("TRK_"
"OBJ_TYPE","TABLEDEF","TABNAME","FIELDNAME","AS4LOCAL","DDLANGUAGE","DDTEXT""
") AS "
"SELECT T0004."TRK_OBJ_TYPE", T0004."TABLEDEF", T0001."TABNAME", T0001."FIEL"
"DNAME", T0001."AS4LOCAL", T0003."DDLANGUAGE", T0003."DDTEXT" FROM "DD03L" "
"T0001, "DD04L" T0002, "DD04T" T0003, "/SAPTRX/AOALLTAB" T0004 WHERE T0004.""
"DDICDEF" = T0001."TABNAME" AND T0002."ROLLNAME" = T0001."ROLLNAME" AND T000"
"2."ROLLNAME" = T0003."ROLLNAME""

It's normally safe to ignore these warnings. Especially in SAP BW systems you can get a lot of these
errors, because very often the underlying tables of a view have changed their definition. So if you hit
such an error you can set the job manually to done with the -d option of the scheduler. Please
remember that the scheduler has to execute all scripts in the post phase serially. So if one script


fails, the scheduler will not execute any of the next scripts in the chain. Once the structural import
is marked as successful, the scheduler will start the next scripts.
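For example, to mark the failed structural import shown above as done, a call along the following
lines can be used (the abbreviation only has to identify the failed entry uniquely, see the option list
in chapter 8):

../bin/scheduler -d impstruct1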
At this point the final checks will be performed. Now source and target database objects are
compared on a per-name level in the migrated database users.
Part of this phase is also to grant the final database privileges to the users. Furthermore all user
passwords are reset to their source database values.

17:09:38 info: (4504) V102_statschema_101109_092120.sh started, PID=00008528

17:09:38 stephan StAtUs: Run/Wait/ToDo: post=1/12/0


stephan StAtUs: current number of failed/suspended jobs: 1 / 0

17:15:46 info: (4671) V102_statschema_101109_092120.sh finished successfully.

...
17:27:48 info: (5008) O2O7OGG_check_all_101109_092120.sh started, PID=00028631

17:27:48 info: (5008) OPS2SAPRACADM_check_all_101109_092120.sh started, PID=00028651

.....

17:28:00 info: (5012) SAPR3_check_all_101109_092120.sh finished successfully.

17:28:00 info: (5012) TST_check_all_101109_092120.sh finished successfully.

..

17:28:17 info: (5020) SAPRAC_check_all_101109_092120.sh finished successfully.

17:28:17 info: (5020) V102_rm_pmig_101109_092120.sh started, PID=00031920

17:28:17 stephan StAtUs: Run/Wait/ToDo: post=1/3/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

17:28:19 info: (5021) V102_rm_pmig_101109_092120.sh finished successfully.

..

17:33:25 info: (5160) V102_cr_pw_101109_092120.sh started, PID=00018971

09Nov2010 bscheduler PID: 6639 running on host: stephan


18:01:46 stephan StAtUs: Run/Wait/ToDo: post=0/0/0
stephan StAtUs: current number of failed/suspended jobs: 0 / 0

18:01:46 info: Transition post phase completed.

18:01:46 info: Going ahead to phase: final

7.15 Final migration steps


At the end we remove temporary migration objects (e.g. the database link to the old database), create
the missing log file members and set the database to archive log mode.

18:01:47 info: (1) V102_drop_link_101109_092120.sh started, PID=00013328

18:01:47 stephan StAtUs: Run/Wait/ToDo: final=1/2


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

18:01:49 info: (2) V102_drop_link_101109_092120.sh finished successfully.

18:01:49 info: (2) V102_addlog_101109_092120.sh started, PID=00013589

18:01:49 stephan StAtUs: Run/Wait/ToDo: final=1/1


stephan StAtUs: current number of failed/suspended jobs: 0 / 0


18:02:12 info: (11) V102_addlog_101109_092120.sh finished successfully.

18:02:12 info: (11) V102_sw_arch_mode_101109_092120.sh started, PID=00014439

18:02:12 stephan StAtUs: Run/Wait/ToDo: final=1/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

18:02:36 info: (22) V102_sw_arch_mode_101109_092120.sh finished successfully.

18:02:37 stephan StAtUs: Run/Wait/ToDo: final=0/0


stephan StAtUs: current number of failed/suspended jobs: 0 / 0

18:02:37 info: Congratulations. Transition/Reorganisation completely done!


[bscheduler exiting]

Once the last step is completed, the scheduler will stop processing. Now you are ready to start any
post-migration activity which might be needed to complete the migration. For SAP these would
normally be the typical post-migration tasks needed for SAP systems.
At this point you have finished the migration. The database is up and running on the target system
and you can now start your application on the target machine to complete the migration.

7.16 Creating migration evidence


Once the migration is completed the customer usually wants to have some evidence about the performed
migration. Especially if this is a validated system (typically in the pharmaceutical industry), the
customer is obliged to prove the correctness of the migration. The O2O method supports you with
some helpful scripts to create a summary about the current migration. To get an overview about the
migration findings, run the script "../bin/verify" in the O2O/OUTPUT directory. This script will collect
all migration relevant information from the different check scripts executed. The collected information
is then used to create an overview in different files. For each check two files are created. One file
shows the results of the comparison and contains only data where you don't have differences (e.g.
verify_check_all_complete.log). This file shows for each table the row counts on source and target
in one line, so it's very easy to demonstrate the correct count for each table. The counterpart of this
file is "verify_check_all_please_check.log", showing all differences found during the
comparison. Once again, if there is a mismatch between source and target system, you will find the
row counts for each table in one line.
Note: The files with the name part "please_check" will always be created. These files should be
empty. Please save these files, even though they are empty; they are the proof that the migration was
finished without serious errors.
Important hint!!!!
When migrating SAP systems you have to ensure that no brconnect or brspace process becomes
active on the target or source system during the migration! These processes will
connect to the database and change the row counts in some SAP basis tables.
You can use these files to prove the correctness of the migration. The following files will be created:
• verify_check_all_complete.log
• verify_check_all_please_check.log
• verify_check_tab_ind_complete.log


• verify_check_tab_ind_please_check.log
• verify_expimp_row_counts_complete.log
• verify_expimp_row_counts_please_check.log
• verify_expimp_row_counts_please_check_counts.log
• verify_row_counts_complete.log
• verify_row_counts_please_check.log
• verify_row_counts_please_check_counts.log
When you have performed a database upgrade during the migration, it's very likely that you will
find errors for the database built-in schema "PUBLIC", for example:
PUBLIC_check_all_101109_092120.log Missing objects on the source: PUBLIC.ALL_APPLY_CHANGE_HANDLERS, SYNONYM
PUBLIC_check_all_101109_092120.log Missing objects on the source: PUBLIC.ALL_ASSEMBLIES, SYNONYM
PUBLIC_check_all_101109_092120.log Missing objects on the source: PUBLIC.ALL_CHANGE_PROPAGATIONS, SYNONYM
PUBLIC_check_all_101109_092120.log Missing objects on the source: PUBLIC.ALL_CHANGE_PROPAGATION_SETS, SYNONYM
PUBLIC_check_all_101109_092120.log Missing objects on the source: PUBLIC.ALL_CHANGE_SETS, SYNONYM
PUBLIC_check_all_101109_092120.log Missing objects on the source: PUBLIC.ALL_CHANGE_SOURCES, SYNONYM
PUBLIC_check_all_101109_092120.log Missing objects on the source: PUBLIC.ALL_CHANGE_TABLES, SYNONYM

These errors are quite normal because of the database upgrade.

7.17 Preparing customer documentation


If the customer asks for documentation about the performed migration, which can also be
used in front of auditors, we recommend preparing the following documentation:
• Archive all files in the O2O/OUTPUT directory, with the exception of the export dump files, e.g.
with a tar command (tar -cf OUTPUT_timestamp.tar OUTPUT)
• Archive all files in the O2O/INPUT directory (tar -cf INPUT_timestamp.tar INPUT)
• Create a short summary about issues during the migration. Especially if you have set jobs
manually to completed, give the customer a short explanation for it
• Create a short summary about the findings in the above discussed files prepared by the
verify job. If there are any entries in the error files, give a short explanation for them.
• If you want to make it audit-ready, burn all this information to a CD


8 Other scheduler parameters


There are some other scheduler options you can use during the migration. If you run the
command ../bin/scheduler -h you will see the complete list of options, as shown in the example below.
-h # help / usage / show versions
-v # help / usage / show versions
-k # kill (= smooth stop of) background scheduler
-K # ask to kill/stop bscheduler if running
-C # Convert/Create transition controlfile
-c # perform consistency check on control files
-t * sorry, not uptodate! * # test connectivity via tnsping, rsh, ...
# requires ../etc/scheduler_conf.txt to be uptodate
-T # send test email or SMS, requires
../etc/alert_cont.txt to be uptodate
-s # skip the next phase (keyword PERFORM no longer
supported!!!)
phases that can be skipped are: PREPARE TRANSITION
PREPARE STREAMS
START STREAMS
-e # switch to EXPORT_ONLY mode and back again
# Note: trans.ctl will be exchanged!
-f # display failed entries in any ctl file
-w # display working entries in any ctl file
-o # show (small) overview of done/failed and todo jobs
-O # show (big) overview including end time estimations
-m <interval> # monitor progress of whole transition
<interval> in seconds 60<= interval <= 300
-M <interval> # monitor progress with big overview
-p <ctlfile1> <ctlfile2> ... # print formatted contents of control files
<ctlfile(n)> can be any abbreviation (no wildcards please) to identify
one or several control files, e.g: tab => table control file
t => table and trans ctl
-r <object1> <object2> ... # reset FAIL status to ToDo | ExpD | TabD
-d <object1> <object2> ... # set FAIL status to Done | ExpP
<object(n)> can be any abbreviation (no wildcards please) to identify
FAILed (!!!) entries within one or more control files, e.g.:
imp => any import job (if status isn't FAIL - nothing happens)
exp0001 => means export job 1
exp0 => any export job 0001...0n
-x <object1> <object2> ... # execute the given job(s), independantly on their
current status;
execution will be done in foreground, no dependancies will be taken care of.
<object(n)> must be unique, if multiple entries will be found, an error
message is given and nothing is being done.

9 Different migration scenarios


After the introduction to the PL/SQL package usage and the scheduling software, we will now
discuss different migration scenarios and migration options which are available with the O2O
migration method. You can use the available options in any combination. If an option or feature
can't be used, you will find information about it in the package log file, written to the O2O/LOG
directory on the source system. Please note that some features are only available if you are using
Oracle 11g on the target.
You will normally find no differences when running migrations with different options. The package
will create more or less the same scripts. The scheduler is adapted to the different options and will
automatically perform the correct actions to run the migration. You can use the same scheduler
commands in all migrations, which makes it easier for you to use the O2O service.


9.1 Using local export


Another migration scenario you will find quite often is the combination of different Oracle
database versions on the source and the target system. Having different Oracle versions has the
consequence that you can't use the export utility of the newer target release against the older source.
If you run an export e.g. from Oracle 11g against an Oracle 10.2 database, you get error messages like
ORA-00942 or ORA-00904. These error messages indicate that the export can't find some internal
database views, or that the database is using a different view definition than expected.
So in this case we have to start the export utility on the source machine instead of running the export
on the target system. To adapt our migration to this combination, we have to change a single
package parameter. Specify for the next run "REXP => FALSE". This will force the package to use
a different syntax in the export files.
Note: Exports between Oracle 10.2.0.4 and 10.2.0.2 are also incompatible. So you have to set the
parameter to "FALSE" for this combination as well.
During the check phase the scheduler will recognize this configuration. So the scheduler will try to
run the exports on the source system instead of executing them on the target system. To enable the
remote execution (on the source host) you have to modify the following parameters in the
scheduler configuration file O2O/etc/scheduler_conf.txt:
• DMP_COPY_METHOD You have to specify this parameter if you are not using an NFS
shared directory for source and target system. If you are using
NFS the dumpfiles will be available on the target system anyway;
in this case set the value of this parameter to NONE.
Otherwise choose the copy command which should be used
by the scheduler. Supported commands are rcp and scp.
Important hint!!!
Do not use an NFS mounted directory for the export. From
our experience it is likely that the export dumpfiles become
corrupted; you will only notice this when you run the import.
So specify a local file system for the export.
• REMOTE_SHELL_CMD The scheduler has to create a session on the source system, to
run the export. You have to specify in this parameter, which
kind of remote shell should be used. Supported commands are
rsh, ssh, remsh
Note: You have to configure the remote shell command to
connect to the source system without a password prompt.
Please refer back to the OS documentation for the setup
• SOURCE_HOST Specify the host name or the IP-address of the source system
• SOURCE_USER User name of the OS user you want to connect to on the
source system. This user must be a valid OS user. Normally
you will use the oracle user of the source database.
• SOURCE_DIR Path definition of the O2O/OUTPUT directory on the source
system. Please specify the absolute path for this parameter.
The scheduler will change to this directory before starting the
export scripts, so we expect to find here the shell scripts
generated by the package for the export.
• SOURCE_ENV When connecting to the source system we have to set up the
correct environment; especially the variables
ORACLE_HOME and ORACLE_SID must be specified. In
SAP systems this is normally done automatically by the
SAP predefined scripts. For other applications it is common
that different applications use a shared ORACLE_HOME,
separated by the environment variable ORACLE_SID. To
pick up the correct environment when logging on, you can
specify with this variable a file name which the scheduler will
execute on the source system to set up the correct
environment.
Note: In SAP environments you will always find two different
login scripts. One is used for the C shell, which is most
common in SAP. The other one is used for setting up the Korn
shell (ksh), which is normally not used. Very often this file is
also not maintained by the customer. So you can run into the
situation that the scheduler fails to execute the scripts on
the source system because the ksh login script is out of
date (the scheduler uses ksh for its remote execution).
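Putting it all together, a remote export setup in O2O/etc/scheduler_conf.txt could look like the
following excerpt; host name, user, paths and the chosen commands are of course only examples and
have to match your environment:

DMP_COPY_METHOD=scp
REMOTE_SHELL_CMD=ssh
SOURCE_HOST=srcdbhost
SOURCE_USER=orasrc
SOURCE_DIR=/oracle/SRC/O2O/OUTPUT
SOURCE_ENV=/oracle/SRC/o2o_env.sh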
Once the setup is finished you can run the migration quite normally. There are no other differences
to the method and steps executed in the first migration. You should perform some tests before
starting the real migration. The most important check is to verify the remote connection:
• Is it really working without prompting for a password?
• Is the environment on the source system defined correctly?

9.2 Changing the Oracle release


With the O2O method it's possible to perform a database upgrade as part of the migration. The O2O
method supports a direct upgrade between two major releases (8.1.7.4 => 10g, 9.2.0.8 => 11.2). To
use the advanced features available in the new Oracle release and to use the correct syntax in the
generated SQL scripts, it's necessary to specify the Oracle target release for the package run. So
you have to set the parameter "DB_TARGET_RELEASE" to the Oracle release installed on the
target machine.
Currently supported are '10.1', '10.2', '11.1', '11.2', e.g. DB_TARGET_RELEASE => '11.2'
The default value for this parameter is '10.2'.
Note: If this parameter isn't specified correctly, you can't make use of all available features. Within
the package there are a number of checks to validate the used parameters. So even if you have table
compression enabled, it will not be used for SAP R/3 if DB_TARGET_RELEASE isn't set to at least
11.1. The same applies to SecureFiles.


9.3 Changing the database <SID>


To create the target database with a different <SID> than the source database you have to specify
the package parameter SID => '<SID>'.
The package will automatically adjust all paths to the new <SID>. If the parameter is not specified,
the database <SID> of the source database is used.
Important hint: For SAP systems we don't change the SAP administrative users (OPS$ users)
anymore. So if you want to change the SID of the target system permanently, you have to define the
user name change on your own in the file "change_user.conf", as described in the next chapter.

9.4 Changing schema names


As part of the migration you can rename one or more schema users. To use this feature you have to
perform the following steps:
• specify the package parameter CHANGE_USER => TRUE
• create the file "change_user.conf" in the directory O2O/INPUT
• list in this file all users whose names should be changed during the migration process.
Upper or lower case doesn't matter.
• each user is listed in a separate line; old and new user name are separated by a ","
character, for example: scott,tiger
Note: Within the configuration files, all lines starting with a ”#” character are handled as
comments and will be ignored when the package reads the file. The “#” character must be on the
first position of the line.
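A small change_user.conf could look like this; the user names are only examples:

# old user,new user
sapr3,sapsr3
ops$oldadm,ops$newadm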
The package will list all users it found to convert in the logfile. A name conversion takes place only
for the users listed in the log file. If the configuration file is not found, the package will also write
this issue to the logfile. So if something went wrong with the name conversion, please check the
logfile for possible problems with the configuration file or typos within the file.
There is no change for the scheduling software when using this option.

9.5 Excluding users from the migration


The package has a built-in list of users which can't be migrated. Normally neither users created by
the database itself (e.g. sys, system, dbsnmp etc.) nor any of their objects will be migrated. All other
schemas found in the database will be migrated by default.
If you want to exclude additional users from the migration (maybe a special tool was used in the
past which is no longer in use), you can advise the package to skip these users and all their objects:
• Set the package parameter "EXCLUDE_USER => TRUE"
• Create a file with the name "exclude_users.conf" in the O2O/INPUT directory
• List all user names not to migrate in this file. Each user must be listed in a separate line


The package will list all excluded users it found in the logfile. As part of the package run the size
allocated by these users is taken into account when the tablespace sizes for the database are
calculated. So the new database is automatically reduced by the size of these users.
Note: If a tablespace becomes empty because a specific user no longer exists, this tablespace
will nevertheless be created with the minimum file size (100 MB). If you don't want the package to
create these tablespaces, you have to change the tablespace layout manually, as described in
chapter 9.6.
There is no change for the scheduling software when using this option.

9.6 Changing the tablespace layout


You can change the tablespace layout of the database during the migration. The package supports
either a fully or a partially customized layout. So it's possible to merge the contents of multiple
source tablespaces into one target tablespace, to remap a single tablespace to a new one, or a
combination of both. For SAP systems the package supports the new SAP standard tablespace layout by
default. To enable tablespace mapping:
• Set the parameter "CHANGE_TBS_LAYOUT => TRUE"
• Create a file with the name "tbsmap.conf" in the O2O/INPUT directory
• List in this file all tablespaces you want to change. Values must be separated by a "," sign
• Each tablespace mapping must be listed in a separate line. The format of the file is
<old tablespace>,<new tablespace>, e.g. psapbtabd,psapsr3
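An illustrative tbsmap.conf that merges several of the classic SAP tablespaces into one new
tablespace could look like this (only an excerpt, not a complete list):

# old tablespace,new tablespace
psapbtabd,psapsr3
psapbtabi,psapsr3
psapstabd,psapsr3
psapclud,psapsr3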
The source tablespace name must be unique, because you can assign a tablespace only once. The
target tablespace can be used multiple times. The package will calculate the resulting tablespace
sizes after the remapping. The result of the mapping is listed in detail in the package logfile.
For a NON-SAP application you must list each tablespace you want to map in this file. If a
tablespace is not listed it remains unchanged.
For SAP systems the behavior is different. If the package doesn't find the above mentioned file but
the parameter is set, it assumes that you want to change to the new SAP tablespace layout using a
single tablespace. So the tablespace layout is converted to this layout automatically. The SAP
default tablespace will be named PSAP<SID>. The names of the other tablespaces will be set
accordingly.
You run the scheduler quite normally. But you will notice a difference in the handling of the
import files. Instead of having one import file and step, the import is now split into five
separate steps. This is necessary because the normal import doesn't support a tablespace mapping.
Instead it would try to create the object in the old tablespace and, if this fails, in the default
tablespace of the user. This will not work for partitioned tables and for tables having LOB columns.
So the scheduler breaks up the import into the following steps:
1. From the export dumpfile an index file is created, containing the DDL commands for the
tables.
2. The scheduler replaces the old tablespace names with the new ones


3. The tables are created empty in the intended tablespaces
4. The table data is imported
5. The indexes and constraint definitions are loaded.
So you will see in the scheduler status line a much higher number of imports than of export
scripts. Everything else runs quite normally, with one exception: there is one script
"<SID>upd_sapdd*" which must be executed manually. For SAP systems we also have to modify
some SAP internal tables to reflect the new tablespace layout. These tables are TAORA, IAORA and
TSORA. The package will create a script named "<SID>upd_sapdd*", which contains all necessary
commands for this step. You have to run it manually. If you omit this step, it's likely that SAP
transports will fail on this system with the error message
"ORA-00959: tablespace 'ABCDE' does not exist"
because SAP tries to create tables and indexes in the old, no longer existing tablespaces.

9.6.1 Specifying a different default tablespace name


As an enhancement to the above described feature you can specify for SAP systems a special
tablespace name, which is used instead of the default name PSAP<SID>. You have to set the
package parameter "TBS_IDENTIFIER => <TABLESPACE NAME>" to enable this feature.
For NON-SAP applications you have to specify a fully customized tablespace layout, because there
is no general rule for handling the tablespace mapping.

9.7 Moving single tables to a specific tablespace


It's also possible to move single tables to a dedicated tablespace. This might be needed if you have
some fast growing tables you want to separate into their own tablespace. Another approach could be
to move several tables into dedicated tablespaces to optimize the tablespace layout. To move one or
more tables:
• Set the package parameter "TABLE_MAPPING => TRUE"
• Create the file tablemap.conf in the O2O/INPUT directory
• List each table you want to move in a separate line. Values must be separated by a "," sign
• The syntax is <owner>,<table name>,<tablespace table>,<tablespace index>
e.g. SAPR3,MSEG,PSAPMSEG,PSAPMSEG
If the tablespace name for the index tablespace is not listed, the package will use the tablespace
name of the table instead. The package will check every listed table for its existence and allocated
space, as well as for dependent objects. If a table is not found, the package will raise an error
for this table and continue processing the further entries. So in the case of an error (e.g. a table is
not placed in the correct tablespace) please check the package log file for possible error messages.
The package will calculate the resulting tablespace sizes automatically. If you place a table in a non-
existing tablespace, this tablespace will be created automatically on the target system.
There are no differences when running the scheduling software.


9.8 Enabling index compression


As part of the migration it's possible to implement index compression on the target system as
described in SAP note #1109743. Index compression will save approx. 30% - 50% of the index
space on the target system. In SAP systems this will result in an approx. 15% smaller system on the
target. Because index compression is a kind of data normalization (and not a compression
algorithm) the customer will benefit from fewer IO operations.
To implement index compression set the package parameter "INDEX_COMPRESSION => TRUE".
As part of the package run, a calculation of the best index compression takes place on all
indexes which are handled by separate scripts. The results of the calculation are stored in the table
"O2O_INDEX" for reuse in further package runs.
Note: The first package run takes significantly longer, because the calculation of the index
compression takes time, especially on the very large tables. We have seen run-times of more than
12h in the first package run for large systems when index compression is enabled. The runtime
normalizes in consecutive package runs, because the results are stored and reused. If you interrupt
the package run, the index calculation will be restarted with the index which was currently under
examination when the run was stopped.
To shorten the run-time of the needed calculation the package has the capability to read
predefined index compression rates from a file. As soon as the package finds a predefined compression
rate for an index, no index analysis is executed. To use this feature perform the following steps:
• for SAP systems create the file sap_indexes.conf in the O2O/INPUT directory
• list in this file the index names and the number of columns to compress. Each index must be
in a single line. Values must be separated by a "," sign. The package will insert the defined
SAP schema name automatically as the owner.
• the syntax is <index name>,<number of columns to compress>, e.g. MSEG~0,3
• for NON-SAP systems create the file indexes.conf in the O2O/INPUT directory
• list in this file the owner, index name and the number of columns to compress. Values must
be separated by a "," sign.
• the syntax is <owner>,<index name>,<number of columns to compress>
e.g. SCOTT,APWDR_PK,1
The definitions are read in with each package run. For your convenience all calculated index
compression rates are saved to the INPUT directory at the end of the package run. For SAP systems
you will find a file named "sap_indexes.conf_<timestamp>", for NON-SAP systems
"indexes.conf_<timestamp>". You can use these files to transfer calculated index compression rates to
other systems, shortening the package run there.

9.9 Enabling table compression


Starting with Oracle 11g it's possible to compress tables in the database to save space. The O2O
package supports you in implementing table compression on the target database.


Note: To use table compression for SAP R/3 systems, the database target release must be >=
11.2. The only supported compression method for SAP is "Compress for OLTP".
To enable table compression on the target perform the following steps:
• set the package parameter "TABLE_COMPRESSION => TRUE"
• create the file "table_compress.conf" in the O2O/INPUT directory
• list in this file the tables you want to compress on the target
• the general syntax is <owner>,<table name>,<compression method>
The package supports all compression methods of Oracle 11.2. The compression method is
specified with the following abbreviations:
B = Basic compression
O = Compress for OLTP
QL = Query low
QH = Query high
AL = Archive low
AH = Archive high
• If the file doesn't exist, the package assumes that you want to compress all tables in the SAP
schema. For NON-SAP systems all tables of all users will be compressed on the target
system.
• The package supports the usage of the wildcard character "*" for the table definition. You
can use all combinations in any order simultaneously. Some examples:
• SAPR3, means all tables in schema SAPR3
• SAPBIW,/BIO/* means all tables whose name starts with the string /BIO/
• sapbiw,*FZ123 means all tables having the string "FZ123" at any position in the name
• sapbiw,/BIC/FFIC means the exact table name
Note: For SAP systems the package will exclude those tables from compression that are listed in SAP
note #1431296. The package will print this information for each table to the log file.
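Assuming that the wildcard selections can be combined with an explicit compression method in the
third column (the general syntax above), a table_compress.conf might look like this; the table
selections are purely illustrative:

# owner,table name,compression method
sapr3,/BIC/F*,O
sapr3,GLPCA,O
scott,SALES_HIST,B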

9.10 Automatic interval range partitioning


It's possible to introduce automatic interval range partitioning for selected objects. This allows
you to partition tables very easily, without the need to set up any maintenance for creating new
partitions on these tables. To use the feature you have to perform the following steps:
• Set the parameter AUTO_PART_TAB => TRUE
• Edit the file "table_part.conf" in the INPUT directory. List each table in a separate line in
the format <owner>,<table>,<column>,<range definition>
Note: At the moment only numeric interval partitioning is supported.
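As a purely hypothetical sketch (table, column and interval value are invented for illustration;
please verify the exact form of the range definition expected by your package version), an entry in
table_part.conf could look like this:

# owner,table,column,range definition
scott,orders_hist,order_id,1000000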


9.11 Secure files in 11g


In Oracle 11g the storage of the data types CLOB and BLOB is replaced by the new SecureFile
implementation, which has become the recommended LOB storage (SAP note #1426979). As part of
the migration you can perform the conversion to SecureFiles. For the conversion we don't need to
specify a package parameter; the conversion is executed automatically on the target database. For
this, set the database parameter "db_securefiles = always" on the target database for the duration of
the migration. You also have to ensure that the target tablespaces of these tables are created with
ASSM enabled. The package will create all tablespaces with ASSM enabled by default.
Oracle will then automatically create all former LOB tables with the SecureFile definition. Once the
migration is completed, remove this parameter from the init.ora file of the database.
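A minimal sketch of this setting, assuming the target instance uses an spfile (with a pfile you would
simply add and later remove the line db_securefiles=always in the init.ora):

SQL> alter system set db_securefiles = 'ALWAYS';
-- after the migration is completed, revert to the Oracle default:
SQL> alter system set db_securefiles = 'PERMITTED';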

9.11.1 Secure files compression


By default the package will also compress all SecureFiles in the target database if the database
target release is >= 11.1. The predefined compression level is "medium".
If you want to disable SecureFile compression set the package parameter:
• "ENABLE_LOB_COMPRESSION => FALSE"
Important hint!!!!
To compress the SecureFile, the table must first be created with a special SQL script.
Normally during the migration we would perform the following steps:
• create the table definition on the target with compression enabled
• load the data with datapump
Unfortunately there seems to be a bug in Oracle datapump (as of 11/2010) which makes
datapump unusable when loading into SecureFiles. So there is a temporary workaround
in place, which uses PL/SQL scripts instead. This workaround allows us to use compression for
SAP tables. For NON-SAP applications we will use a "select into" statement instead. This
workaround will be removed in later releases once the datapump bug is solved.

9.11.2 Long To Lob conversion


As part of the migration it's also possible to perform a LONG => LOB conversion. The LONG and
LONG RAW data types are deprecated and were replaced in Oracle 9i by the CLOB/BLOB data
types. As described in the previous chapter, in Oracle 11g the LOB types are in turn replaced by
SecureFiles. The package supports a LONG => LOB conversion (e.g. when staying on Oracle
10g) as well as a LONG => SecureFile conversion when upgrading to Oracle 11g. Which conversion
takes place depends on the parameters used for the package run:
• If the parameter CONVERT_TO_LOB => TRUE is specified and the target release
(chapter 9.2) is set to '10.2' the LONG column is transformed to LOB
• If the parameter CONVERT_TO_LOB => TRUE is specified and the target release
is set to >= '11.1' the LONG column is transformed to SecureFile


• If the parameter CONVERT_TO_LOB => TRUE is specified, the target release
is set to >= '11.1' and the parameter ENABLE_LOB_COMPRESSION => TRUE is specified,
the LONG column is transformed to a compressed SecureFile
Note: For SAP the package also checks the installed SAP version on the source machine. If the SAP
kernel version isn't at least 7.00 the LONG to LOB conversion is not performed, because for SAP the
minimum requirement for LOB tables is a kernel >= 7.00 (SAP note #646681). For NON-SAP
systems this requirement is ignored, so you can perform a LONG to LOB conversion anytime.

9.12 Enabling tablespace encryption


It's possible to enable TDE (transparent data encryption for tablespaces) on the target database. For
SAP systems please check also SAP note #974876 for exact instructions about the system setup. To
enable it in the package execute the following steps:
• Set the parameter DB_TARGET_RELEASE to '11.1' or higher
• Set the parameter "TABLESPACE_ENCRYPTION => TRUE"
• Specify the default encryption method with "ENCRYPTION_METHOD => 'AES256'".
Note: At the moment only the encryption methods "3DES168", "AES128", "AES192" and
"AES256" are supported.
• Create the file "tbs_encrypt.conf" in the O2O/INPUT directory
• List in this file all tablespaces you want the database to encrypt.
• The syntax is <tablespace name>[,<encryption method>]. The encryption method is
optional. Instead of listing all tablespaces you can also use the wildcard character "*" in this
file to encrypt all tablespaces. You can specify an own encryption method for each
tablespace. If you omit the method, the default encryption method is used instead. If you
specify an unsupported method, the package will also use the default encryption method.
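An illustrative tbs_encrypt.conf might look like this (tablespace names are examples; the last line
gets the default encryption method because no method is specified):

# tablespace name[,encryption method]
psapsr3,AES256
psapsr3usr,AES128
psapsr3700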
As a result you will find the encryption clause in every tablespace creation command. Furthermore
the create database script contains an "alter system" command to enable the encryption on
database level.
Note: You have to prepare the system manually for the usage of TDE (wallet setup). Before you can
create the database, you have to add the wallet key to the create database script.

9.13 Using Oracle DataPump


The package supports the usage of Oracle datapump for the migration. So it's possible to run a
database migration in separate steps:
1. Create a full database export with datapump
2. Transport the dumpfiles (e.g. via transportable disks)
3. Import the dumpfiles into the new system.


When using this configuration O2O behaves like R3load, which also creates intermediate
dumpfiles. You can also replace some of the normal CTAS or PL/SQL scripts with datapump. This
can be helpful if you have a network connection with a high latency. In this case it can be useful
to use datapump jobs for particular scripts to run a local export to dumpfiles. Especially for LOB
tables this can be a helpful option. The implementation supports export/import via dumpfiles as
well as the usage of a network link. Also when using datapump, all package features are fully
supported; there are no restrictions on this.
It's possible to switch on datapump usage for the different job classes independently. You can use
the following parameters:
• USE_DATAPUMP_FOR_LOB => Switches on datapump usage for LOB tables
• USE_DATAPUMP_FOR_PL => Switches on datapump usage for PL/SQL tables
• USE_DATAPUMP_FOR_CTAS => Switches on datapump usage for CTAS tables
• USE_DATAPUMP_FOR_PART => Switches on datapump usage for partitioned tables
• USE_DATAPUMP_FOR_IOT => Switches on datapump usage for IOT tables
After datapump usage is activated for a particular job class, you have to specify the minimum
size a table must have before it is migrated with a datapump job. The size is specified in MB. The
default value for all parameters is 200000 MB.
• DP_LOB_MIN_SIZE => Minimum size for LOB tables incl. LOB segment
• DP_PL_MIN_SIZE => Minimum size for PL/SQL tables
• DP_CTAS_MIN_SIZE => Minimum size for CTAS tables
• DP_PART_MIN_SIZE => Minimum size for partitioned tables
If you set the above listed size parameters to the same value as the parameter “MAX_SIZE”
(chapter 7.5), then only datapump jobs will be created by the package.
There are some general parameters influencing the overall setup of the datapump jobs. For example,
it's possible to use datapump for a direct import via a network link instead of creating dumpfiles.
The following datapump related parameters affect all job classes listed above:
• USE_LINK_FOR_DP => Datapump uses a network link
• DUMPFILE_DIR => specifies the Oracle directory used for the dumpfiles
• DATAPUMP_FILE_SIZE => size of the datapump dumpfiles in MB
Note: You can't have more than 99 dumpfiles for a single
datapump job. Datapump will inform you about that when
file 100 is needed.
The default size of 10000 MB is therefore sufficient to export
a table of up to roughly 980 GB (at most 99 files of 10000 MB
per job). If your table is larger than 980 GB specify a
higher value for this parameter.
Internally the package will switch on datapump usage for some migration tasks, e.g. when
transforming a LOB table into a compressed SecureFile.


9.13.1 Creating a full datapump export


As mentioned earlier you can create a full database dump of the database. To create a full dumpfile
set for the migration either omit the parameter "DB_LINK" or specify it as "DB_LINK => NULL".
In this case the package assumes that you want to run a local export of the database. All parameters
are set internally to the correct values automatically. The package will then create a script set which
dumps out the complete migration to datapump and export dumpfiles. This also allows you to run
an O2O migration between systems which don't have a suitable network connection between them
(e.g. not established, too slow).
For these migrations the package run remains unchanged. You will find all scripts in the same
directories as before. The main difference is running the scheduler for such a migration. First of all
the scheduler has to run on the source system to create the export! So we have to ensure not to run
any import activity here, otherwise your database can become corrupted.
• Install the scheduler software on the source system as described in chapter 7.2
• Run the scheduler with the command ../bin/scheduler -C to convert the migration scripts
• Run the scheduler with the command ../bin/scheduler -e to set the scheduler to the export
only mode. Don't omit this step: setting the scheduler to the export only mode prevents the
scheduler from running any import activity on the system.
• Ensure that there is enough space in the file system the dumpfiles are written to.
• Set the Oracle directory "DUMPFILE_DIR" to the correct location.
• Create the link for the O2O/DUMP directory to the same directory as for the DataPump
directory. This ensures that you have all dumpfiles at the same location.
• Now run the scheduler as usual on the source system. You should not place the OUTPUT
directory on the same transportable medium (e.g. NAS filer) you are using for the
dumpfiles. Because of the heavy write load during the migration you can see a significant
performance decrease for the scheduling process. It's recommended to use a local file
system for the scheduling process. Keep in mind that you don't need much space here,
because the dumpfiles are created at a different destination.
• Once the export is complete, run the file "do_export_checks.sh" in the directory O2O/check.
This script collects the information about the finished export for creating the evidences
later.
• Then copy all files from O2O/OUTPUT, O2O/LOG, O2O/DUMP, O2O/check and
DUMPFILE_DIR to the transportable medium. You have to transfer all files to the target
system.
• Prepare your target system as described in chapter 7.2.
• Create the database with the prepared scripts.
Note: When using DataPump imports from dumpfiles you need a DUMPFILE_DIR directory
on the target as well. The package will create this directory automatically during the
preparation phase. The path used is the same as on the source system. If you create the
database with your own scripts, you have to create this directory manually.
• Copy the O2O/OUTPUT, O2O/LOG and O2O/check directories to their designated destinations.
• Run the command ../bin/scheduler -e to switch the scheduler back to normal operations.
• Run the command ../bin/scheduler to start the import on the target system.
• When the import is finished, run the script "do_import_checks_1.sh" in the directory
O2O/check to compare the source and the target system on object level. These checks
compare the list of the exported objects with a list of the imported objects, based on the log
files written during the export/import steps. This corresponds to the check of the log files
created by the check scripts.
Note: The scheduler uses the host name as part of the comparison string. If the host
names are different, run the script "do_export_checks.sh" once again beforehand.
• If there are no missing objects you can proceed with the post- and final migration phase.
• You can also create a summary of the number of rows exported and imported. This
evaluation is extracted from the log files of all export and import scripts present in the
OUTPUT directory. This applies to normal export/import as well as DataPump log files.
You can create this summary by running the scripts "show_logs_imp.ksh" and
"show_logs_exp.ksh" in the directory "/O2O/Evaluation". After this you can compare both
results with the script "aggr.ksh" in the same directory.
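Putting the scheduler commands above together, the overall flow could look like the following
shell sketch (the directory path is an example; the scheduler options are the ones described in the
steps above):

# on the source host: export-only run
cd /oracle/O2O/OUTPUT            # example location of the OUTPUT directory
../bin/scheduler -C              # convert the migration scripts
../bin/scheduler -e              # switch to export-only mode
../bin/scheduler                 # run the export
(cd ../check && ./do_export_checks.sh)

# on the target host, after transporting all files: import run
cd /oracle/O2O/OUTPUT
../bin/scheduler -e              # switch back to normal operations
../bin/scheduler                 # run the import
(cd ../check && ./do_import_checks_1.sh)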

9.13.1.1 Errors when using NFS for DataPump on AIX (Metalink# 420582.1)

When you are using a NAS device as destination on AIX for the datapump dumpfiles, the export
may fail with the error messages:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining Scoring Engine options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "<dir path>/full.Wed.dmp"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct
options
Additional information: 6

The dumpfile directory is located on a NFS file system.

9.13.1.1.1 Cause

This seems to be a limitation on AIX platforms. All mount point information is supposed to be
present in the '/etc/filesystems' file located on the system. When mount information is not present
in this file it can lead to errors.
The behavior is described in base bug:
BUG 4719334 NFS FILE SYSTEM WHERE THE FILE IS CREATED OR RESIDES IS NOT
MOUNTED ORA-27054


closed as 'OS vendor problem', specific to AIX platforms.

Another bug:
BUG 4968488 NEED A WAY TO AVOID FATAL ORA-27054 ERRORS FROM SKGFIFI FOR
NFS FILES
offers a possibility to work around this.

9.13.1.1.2 Solution

Disallow the mount point checks with the event
"10298 trace name context forever, level 32"
Event 10298 disables the NFS checks in the 10.2 code.
To implement the solution, execute one of the following steps:
1. Don't use NFS to create the files.
2. Add the NFS file system to '/etc/filesystems' (this is also applicable for 11g).
3. Edit the init.ora and add the line
event="10298 trace name context forever, level 32"
then bounce the database, if it already exists, so that the new parameter is active, and retry
the failed operation.
4. Run the command: alter system set events='10298 trace name context forever, level 32';
References
NOTE:329942.1 - ORA-27054 while Relocating datafiles to an NFS Mount Point

9.14 Optimizing target database size


The package performs a size calculation on the tablespaces as well as on all tables to migrate. For
tablespaces the allocated space and the available free space are taken into account. Tables are
calculated on the basis of the allocated space in the Oracle view "DBA_SEGMENTS". If a lot of
data was deleted from a table, it might be that the table will become much smaller on the target
system. Especially when using index or table compression it's impossible to make an exact
prediction about the target database size. The package approach is to avoid, under any
circumstances, calculating the target system too small. So we recommend using the first test
migration to ascertain the final database size on the target.
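As an illustration of this calculation, the allocated size of the largest tables can be checked on the
source system with a query like the following (a plain DBA_SEGMENTS query, not part of the
package):

SELECT owner, segment_name, ROUND(SUM(bytes)/1024/1024) AS size_mb
  FROM dba_segments
 WHERE segment_type LIKE 'TABLE%'
 GROUP BY owner, segment_name
 ORDER BY size_mb DESC;

Remember that this reflects allocated space only; deleted rows and the effect of index or table
compression on the target are not visible here.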

9.14.1 Using the autoextend feature


The package offers you the possibility to activate the database autoextend feature for the datafiles.
For this set the parameter


• “USE_AUTOEXTEND => TRUE”


When using this feature the package will perform all size calculations as normal. But instead of
creating all tablespaces and datafiles in their calculated final size, all datafiles will be created with
an initial size of 1 GB only. Datafiles can grow up to their calculated maximum size in steps of
1 GB. This has two advantages:
1. The target database can be created much faster, because instead of creating a 2 TB database,
you only have to create 3% - 5% of this size.
2. After the migration you will see the exact size of the database on the target. Most of the
datafiles will fit exactly, so there is no space wasted on the file system. This will give you a
good hint about the needed settings for the final migration.
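The resulting datafile definitions then follow the usual autoextend pattern, roughly like this sketch
(tablespace name, path and maximum size are examples; the real values are calculated by the
package):

CREATE TABLESPACE psapsr3
  DATAFILE '/oracle/C11/sapdata1/sr3_1/sr3.data1' SIZE 1000M
  AUTOEXTEND ON NEXT 1000M MAXSIZE 10000M
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;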

9.14.2 Setting the maximum data file size


You can specify a maximum datafile size, which should be used for the datafiles. The parameter is
specified in MB. This is the maximum size for a single datafile. If the calculated size of a datafile is
smaller, the smaller value is used to specify the maximum size for this particular file. The default
value for this parameter is 10240 MB.
• DB_FILE_SIZE => 20480
When using the autoextend feature of the package this parameter defines the maximum size a
datafile can grow up to. A datafile can allocate up to 4 million blocks, so for an 8k blocksize the
maximum datafile size can become 32 GB. The package will limit the maximum datafile size to
32 GB. A single tablespace can allocate up to 1022 datafiles.
The package doesn't support large (bigfile) datafiles. But you are free to create a fully customized
database on your own if you prefer a different setup.
Note: With respect to performance it's better to use more datafiles instead of one very large one.
Especially on HP-UX you have to use multiple files for the UNDO tablespace and the most loaded
tablespaces. So instead of creating one 20 GB datafile for UNDO on HP-UX you should create five
files of 4 GB each. Otherwise you can run into "Checkpoint not complete" situations on those
systems.

9.14.3 Specifying a minimum data file size


Even if a tablespace is completely or almost empty, the package will create it with a minimum file
size of 100 MB. If you want to use a different minimum size, you can set it with this parameter.
• FILE_MIN_SIZE => 1000
All datafiles will be created with at least this minimum size, regardless of how much space was
allocated on the source system for the tablespace or whether the expected size is below the
specified size.

9.15 Optimizing network usage


By default the package uses a single database link for the migration. But the available network
bandwidth can become a bottleneck if you are planning to run a high-speed migration. You can


calculate for a single 1 GBit network connection a transfer rate of approx. 180 GB per hour. This is
the amount of table data which can be copied in one hour. If you need a higher throughput, you have
to use multiple network lines. The package has parameters to use different network adapters for the
available job classes:
• DB_LINK => General database link, which must be specified
• DB_LINK_CTAS => Link used by CTAS jobs
• DB_LINK_LOB => Link used by CTAS jobs having LOB data types
• DB_LINK_PL => Link used by PL/SQL jobs
• DB_LINK_EXP => Link used by export/import jobs, but not by DataPump jobs
• DB_LINK_PART => Link used by partitioned table jobs
• DB_LINK_MCP => Link used by MCP jobs (see also chapter 10.1)
The values for these parameters are valid network alias entries from the tnsnames.ora file. You
can use any of these additional database links to distribute the network load on several network
connections. You have to set up your tnsnames.ora file on the target and your listener.ora file on the
source to allow multiple network connections.
When using the network, not only the network speed is an important criterion, but also the network
latency. For the migration Oracle Net functionality is used. Oracle Net is a secured protocol using
data acknowledgment and handshaking during the transmission to ensure data consistency. So if a
packet is sent from the source, the target has to acknowledge that the packet was received. If there
are any errors, the packet will be retransmitted. If your network has a high latency, a lot of
retransmissions are possible.
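For example, two tnsnames.ora aliases resolving to two different network interfaces of the source
host could look like this sketch (host names, SID and alias names are examples):

C11_NIC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sourcehost-nic1)(PORT = 1521))
    (CONNECT_DATA = (SID = C11)))

C11_NIC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sourcehost-nic2)(PORT = 1521))
    (CONNECT_DATA = (SID = C11)))

You could then set, for example, DB_LINK_CTAS => 'C11_NIC1' and DB_LINK_LOB => 'C11_NIC2'
to spread the CTAS and LOB traffic over the two interfaces; the listener on the source has to listen
on both addresses.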

9.16 Configuring Session Data Unit


Under typical database configuration, Oracle Net encapsulates data into buffers the size of the
session data unit (SDU) before sending the data across the network. Oracle Net sends each buffer
when it is filled, flushed, or when an application tries to read data. Adjusting the size of the SDU
buffers relative to the amount of data provided to Oracle Net to send at any one time can improve
performance, network utilization and memory consumption.
The amount of data provided to Oracle Net to send at any one time can be referred to as the
message size. Oracle Net assumes by default that the message size will normally vary between 0
and 2048 bytes, and infrequently, will be larger than 2048. If this assumption is true, then most of
the time, the data will be sent using one SDU buffer. This assumption is why the default value for
the SDU size is 2048.
Consider changing the SDU size when the predominant message size is smaller or larger than 2048.
The SDU size you choose should be 70 bytes larger than the predominant message size, as long as
the maximum SDU size is not exceeded. If the predominant message size plus 70 bytes exceeds the
maximum SDU, then the SDU should be set such that the message size is divided into the smallest
number of equal parts where each part is 70 bytes less than the SDU size.


The SDU size can range from 512 bytes to 32767 bytes. If the DEFAULT_SDU_SIZE parameter is
not configured in the sqlnet.ora file, then the default SDU for the client and a dedicated server
is 2048 bytes, while for a shared server the default SDU is 32767 bytes.
The actual SDU size used is negotiated between the client and the server at connect time and will be
the smaller of the client and server values. As such, configuring an SDU size different from the
default requires configuring the SDU on both the client and server computers, unless you are using
shared servers, in which case only the client needs to be changed because the shared server defaults
to the maximum value.
For example, if the majority of the messages sent and received by the application are smaller than
8K in size, taking into account about 70 bytes for overhead, setting the SDU to 8K will likely
produce good results. If sufficient memory is available, using the maximum value for the SDU will
minimize the number of system calls and overhead for Oracle Net Services.

9.16.1 Clientside Configuration


To configure the client, set the SDU size in the following places:
• sqlnet.ora File
For global configuration on the client side, configure the DEFAULT_SDU_SIZE parameter
in the sqlnet.ora file: DEFAULT_SDU_SIZE=32767
• Connect Descriptors
For a particular connect descriptor, you can override the current settings in the client side
sqlnet.ora file. In a connect descriptor, you specify the SDU parameter for a description.
sales.us.acme.com=
(DESCRIPTION=
(SDU=11280)
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=sales.us.acme.com))
)
SDU size applies to all Oracle Net protocols.

9.16.2 Serverside Configuration


To configure the database server, set the SDU size in the following places:
Configure the DEFAULT_SDU_SIZE parameter in the sqlnet.ora file:
DEFAULT_SDU_SIZE=32767
• If using dedicated server processes for a database that is registered with the listener through
static configuration in the listener.ora file, you can override the current setting in sqlnet.ora:
SID_LIST_listener_name=
(SID_LIST=
(SID_DESC=
(SDU=8192)
(SID_NAME=sales)))
Note that the smaller value of the SDU size and the value configured for the client will win.


9.17 Configuring I/O Buffer Space


Reliable network protocols like TCP/IP buffer data into send and receive buffers while sending and
receiving to or from lower and upper layer protocols. The sizes of these buffers affect network
performance, as these buffer sizes influence flow control decisions.
The RECV_BUF_SIZE and SEND_BUF_SIZE parameters specify sizes of socket receive and send
buffers, respectively, associated with an Oracle Net connection.
To ensure the continuous flow of data and better utilization of network bandwidth, specify the I/O
buffer space limit for receive and send operations of sessions with the RECV_BUF_SIZE and
SEND_BUF_SIZE parameters.
For best performance, the size of the send and receive buffers should be set large enough to hold all
of the data that may be sent concurrently on the network connection. For a simple database
connection, this typically maps to the OCI_PREFETCH_MEMORY size.
Setting the SEND_BUF_SIZE and RECV_BUF_SIZE to at least the bandwidth-delay product, will
insure that when large amounts of data are being sent that the network bandwidth will be optimally
utilized.
For example, suppose that the network link between a primary database and a standby database has
a round trip time of 34 ms and a bandwidth of 15 Mbps. The bandwidth-delay product of this
network link is approximately 64 KB. The largest message used to transfer redo data between a
primary database and a standby database is 1 MB, so the optimum value for the SEND_BUF_SIZE
and RECV_BUF_SIZE parameters in this scenario is therefore 1 MB; however, a setting of at least
64 KB should be sufficient to optimize use of the available bandwidth.
NOTE:
The actual value of the send_buf_size and recv_buf_size parameters may be less than the value
specified either because of limitations in the host operating system or due to memory constraints.
It is important to consider the total number of concurrent connections that your system must
support and the memory resources that are available because the total amount of memory that will
be consumed by these connections will depend on both the number of concurrent connections and
the size of their respective buffers.
For most network protocols, ensure that the RECV_BUF_SIZE parameter at one end of the network
connection, typically at the client, is equal to the value of the SEND_BUF_SIZE parameter at the
other end, typically at the server.
Specify these parameters in the sqlnet.ora file or in the connect descriptor on the client side and the
listener.ora, sqlnet.ora files on the server side.
Note:
Use these parameters with caution as they affect network and system performance. The default
values for these parameters are operating-system specific. Following are the defaults for the
Solaris 2.8 Operating System:

• SEND_BUF_SIZE: 16,384 bytes


• RECV_BUF_SIZE: 24,576 bytes

The default size for both SEND_BUF_SIZE and RECV_BUF_SIZE for Solaris 2.9 is 49,152 bytes.

These parameters are supported for TCP, TCP/IP with SSL, and SDP protocols. Additional
protocols may support these parameters on certain operating systems. Refer to operating-system
specific documentation of Oracle Net for additional information.

9.17.1.1 Determining Bandwidth Delay Product

Bandwidth-delay product is the product of network bandwidth and the round trip time of data going
over the network. The easiest way to determine the round trip time is to use a command such as
ping from one host to another and use the response times returned by ping.
For example, if a network has a bandwidth of 100 Mbps and a round trip time of 5 ms, the send and
receive buffers should be at least (100 x 10^6) * (5 x 10^-3) bits, or approximately 62.5 kilobytes.
For a better understanding of the relationships among the units and factors involved, refer to the
following equation:
100,000,000 bits/second x (1 byte / 8 bits) x (5/1000 seconds) = 62,500 bytes

9.17.1.2 Clientside Configuration

To configure the client, set the buffer space size in the following places:
• sqlnet.ora File
For global configuration on the clientside, configure the sqlnet.ora file. Setting just the
“RECV_BUF_SIZE” parameter is typically adequate. If the client is sending large requests,
then also set the SEND_BUF_SIZE.
RECV_BUF_SIZE=11784

9.17.1.3 Serverside Configuration

Because the database server writes data to clients, setting just the SEND_BUF_SIZE parameter on
the server side is typically adequate. If the database server is receiving large requests, then also set
the RECV_BUF_SIZE parameter. To configure the database server, set the buffer space size in the
listener.ora and sqlnet.ora files.
In the listener.ora file, you can either specify the buffer space parameters for a particular protocol
address or for a description.
LISTENER=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)(PORT=1521)
(SEND_BUF_SIZE=11784)
(RECV_BUF_SIZE=11784))
(ADDRESS=(PROTOCOL=ipc)(KEY=extproc)
(SEND_BUF_SIZE=11784)
(RECV_BUF_SIZE=11784)))
LISTENER2=


(DESCRIPTION=
(SEND_BUF_SIZE=11784)
(RECV_BUF_SIZE=11784)
(ADDRESS=(PROTOCOL=tcp)(HOST=sales-server)(PORT=1521)))

In the sqlnet.ora file you can set the values for all connections:
RECV_BUF_SIZE=65536
SEND_BUF_SIZE=65536

10 Enhanced O2O functionality


Besides the described basic functionality there are further methods available to handle more
complex migration scenarios. One common challenge is the occurrence of very large tables on the
source system. Before taking the enhanced methods into account, you should calculate the
approximate migration time for the large tables.
For a single job, regardless of whether it's CTAS, CTAS LOB or PL/SQL, you can assume an
average throughput of 20 GB/hour. So a 100 GB table will be copied in 5 hours. If the table is
significantly larger, you may use the MCP approach of the O2O method.

10.1 Using MCP transfer method for large tables


With this method it's possible to copy tables with multiple jobs in parallel from the source system to
the target system. The MCP method can be enabled for each job class separately. Use the following
parameters to enable MCP:
• MCP_DIR => Specifies the Oracle directory for the MCP scripts.
• USE_MCP_CTAS => Enable MCP for CTAS class jobs
• USE_MCP_PL => Enable MCP for PL/SQL class jobs
• USE_MCP_PART => Enable MCP for partitioned class jobs
• PAR_MIN_SIZE_CTAS => Specifies the minimum size a table must have, before
MCP is used for this job class. This includes LOB tables
as well
• PAR_MIN_SIZE_PART => Specifies the minimum size a table must have, before
MCP is used for this job class
• PAR_MIN_SIZE_PL => Specifies the minimum size a table must have, before
MCP is used for this job class
• USE_MCP_TEMP_TABLE => If set to TRUE (default value), the package will use
temporary tables on the target system to hold the single
parts of the parallel copy process. If set to FALSE there
will be multiple insert operations on the table. The
recommended method is using temporary tables.
• DB_LINK_MCP => Link used by MCP jobs (see also chapter 9.15)
• PARTITION_VALIDATION => If set to TRUE (default FALSE), the partition contents of
partitioned tables are validated in the final copy step


When using MCP the package will create a set of scripts for each table, which is copied with the
MCP method. These scripts are placed in the directory MCP_DIR (we will name this now MCP
directory). Please make sure, that the Oracle directory MCP_DIR is created, before running the
package.
Each table is copied with multiple scripts, whereas each script copies only a unique subset of the
table: the rows stored in a single datafile. So you will find as many scripts for a given table as the
number of datafiles used by the table. If the parameter "USE_MCP_TEMP_TABLE" is set to TRUE,
each script will use its own table on the target for copying data. These "temporary" tables are
created in a separate tablespace named "O2O_TEMP". Each script uses a CTAS command, which
reduces rollback time. Even if a script has failed, you can simply restart it. The tablespace
"O2O_TEMP" can be dropped once the migration is completed.
If the parameter is set to FALSE, there will be multiple insert operations on the target table instead.
Normally these insert operations are much slower than using intermediate temporary tables; we
have seen a rate of up to 5 GB/hour for a single insert. But this approach might be an alternative for
large LOB tables, because for LOB tables the final copy step sometimes takes much longer than
copying the intermediate tables. In that case you have to run the MCP with a very high parallel
degree (> 30) to get an acceptable throughput.
Using MCP for PL/SQL tables is different. We don't use temporary tables here, but perform an
insert operation directly into the target table. Each script will commit only at the end of the session,
if there was no error. If a script fails, the rollback needs nearly the same amount of time as the insert
before. So be patient before starting any activity.
Important Hint!!!!
Under no circumstances may a script be started twice. This will result in duplicate rows,
making the migration for this table unusable. In the worst case you have to repeat the
migration, because there isn't enough time to run the scripts for the failed table once
again.
When all scripts are finished (all table parts are copied), the table data on the target system is
copied into the final table. Although this is an additional copy step, the overall performance is
mostly influenced by the time used to copy the table from the source system to the target system.
You can calculate for each separate script an average throughput of 20 GB/hour. If you run 10 MCP
scripts of a table in parallel you can copy up to 200 GB/hour of this table. The final copy step can
also make use of the PQ feature, so that you can finish a 300 GB table within 3 hours.
When running MCP for a partitioned table, the final step can avoid any copy operation. For
partitioned tables we use the command "exchange partition", which simply exchanges the metadata
of a temporary table with the matching partition in the copied table. If partition validation is
switched off, this step takes only a few seconds, whereas with enabled partition validation it takes
much more time to complete, because all rows within a partition are checked for their correct
location. Because the package copies the data strictly by the partition boundaries, there is no need to
validate the rows at this point, but some customers like to have it, at least for the test migration.
When running MCP for partitioned or PL/SQL tables, there is no need to create a separate
temporary tablespace; the intermediate tables will later become the final partitions, and the
intermediate table is dropped automatically as part of the exchange script.
Otherwise it's up to you to drop the tablespace "O2O_TEMP" once the migration is completed.


10.1.1 Running MCP migration


Besides the well-known directories you also have to copy the O2O/MCP_OUTPUT directory to the
target machine. From the scheduler architecture point of view, you will now run two scheduling
processes on the machine: one running in the O2O/OUTPUT directory as before, the other one
running the scripts in the MCP directory. The handling of the scheduler is the same as in the
O2O/OUTPUT directory, but you have to stay in the MCP directory when issuing any command for
the MCP scheduling.
Before you can start the scheduler in the MCP directory you have to set up some scheduler
parameters (see the sketch at the end of this section):
• MCP_DIR specifies the location of the MCP directory, e.g.
"/oradump/CCR/MCP_OUTPUT"
• NUM_JOBS_XCHG number of parallel jobs for exchanging partitions or for the
final copying of CTAS tables. Don't try to
• NUM_JOBS_MCP maximum number of concurrent MCP jobs
• NUM_JOBS_MCP_<USER TABLE> Replace <USER TABLE> with the concatenated
user name and table name; the table sapr3.mseg
must be specified as "SAPR3MSEG". The
parameter must be specified for each table copied
with the MCP method. The value of this parameter
defines the number of jobs running in parallel for
this table. The sum of all table jobs can't exceed the
value specified in NUM_JOBS_MCP.
Once the parameters are defined you first have to convert the scripts in the MCP directory with
the ../bin/scheduler -C command. The scheduler will check for the tables present in the MCP
directory and will check for the correct parameter settings in the scheduler_conf.txt file. If
everything is set up correctly, the conversion is performed. Run the conversion in the MCP
directory after the conversion in the O2O/OUTPUT directory.
Once the conversion is completed, first start the scheduler in the O2O/OUTPUT directory before
starting the MCP scheduler in the MCP directory with the same command. You can increase or
decrease the number of running jobs by modifying the above mentioned scheduler parameters
during the run time.
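To make this concrete, the MCP related entries in the scheduler_conf.txt file could look like the
following sketch, assuming the same simple name/value notation as for the other scheduler
parameters (the path, the job counts and the tables SAPR3.MSEG and SAPR3.GLPCA are examples
only):

MCP_DIR=/oradump/CCR/MCP_OUTPUT
NUM_JOBS_MCP=8
NUM_JOBS_XCHG=2
NUM_JOBS_MCP_SAPR3MSEG=4
NUM_JOBS_MCP_SAPR3GLPCA=4

With these values at most 8 MCP jobs run concurrently, 4 of them for SAPR3.MSEG and 4 for
SAPR3.GLPCA.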

10.1.2 Recommendation for handling large objects


Here are some general recommendations for handling very large tables (> 200 GB):
• Before using MCP calculate the available time frame and the calculated migration time
(incl. time for index creation) for these objects. Assume a transfer rate of 20 GB/ hours for a
single table using CTAS or PL/SQL
• Usage of MCP for partitioned tables is recommended. Check the number of partitions used
by the partitioned tables. If all data is located in a single partition, MCP will not work,
because each partition is copied in a single job.


• When running MCP jobs, don't start too many of the MCP jobs at the same time. You will
slow down other migration processes if the source system is limited on IO operations.
• Consider using a separate network link for the MCP jobs.
• LOB tables are known to be problematic when using MCP. Consider setting
"USE_MCP_TEMP_TABLE => FALSE" for these tables, to avoid the last copy step.
• Consider using DataPump instead for these large objects. Using a network link the
throughput can be doubled for some tables (40 GB/h). When performing a local export
using dumpfiles, DataPump can run in parallel. This works for the import as well.
• For large LOB objects this is sometimes the only reasonable approach.

10.1.3 Known limitations


The MCP method has some known limitations you should be aware of before considering this
method.
• MCP supports only range partitioned tables. List or hash partitioned tables are not
supported. The package will switch to normal mode (either CTAS or DataPump) if a table is
list or hash partitioned.
• If a table is composite partitioned, the MCP scripts for the sub-tables will have an incorrect
syntax.

10.2 Running a migration on the same host


Normally you will run migrations between different servers, e.g. when performing a platform
change or when the hardware is upgraded. But there can be the situation that a customer wants to
perform a database reorganisation or a database upgrade, but is either not willing or not able to use
a second server for it (normally we recommend splitting an existing cluster for the duration of the
migration, to use one of the servers as source and the other one as target).
The difficulty in such a scenario is how to handle the different mount points for the "sapdata"
directories and the two Oracle database instances running concurrently on the same server. We
have to avoid, under any circumstances, mixing up both environments.
Important hint!!!!
When running this migration scenario, you need deeper know-how in running O2O
migrations. It's not suitable for beginners. Many manual manipulations on the scripts
and/or the server are needed, so there is a good chance to corrupt source and target
system simultaneously in the case of an error.
It's possible to run the migration with two different approaches. For both you either have to modify
package scripts or you have to reconfigure the target database after the migration. Which one you
use depends on which method you feel more secure with. In both methods you have to modify
scripts or file names anyway.


10.2.1 Approach 1: Modify package script files


For this approach you have to modify some of the package generated scripts, used for creating the
database and the init.ora file of the target instance. We will make use of the “SAPDATA_OFFSET”
parameter, which allows us to use a different starting point for numbering the sapdata directories.
So we can easily separate the datafiles from the old and the new systems by using different mount
points.
• Specify the same <SID> for the target system as used for the source system
• Set the parameter “SAPDATA_OFFSET” to a value, which is beyond the number of your last
sapadata directory, e.g. SAPDATA_OFFSET => 500
• Set the parameter DB_LINK to the SID of your source system
• Perform a normal package run to create the migration scripts
• Open the create database script and set the database <SID> to e.g. “XYZ”
• Change in the header of the trans.ctl file the database <SID> into “XYZ”
Change the path to the redo logfiles in this script as well. You should use an own filesystem
for the redo log files of the new system
• Check the cr_dir script for the correct sub-directory names for the control files, or create
these sub-directories manually
• Copy the init.ora file from the old database to initXYZ.ora
• Modify the path names to the control files and set the database name accordingly. Modify
also the other database parameters for the migration
• When logging in, don't forget to set the environment variable “ORACLE_SID” to “XYZ”,
to ensure that you are connecting to the target system.
• After the migration run the command "alter database backup controlfile to trace". Modify
the generated trace file and replace the database name "XYZ" with the original <SID> (see
the sketch after this list).
• Stop both databases on the server
• Start the new system with the command “startup nomount”
• Create the new control files and open the database.
• After the database was successfully opened, the migration is finished. You can now remove
the old database (database files) from the system.
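A sketch of this rename step in SQL*Plus could look as follows (the trace file path is an example;
when the database name is changed, the CREATE CONTROLFILE statement in the edited trace
file has to use the SET DATABASE "<SID>" ... RESETLOGS form):

ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/cr_ctl.sql';
-- edit /tmp/cr_ctl.sql and replace the database name "XYZ" with the original <SID>
SHUTDOWN IMMEDIATE
-- ORACLE_SID must still point to the new instance
STARTUP NOMOUNT
@/tmp/cr_ctl.sql
-- the edited script recreates the control files and opens the database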

10.2.2 Approach 2: Modify server side files


If you are more familiar with OS commands you may prefer the following method.
For this approach we will use a completely different SID for the migration. The file names and
paths are changed at the end of the migration.


• Specify a different <SID> (e.g. “XYZ”) for the target system as used for the source system.
• Set the parameter DB_LINK to the SID of your source system
• Perform a normal package run to create the migration scripts
• Copy the init.ora file from the old database to initXYZ.ora
• Modify the path names to the control files and set the database name accordingly. Modify
also the other database parameters for the migration
• When logging in, don't forget to set the environment variable “ORACLE_SID” to “XYZ”,
to ensure that you are connecting to the target system.
• After the migration run the command "alter database backup controlfile to trace". Modify
the generated trace file and replace the database name "XYZ" with the original <SID>.
Also change all other occurrences of "XYZ" to the original <SID>.
• Stop both databases on the server
• Dismount the old database files
• Change the path from the new database files to the old <SID>
• Change the mount points on the OS level to the new paths
• Mount the new datafiles to the correct location
• Start the new system with the command “startup nomount”
• Create the new control files and open the database.
• After the database was successfully opened, the migration is finished. You can now remove
the old database (database files) from the system.

10.3 Running the scheduler on a remote host


Normally the scheduler runs on the target system to execute all migration scripts. This works
fine if you are running UNIX as a target system, but when using Windows for the target system it
can become problematic to run the scheduling software on the Windows machine. Windows
doesn't support the ksh shell natively, so to run the scheduler scripts on a Windows machine you
have to use a UNIX emulator (e.g. Cygwin). Unfortunately there are some problems with the
current Cygwin emulations: besides a bad scheduler performance, we have seen "dead"
background schedulers not starting any script for a long time.
So if you have a windows target system you can run the scheduling software also on a different
host. Prerequisites are:
• You need an installed UNIX or Linux system on this host
• You need installed Oracle client software, incl. Sqlplus on this machine
• You must have a working network connection to source and target machine


• You must be able to connect to source and target machine with sqlplus. So your Oracle net
must be configured to allow this connection.
• Sufficient disk space to store the export dump files.
• All other configurations must be in place as for a normal migration
• Specify the package parameter “REMOTE_SCHEDULER => TRUE”
Note: You have to run the preparation phase manually on the target machine
The package will use the system/ manager account to connect to the target system. So please don't
change it on the target system.
The scheduling then runs as normal. The only difference is that all scripts are started on the
scheduling host instead of the target machine. You can even use your own laptop for the
scheduling, as long as the above mentioned prerequisites are fulfilled.

10.4 Running scheduler in Demo-Mode


For demonstrations or for any other test purposes you can run the scheduling in a special
demonstration mode. In this mode the scheduler will not really execute any migration script, but
will simulate the migration. The listed migration scripts are called, but not executed. This is helpful
if you either want to test the stability of the environment or if you want to demonstrate the general
migration process at a customer site.
To run the scheduler in the demonstration mode, run the command "echo N > ./DemoMode" in the
OUTPUT directory. If this file is present in the OUTPUT directory, no migration script is really
executed.

11 Known limitations

11.1 Problems with SAP transaction DB02


After the migration, the SAP transaction DB02 displays missing indexes and tables. This is related
to the usage of synonyms in the SAP schema user, pointing to outdated data dictionary views.
These synonyms were created to work around performance problems in the data dictionary of
Oracle 9i. Normally you will find the following synonyms in the SAP schema:
• USER_SEGMENTS
• USER_LOBS
• USER_INDEXES
• USER_IND_COLUMNS
To solve this problem, drop the above listed synonyms with the command
“drop synonym <user>.<synonym name>”
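For example, if the SAP schema user is SAPR3, the commands are:

drop synonym sapr3.user_segments;
drop synonym sapr3.user_lobs;
drop synonym sapr3.user_indexes;
drop synonym sapr3.user_ind_columns;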


Re-run the SAP transaction DB02 (check database) to refresh the display in DB02. The SAP note
#519448 describes this problem.

11.2 Database links


The user names used for database links are no longer available. You have to edit the script for
creating the database links manually.

11.3 Unsupported objects


At the moment (0.954) XML tables and XDB schema data are not exported. You have to create
your own DataPump export/import for these tables. This will be fixed in a future release.

11.4 Oracle 11g


Packages with a release number < 0.954 will not run on 11g. The package run will abort with a run
time error.
The export utility shipped with Oracle 11.2.0.1 creates unusable export files if LOB objects are
exported (wrong syntax for the create table command). You have to use DataPump for the normal
export/import scripts as well, if your source system is running on 11g and one of the exported
tables contains a LOB column.

11.5 Scheduler support for Datapump


At the moment the scheduler doesn't support all possible DataPump migration configurations.
Please have a look into the version specific documentation to check which configurations are
supported. As of January 2011 the following configurations are fully supported by the scheduler:
• Using DataPump as a replacement for CTAS, LOB and PL/SQL jobs, with or
without using a network link for the DataPump jobs
• Using DataPump export/import with dumpfiles, when using the special
export/import mode of the scheduler
The scheduler still lacks support for the export/import of tables as a replacement for the normal
export/import utilities. This affects only Oracle 11g databases on the source. For 10g databases the
normal export/import can still be used.

11.6 Lob Segments


By default the LOB segments of a table are placed in the same tablespace as the table if the CTAS
method is used. If the LOB segments have to be stored in a different tablespace, you have to use
DataPump for the migration of these tables. If your source system is running on Oracle 9i use the
following workaround:
• Set the package parameter CHANGE_TBS_LAYOUT => TRUE


• Set up the tablespace mapping in the file tbsmap.conf (see the example after this list). If
you don't want to actually change the tablespace mapping, then specify all tablespaces with
the mapping <Old tablespace>,<New tablespace> using the same name on both sides.
• Run the migration. All LOB tables are now migrated with export/import. This can increase
the migration time.
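A "tbsmap.conf" that keeps all tablespace names unchanged could look like this sketch (tablespace
names are examples):

PSAPSR3,PSAPSR3
PSAPSR3DB,PSAPSR3DB
PSAPSR3700,PSAPSR3700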

11.7 Excluded Usernames


The following database schema users and roles are excluded automatically by the package. These
users and roles are created automatically either as part of the database installation or by the
installation of additional functionality or sample schemas. If you want to include one of the listed
users or roles in the migration, you can add the name in the file “include_users.conf” in the INPUT
directory.
The package will list the names of the reserved users and roles in the LOG file, if they are found on
the source database.

11.7.1 Defined users to exclude


• ANONYMOUS
• APEX_PUBLIC_USER
• APEX_030200
• APPQOSSYS
• AURORA
• AURORA$JIS$UTILITY$
• AURORA$ORB$UNAUTHENTICATED
• BI This user is created with the oracle sample schemas. It owns the Business
Intelligence schema.
• CTXSYS
• DBSNMP
• DIP
• DMSYS --This user is used for Data Mining.
• DSSYS
• EXFSYS --This schema is used for expression filters.
• HR --Installed with Oracle sample schemas. It owns the Human Resource
schema.
• IX --Installed with Oracle sample schemas. It owns the Information Transport
schema.
• LBACSYS --The administration account for Oracle Label Security.
• MDDATA --Used by Oracle Spatial to store Geocoder and router data.
• MDSYS --The owner of spatial (which is a part of interMedia).
• MGMT_VIEW --Used for the Oracle Enterprise Manager Database Control.
• MTSSYS --Used for Microsoft transaction server support.
• O2O_OGG
• OASPUBLIC --
• ODM The Data mining schema. Since Oracle 10g


• ODM_MTR Used for the data repository for data mining samples.
• OE Installed with Oracle sample schemas. It owns the Order Entry schema.
• OLAPSYS The schema that owns the OLAP catalogs.
• ORACLE_OCM
• ORDDATA
• ORDPLUGINS Like mdsys: part of interMedia. Third party plugins (as well as Oracle
plugins) for interMedia are installed into this schema.
• ORDSYS The administration account for interMedia.
• OSE$HTTP$ADMIN
• OUTLN
• OWBSYS --
• OWBSYS_AUDIT --
• PERFSTAT
• PM Installed with Oracle sample schemas. It owns the Product Media schema.
• PUBLIC
• REPADMIN
• SCOTT Used for Oracle examples.
• SH Installed with Oracle sample schemas. It owns the Sales History schema.
• SI_INFORMTN_SCHEMA Used for SQL/MM Still Image Standard.
• SPATIAL_CSW_ADMIN_USR --
• SPATIAL_WFS_ADMIN_USR --
• SYS
• SYSMAN
• SYSTEM
• TRACESVR
• WEBSYS --
• WK_PROXY --Used for ultrasearch.
• WK_TEST
• WKPROXY
• WKSYS
• WMSYS
• XDB
• $JIS
• $UTILITY
• $AURORA
• $ORB
• $UNAUTHENTICATED

11.7.2 Defined roles to exclude


• CONNECT
• RESOURCE
• DBA
• SELECT_CATALOG_ROLE
• EXECUTE_CATALOG_ROLE
• DELETE_CATALOG_ROLE
• EXP_FULL_DATABASE
• IMP_FULL_DATABASE
• RECOVERY_CATALOG_OWNER
• GATHER_SYSTEM_STATISTICS
• LOGSTDBY_ADMINISTRATOR
• AQ_ADMINISTRATOR_ROLE
• AQ_USER_ROLE
• GLOBAL_AQ_USER_ROLE
• PERFSTAT
• HS_ADMIN_ROLE
• OEM_MONITOR
• OEM_ADVISOR
• SCHEDULER_ADMIN
• ORACLE_OCM
• ADM_PARALLEL_EXECUTE_TASK
• DATAPUMP_EXP_FULL_DATABASE
• DATAPUMP_IMP_FULL_DATABASE
• DBFS_ROLE
• HS_ADMIN_EXECUTE_ROLE
• HS_ADMIN_SELECT_ROLE
• WM_ADMIN_ROLE
• GGS_GGSUSER_ROLE


12 Appendix A: Package parameters


This is a summary of all package parameters available for package versions >= 0.953. Older
versions may lack some parameters.
Please note: All size definitions are in MB: e.g. 10GB has to be specified as 10000

12.1 Package parameters for setting up and running the package:


ACT_KEY (NO DEFAULT VALUE)
This parameter must be specified, to run the package. The activation key is unique for a specific
database and the hostname. Changing the hostname requires also a new activation key.

CHECK_LR (DEFAULT VALUE TRUE)


Usage: CHECK _LR => FALSE
For SAP Cluster and Pool tables we must guarantee, that the buffer, used in PL/SQL is large enough
to hold the entire long raw field. The only possibility to get the maximum length of this field is a
table scan on the cluster/pool table. Especially on large tables this step can take a couple of hours.
For testing purpose you can skip this test, which will significantly decrease the package runtime on
the source system. You must perform at least one check before the final migration. The size
limitation for PL/SQL programs is 32760 bytes.
For NON-SAP system this parameter must be set to FALSE
CHK_DBA_TABLES (Default FALSE)
If activated, the target system size is calculated based on statistical data instead of allocated table
space.

EXACT_PART_INDEX_DEF (Default FALSE)
Usage: EXACT_PART_INDEX_DEF => TRUE
Normally a local index partition will have the same name as the underlying table partition. This is
the default, also in SAP BW.
If someone has renamed table partitions, the index partition names will not be changed by default.
Therefore, after recreating the local indexes on the new system, the partitions of these indexes will
have a different name than on the source system, forcing the statistics import to fail. If set to
„TRUE“, the package will check the real name of these index partitions and will create them with
their source name, instead of using the default. This behaviour is switched off by default for
performance reasons (package runtime).
Modus (Default = ‘ALL‘)
Usage: MODUS => ‘TBS‘
If you do not want to generate all scripts but only the scripts for the preparation phase, you can use
this parameter to run the package only partially.
ALL: Generates all scripts


TBS: Generates only create database and create tablespace scripts


FORCE_USER_EXPORT (Default False)
If set to TRUE, the package will look for the file “user_full_export.conf” in the INPUT directory.
All users listed there will be exported with a full user export without further checking on size or
objects. This allows you to finish the package execution faster on systems with a very high number
of users, especially if the system is slow on accessing data dictionary views.
INPUT_DIR (Default = O2O_INPUT)
Usage: INPUT_DIR => ‘INPUT‘
This parameter specifies the name of the Oracle directory used for the input-files, which are read
during the package run. This is most important for the tablespace mapping file and external
templates/ scripts for tables and indexes.
LOG_DIR (Default = O2O_LOG)
Usage: LOG_DIR => ‘LOG‘
This parameter specifies the name of the Oracle directory used for the „LOG“-directory, which
holds all logging information, generated during the package run
MCP_DIR (Default = O2O_OUTPUT)
Usage: MCP_DIR => ‘MCP_OUTPUT‘
This parameter specifies the name of the Oracle directory used for the MCP directory, which holds
the scripts generated for the MCP (parallel copy) jobs (see chapter 10.1).
OUTPUT_DIR (Default = O2O_OUTPUT)
Usage: OUTPUT_DIR => ‘OUTPUT‘
The O2O package uses the Oracle UTL_FILE-Package functionality to generate the migration
scripts on the server. To generate the scripts the package uses Oracle directory names pointing to the
designated path.
This parameter specifies the name of the Oracle directory used for the „OUTPUT“-directory, which
holds all migration scripts.
SHOW_HELP (Default False):
The help screen of the package. The help will list all parameters and a brief description on the
screen. To see the output you have to specify “set serveroutput on” before you issue the command.

12.2 Parameters influencing the set up of the target database


Instead of simply using the old file system layout, the package offers the possibility to create a
complete new database layout. Based on the current tablespace sizes, used space within the


tablespace and parameter settings, a new tablespace layout is created. The tablespace layout follows
the SAP naming conventions
The datafiles are automatically distributed on the specified sapdata-subdirectories. All
subdirectories will be loaded equal in respect to the file size. There is no distribution based on the
tablespace type (Indexes,Tables, Temp). You find a summary for the needed file system sizes in the
LOG-directory.
For your convenience also a shell script for creating the needed subdirectory structure is created
(*cr_fs*.sh)

AUTO_PART_TAB (DEFAULT FALSE)


This parameter allows you to implement automatic interval partitioning for selected tables. You
first have to specify the parameter "AUTO_PART_TAB => TRUE". If the parameter is set, the
package will search in the INPUT directory for the file "table_part.conf". In this file you have to
list all tables you want to use this feature for and specify the partitioning definition for each table.
The format is: <owner>,<table>,<column>,<range> (see the example below). All found tables are
listed in the package log file. If a table doesn't exist on the database, you will find an error message
in the log file.
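A single line of "table_part.conf" could then look like the following sketch (owner, table, column
and range value are purely hypothetical; check the range notation expected by your package
version):

SAPR3,ZSALESDOC,DOCNUM,1000000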

ASSM (DEFAULT TRUE)


If set to TRUE, all tablespaces on the target system will be created with Automatic Segment Space
Management (ASSM). Please note that the ASSM feature can have a deep performance impact on
large tables which are regularly archived. For these tables it is not recommended to use ASSM.
ASSM is also not recommended for frequently changed tables because of the „shotgun“ effect on
these tables. For these tables a combination of ASSM and partitioning is the best solution.
To enable ASSM only for a selected number of tablespaces, edit the create tablespace script.

CHANGE_TBS_LAYOUT (DEFAULT FALSE)


Usage: CHANGE_TBS_LAYOUT => TRUE
Setting this parameter will create the target database with a new tablespace layout. You can define
your own tablespace mapping by creating the file 'tbsmap.conf' in the INPUT directory. In each line
the source and the target tablespace are listed, separated by a ",": PSAPVBRPD,PSAPVBRPD. This
manual mapping overrides all automatic mappings. For SAP the following automatic mapping is
used:
• Tables residing in the tablespaces PSAPUSER1D/PSAPUSER1I will be mapped to
PSAP<SID>USR
• Tables in the DD tablespace (PSAPEL700) will be mapped to PSAP<SID><VERSION>
(PSAPCCR700)
• All other tables will be mapped to the standard SAP tablespace (PSAP<SID>)
For NON-SAP systems the mapping listed in the file "tbsmap.conf" is used. If a tablespace is not
listed there, the tablespace name from the parameter "TBS_SID" is used for all not listed


tablespaces. If the parameter is not specified, the old tablespace name is used instead. So the
assignment works in three steps:
1. Check for an individual assignment.
2. If a default tablespace name is defined, use this.
3. Use the old tablespace name.

CHANGE_USER (DEFAULT FALSE)


Usage: CHANGE_USER => TRUE
When set, the package searches for the file "change_user.conf" in the INPUT directory of the
package. Each username which should be changed is listed in a separate line with:
<Old username>, <New username>
The package will migrate all objects from the old schema into the new schema.
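For illustration, a "change_user.conf" that renames the SAP schema could contain a single line like
this (user names are examples):

SAPR3, SAPSR3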

CONVERT_TO_LOB (DEFAULT FALSE)


Converts all LONG RAW data types to the LOB data type. For SAP systems this is only applicable
if the SAP kernel is >= 7.00. When going to 11g this will also convert the LOB type to SecureFiles.

DB_FILE_SIZE (DEFAULT 10240 M)


Usage: DB_FILE_SIZE => 2000
Defines the default size used for datafiles. The parameter must be specified in MB. For a file size
of 2000M specify: DB_FILE_SIZE => 2000. If a tablespace uses less space than the defined file
size, smaller datafiles will be created. For the last two datafiles a rounding algorithm is used, to
avoid large differences in the file sizes. This algorithm also adjusts file sizes to the next „even“ size
(e.g. 1043M will become 2000M). The minimum file size for a tablespace is 100M.
Example: If a tablespace has a size of 85GB, the package will create with the default settings 7
data files of 10000M each; the last two datafiles will have sizes of approx. 8000M.

DB_TARGET_RELEASE (DEFAULT ''10.2'')


Specifies the database release for the target system. This affects the syntax used for table
compression. Valid entries are 10.2, 11.1, 11.2.

DATA_LW (NO DEFAULT)


Usage: DATA_LW => ‘EFGH‘
If DATA_LW is specified, optimization is automatically used.
Specifies the drive letters for Windows systems which will be used on the target system for
creating the file system for datafiles. The drive letters depend heavily on the storage system of the
target system. You can calculate the needed drives by using the formula:
Expected system size / File system size
Example: Your database has an expected size of 500GB. From the OS you will get logical volumes
having a size of 32GB each. So you will need 16 drives (16 * 32GB = 512 GB) to store the

database. Information about the actual tablespace sizes and the calculation is stored in the LOG-
directory. The file name is *cr_tbs*.log

EXCLUDE_USER ( DEFAULT FALSE)


Usage: EXCLUDE_USER => TRUE
When set the package searches for the file “exclude_user.conf” in the INPUT directory of the
package. All users, which should not be migrated are listed in a separate line. The space allocated
by the users is not taken into account, when the needed tablespace sizes for the target system is
calculated.

FILE_MIN_SIZE (DEFAULT 100 M)


Usage: FILE_MIN_SIZE => 2000
Defines the minimum size used for datafiles. The parameter must be specified in MB. For a
minimum file size of 2000M specify: FILE_MIN_SIZE => 2000. All datafiles will have at least this
defined minimum size.

IND_EXTENT_SIZE (DEFAULT 8)
Usage: IND_EXTENT_SIZE => 1
Is defined in MB. Specifies the initial extent size used for index creation on the target side. It might
be necessary to lower this value if your system is a BW system, to avoid strong growth of the
target tablespaces, because this extent size is used for each table partition and for each PQ process.

INDEX_COMPRESSION ( DEFAULT TRUE)


Usage: INDEX_COMPRESSION => FALSE
This parameter enables the index compression on the target system. This functionality will analyze
all indexes of those tables, migrated by separate scripts, on the source system. Based on this
analysis, the indexes will be compressed on the target system. The result set is stored in the table
“O2O_INDEX” of the SYS schema. At the end of each run the result of the index compression
calculation is saved in the INPUT directory, either in the file “sap_index_<time>.txt” or, for NON-
SAP systems, in the file “index_<time>.txt”. You can use these files to transport the results of the
index calculation between different systems, to save time.
The first package run can take significantly longer, because of the index analysis. Consecutive runs
will read the result from the table “O2O_INDEX”, and will not take longer than normal. If an index
is missing in this table it will be added automatically. You can save up to 50% space on the indexes
and up to 15% of the overall database size on the target system.

OPT (DEFAULT TRUE)


Usage: OPT => TRUE
Switches the optimization for creating a new tablespace layout on/off. If this parameter is set to
FALSE, the original database layout from the source system is used. If you want to create a new
tablespace layout, you must set OPT => TRUE. If this parameter is set to FALSE, all other
parameters affecting the file system layout are ignored.

REDO_LW (NO DEFAULT)


Usage: REDO_LW => ‘EF ‘
If DATA_LW is specified this parameter must also be specified.
Specifies the drive letters on a Windows target system which will be used for creating the REDO log
file system. The required drive letters depend heavily on the storage system of the target system.
The REDO logs are placed by default on two different drives. If you want to use more drives, you
have to edit the create database script manually.

SAPDATA (DEFAULT 10)


Specifies the number of sapdata#-sub-directories, which will be used on the target system, for
creating the file system. The number of sapdata's depends heavily on the storage system of the
target system. You can calculate the needed number of sapdata's by using the formula:
Expected system size / File system size
Example: Your database has an expected size of 500GB. From the OS you will get logical volumes,
having a size of 32GB each. So you will need 16 sapdata directories (16 * 32GB = 512 GB) to store the
database. Information about the actual tablespace sizes and the calculation is stored in the LOG
directory. The file name is *cr_tbs*.log. You can run the package multiple times to get the best results.

SAPDATA_OFFSET (DEFAULT 1)
Usage: sapdata_offset => 500
Allows you to specify a different start number than the standard “1” for numbering the sapdata
mount points. It is used to run a reorganisation on the same host using different instance names.
Instead of renaming the directory later to the correct instance name, you can specify different mount
points by using a different starting number for the sapdata directories. So after the successful
migration only a rename of the database by recreating the control files is needed.
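Example (illustrative): with SAPDATA => 4 and SAPDATA_OFFSET => 500 the package would use the
mount points sapdata500 to sapdata503 instead of sapdata1 to sapdata4.
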
SID (NO DEFAULT)
Usage: SID => ‘Q01‘
Defines a new target <SID> for the database. If the parameter is not specified, the target <SID> will
be the same as the source <SID>. This <SID> parameter is used for all scripts, where the target
<SID> is necessary.

TABLE_MAPPING (DEFAULT FALSE)

Usage: TABLE_MAPPING => TRUE


If this parameter is set, you have to provide the file “tablemap.conf” in the INPUT directory. The file
structure is: <username>, <tablename>, TBS table, TBS index.
All four parameters must be specified. Lines starting with a “#” character are ignored. All tables
listed here will be redirected to the newly defined table tablespace, and their indexes to the defined
index tablespace. You can even specify non-existing tablespaces here; these tablespaces will be
created automatically with the appropriate size. If only single tables are redirected, but not whole
tablespaces, you do not
have to specify these tablespaces in the file “tbsmap.conf”. Tablespace remapping and table mapping
can be used in any combination.
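A minimal sketch of a “tablemap.conf”, with purely illustrative table and tablespace names:
# <username>, <tablename>, TBS table, TBS index
SAPR3, MSEG, PSAPMSEG, PSAPMSEGI
SAPR3, BSEG, PSAPBSEG, PSAPBSEGI
The listed tables and their indexes would be created in the given tablespaces; non-existing
tablespaces would be created automatically with the appropriate size.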

TABLE_COMPRESSION (DEFAULT FALSE)


Enables table compression for the migrated tables on the target system. For this the file
“table_compress.conf” in the INPUT directory is read. This file contains the list of tables to compress.
The syntax is: USER,TABLE,Compression method
Whereas the user name must be specified exactly, you can use wildcard characters and partial
strings to include a group of tables:
• SAPR3,* means all tables in schema SAPR3
• SAPBIW,/BIO/* means all tables starting with the string /BIO/ in their name
• SAPBIW,*FCZ means all tables having the string FCZ in their name
• sapbiw,/BIC/FFIC_C05 means the exact table name
You can use all combinations in any order.
For SAP systems the package will exclude those tables from the compression which are defined in
the corresponding SAP note.
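A minimal sketch of a “table_compress.conf”, assuming „OLTP“ is an accepted value for the
compression method column (check the valid methods for your package and database release):
# USER,TABLE,Compression method
SAPR3,*,OLTP
SAPBIW,/BIC/FFIC_C05,OLTP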

ENABLE_LOB_COMPRESSION (DEFAULT TRUE)


If disabled, LOB segments will not be compressed on 11g. Otherwise the package will compress the
LOB segments with “medium” compression.

TBS_IDENTIFIER (NO DEFAULT)


Usage: TBS_IDENTIFIER => 'PSAPSR3'
This parameter is only valid in conjunction with specifying a new SAP tablespace layout. Instead of
using the SAP default naming convention (PSAP<SID>), the identifier specified in this parameter is
used to create the tablespace name by using “PSAP” as a prefix.

TAB_EXTENT_SIZE (DEFAULT 256M)


Usage: TAB_EXTENT_SIZE => 512
Is defined in MB. Specifies the initial extent size used for table creation when using CTAS,
PL/SQL or TblCopy. It might be necessary to lower this value if your system is a BW system, to
avoid excessive growth of the target tablespaces, because this extent size is used for each table
partition.

TBS_FREE (DEFAULT 25)


Usage: TBS_FREE => 10
Specifies the amount of additional space which is added to a tablespace when the tablespace size is
calculated. This is a percentage value of the free space found for the
tablespace. Example: If a tablespace has a size of 200 GB, and 100 GB is not used, the tablespace
size for the target system is calculated with: used_space + (free_space / 100 * tbs_free).
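Worked example with the default TBS_FREE => 25: for the 200 GB tablespace above (100 GB used,
100 GB free) the target size is calculated as 100 GB + (100 GB / 100 * 25) = 125 GB.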

TBS_MIN_FREE (DEFAULT 0)
Usage: TBS_MIN_FREE => 10
Specifies the amount of additional space which is added to a tablespace when the tablespace size is
calculated. This is a percentage value of the used space found for the tablespace. Example: If a
tablespace has 100 GB of used space, the tablespace size for the target system is calculated as:
used_space + (used_space / 100 * tbs_min_free).

TBS_RESIZE (DEFAULT TRUE)


Usage: TBS_RESIZE => FALSE
If set to „FALSE“, the script will not perform any recalculation of the tablespace sizes found on the
source. The tablespaces will therefore have the same sizes on the target as on the source, plus
TBS_MIN_FREE.

TEMP_DEF (DEFAULT NULL)


If there is more than one temporary tablespace, specify the one which should become the default
one.

TEMP_SPACE (DEFAULT 0)
Usage: TEMP_SPACE => 40000
This parameter specifies the size of the temporary tablespace on the target system. The value is
defined in MB. Normally, the temporary tablespace will have the same size on the target as found
on the source system. During the migration process the temporary tablespace may be heavily used
during index creation, so it might happen that some „create index“ scripts fail because the
temporary tablespace is too small. This parameter overrides the source system definition of the
temporary tablespace, and the default temporary tablespace is created with the size specified in this
parameter instead.

UNDO_DEF (DEFAULT NULL)


If there is more than one UNDO tablespace, specify the one which should become the default one.

UNDO_SIZE (DEFAULT 0)
Usage: UNDO_SIZE => 20000
If set to a value > 0, an UNDO tablespace is created during the „create database“ command. The
setting overrides any value found on the source. If there is no UNDO tablespace on the source, this
parameter helps you to switch the target system to automatic UNDO management. It is your
responsibility to set up the init<SID>.ora file correctly and to ensure that the customer does not run
out of space in the UNDO tablespace.

USE_AUTOEXTEND (DEFAULT FALSE)


Usage: USE_AUTOEXTEND => TRUE

If this parameter is set, all datafiles will be created with an initial size of 1024M. The final size of
the datafiles is set to the value of the parameter „DB_FILE_SIZE“. Each file is enlarged by an
increment of 1024M during the migration. Because you do not have to wait for the creation of all
datafiles in their final size, you can start the migration much faster, saving time during the
preparation steps while increasing the migration time itself slightly.
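As an illustration (not the literal script output, and with a purely illustrative file path), the resulting
datafile definition corresponds to a clause like:
datafile '/oracle/Q01/sapdata1/sr3_1/sr3.data1' size 1024M autoextend on next 1024M maxsize 10240M
where the maxsize corresponds to the value of DB_FILE_SIZE.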

12.3 Parameters influencing the migration process


CL_CALC (DEFAULT FALSE)
Usage: CL_CALC => 100
For an automatic estimation based on the clustering factor of the unique index of a table, the
parameter CL_CALC is used. The calculation works on the number of table blocks. If the clustering
factor of the unique index is higher than:
number of table_blocks x CL_CALC
either a hint or an “order by” phrase is generated in the SQL statement. This applies to the
USE_HINT and USE_ORDER parameters. Please note: the table must have a unique index to
create the „hint“ or the „order by“ in the statements.
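Worked example: with CL_CALC => 100, a table of 50000 blocks gets the hint or the „order by“ only
if the clustering factor of its unique index is higher than 50000 x 100 = 5000000.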

CHECK_PRIV (DEFAULT FALSE)


If activated, privileges are also compared between the source and the target system.

DB_LINK (NO DEFAULT)


Usage: DB_LINK => ‘V902_OLD‘
Defines the default database link name, which is used in the migration scripts for the connection to
the source system. The name must be equal to the „tnsnames.ora“ alias created during the setup of
the target system. This name is used to create the database link automatically during the database
creation. The database link also needs a valid username/password to connect to the source system.
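For illustration, the automatically created link is conceptually equivalent to the following statement
(the alias is taken from the usage example, user and password from LOGIN_USER/LOGIN_PW):
create database link V902_OLD connect to system identified by manager using 'V902_OLD';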

DB_LINK_CTAS (NO DEFAULT)


Usage: DB_LINK_CTAS => ‘V902_OLD1‘
Defines the database link name which is used in the CTAS migration scripts for the connection to
the source system. This link is also used for the LOB tables. The name must be equal
to the „tnsnames.ora“ alias created during the setup of the target system. This name is used to create
the database link automatically during the database creation. The database link also needs a valid
username/password to connect to the source system. If the parameter is not specified, the value of
„DB_LINK“ is used.

DB_LINK_EXP (NO DEFAULT)


Usage: DB_LINK_EXP => ‘V902_OLD3‘
Defines the database link name, which is used in the export migration scripts for the connection to
the source system. The name must be equal to the „tnsnames.ora“ alias created during the setup of
the target system. This name is used to create the database link automatically during the database

creation. The database link also needs a valid username/password to connect to the source system.
If the parameter is not specified, the value of „DB_LINK“ is used.

DB_LINK_PART (NO DEFAULT)


Usage: DB_LINK_PART => ‘V902_OLD3‘
Defines the database link name, which is used in the migration scripts of the partitioned tables for
the connection to the source system. The name must be equal to the „tnsnames.ora“ alias created
during the setup of the target system. This name is used to create the database link automatically
during the database creation. The database link also needs a valid username/password to connect to
the source system. If the parameter is not specified, the value of „DB_LINK“ is used.

DB_LINK_PL (NO DEFAULT)


Usage: DB_LINK_PL => ‘V902_OLD2‘
Defines the database link name, which is used in the PL/SQL or Tblcopy migration scripts for the
connection to the source system. The name must be equal to the „tnsnames.ora“ alias created during
the setup of the target system. This name is used to create the database link automatically during the
database creation. The database link also needs a valid username/password to connect to the source
system. If the parameter is not specified, the value of „DB_LINK“ is used.

EXCLUDE_USER (DEFAULT FALSE)


If activated you can exclude user schemas from the migration. These usernames must be listed in
the file "exclude_user.conf" in the INPUT directory

EXCLUDE_TABLE (DEFAULT FALSE)


If activated you can exclude table contents from the migration. These tables will be created empty
on the target. You have to specify the tables in the file "exclude_table.conf" in the INPUT directory

EXP_DIRECT (DEFAULT FALSE)


If activated, the export uses the direct path option.

GEN_PL (DEFAULT TRUE)


Usage: GEN_PL => FALSE
SAP cluster and pool tables will be copied using PL/SQL procedures. For some tables these
procedures are unacceptably slow if 8i is used as the source system. To optimize the migration for
such tables, you can create separate export/import scripts for all tables normally migrated with
PL/SQL. To speed up the import, some table and index parameters will be adjusted on the source
system just before the export.

GEN_PRIVS (DEFAULT TRUE)


Usage: GEN_PRIVS => TRUE
For SAP systems the package will not create all privileges owned by the SAP user on the source
system. Instead, the person responsible for the migration has to use the release-specific scripts
shipped by SAP to set up the privileges accordingly. The following scripts are well known:
sqlplus /nolog @oradbusr.sql <SID> UNIX <SID> x

sqlplus /nolog @/sapmnt/<SID>/exe/sapconn_role.sql
sqlplus /nolog @/sapmnt/<SID>/exe/sapdba_role.sql <SID>
sqlplus /nolog @special_10g.sql

LOGIN_PW (DEFAULT "MANAGER")


Usage: LOGIN_PW => ‘ABC‘
Defines the password of the SYSTEM user. The password is used to connect to the source system. If
the password is wrong, the migration cannot run. Please note that all database links will use the
user „system“ with the password specified by this parameter to connect to the source system. If
you are unsure, reset the password to „manager“.

LOGIN_USER (DEFAULT "SYSTEM")


Name of the user which is used to connect to the source system during the migration.

MAX_PARALLEL (DEFAULT 32)


Usage: MAX_PARALLEL => 64
Specifies the maximum number of parallel query processes used in a single index creation step on
the target system. The default number is appropriate for most systems, because larger numbers
will overload the target system. The number of PQ processes used in the index creation is
automatically calculated during the package run. Normally two PQ processes are used for each 1GB
piece of an index. The maximum number allowed is limited by the parameter “MAX_PARALLEL”.
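Worked example: with roughly two PQ processes per 1GB of index size, a 10GB index is built with
about 20 PQ processes, while a 40GB index is capped at the default MAX_PARALLEL of 32.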

MAX_SIZE (DEFAULT 500)


Maximum segment size (in MB) for tables to be included in export/import jobs.

MAX_TABS (DEFAULT 1000)


Usage: MAX_TABS => 500
Specifies the maximum number of tables included in a single export/import script.
This parameter influences the transition process. If it is set too high, a huge number of small
tables will be included in a single export/import script.

PL_ARRAY (DEFAULT 20000)


Usage: PL_ARRAY => 50000
Defines the number of records fetched per bulk select in the PL/SQL procedures. The default
works fine for most systems. Within the PL/SQL script the commit takes place after this number of
rows has been fetched and inserted. For high-performance systems, you can increase this number to
much higher values, e.g. 100000. Please note that the insert also takes place as a „Bulk Insert“. So if
you choose very high values for this parameter, you must have a very fast disk subsystem for REDO
and for data files; otherwise either LGWR or DBWR, or even both, will become the bottleneck. Here
are some measurements on a small system reflecting the influence of this parameter on the copy
runtime for a cluster table with 1.2 million records:
1000: 9:00 minutes

10000: 6:22 minutes
50000: 6:27 minutes
100000: 7:03 minutes

REMOTE_SCHEDULING (DEFAULT FALSE)


Usage: REMOTE_SCHEDULING => TRUE
This parameter is most helpful when you are migrating to Windows. Normally, if the target system
is Windows, you have to run the scheduler in a “cygwin” UNIX emulation environment. Unfortunately
this emulation (ksh emulator) is unstable. Setting this parameter allows you to run the scheduler on a
different host (e.g. Linux). Running the scheduler on a different host can also speed up the
scheduling when the system is heavily loaded. Please note: you have to run the prepare phase
manually, because it requires “sysdba” access to the database.

RESET_PW (DEFAULT TRUE)


Usage: RESET_PW => FALSE
If set to „TRUE“, a script is generated which, after the migration, sets the passwords of all users on
the target system to the original encrypted values from the source system. This allows migrating
users even with unknown passwords, without the need to reset all user passwords on the source
system. This is used mainly on Non-SAP systems where multiple users exist, and avoids having to
reset all passwords manually after the migration. It is also safe, because only the encrypted values
are used.
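A sketch of the kind of statement such a generated script contains (user name and password hash
are purely illustrative):
alter user SCOTT identified by values 'F894844C34402B67';
Because only the encrypted value is replayed, the clear-text password never has to be known.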

REXP (DEFAULT FALSE)


Usage: REXP => TRUE
During the migration, exports can only run on the target system if the Oracle versions on source and
target are the same. If you are using different Oracle versions, you must run the export on the source
system. Running the export on the source system also takes load off the target server. Setting this
parameter to “TRUE” means the export runs on the source system, whereas “FALSE” runs the
export on the target system.
The parameter is used only for the export utility, but not for DataPump. DataPump exports are
always executed on the source system.

SAP (DEFAULT TRUE)


If you are planning to use the package for a NON-SAP system, you have to set this parameter to
FALSE. This switches off all SAP default settings, e.g. the naming conventions for directory paths
and the check on the length of LONG RAW fields. Please note, it is your responsibility to check the
correct migration of LONG RAW values.

SCHEMA (DEFAULT "SAPR3")


Specifies the schema name of the SAP system user. This information is needed to identify the
SAP schema user uniquely within the database. It is used for index compression and to identify
SAP-specific tables (TAORA, IAORA).

SPECIFY_PLSQL_TABLE (DEFAULT FALSE)


If activated, you can force the package to generate PL/SQL scripts for tables having a LONG RAW
field. You have to specify the tables in the file "include_plsql.conf" in the INPUT directory.

USE_DATAPUMP_FOR_EXP (DEFAULT FALSE)


Will use Oracle DataPump for the general export/import scripts (not yet implemented).

USE_DATAPUMP_FOR_LOB (DEFAULT FALSE)


Will use oracle DataPump for LOB tables

USE_DATAPUMP_FOR_PL (DEFAULT FALSE)


Will use oracle DataPump for PL/SQL tables

USE_DATAPUMP_FOR_CTAS (DEFAULT FALSE)


Will use Oracle DataPump for CTAS tables.

DP_LOB_MIN_SIZE (DEFAULT 0)


All LOB-tables larger than this size (in MB) will be copied by using DataPump

DP_PL_MIN_SIZE (DEFAULT 0)
All Long Raw tables larger than this size (in MB) will be copied by using DataPump.

DP_CTAS_MIN_SIZE (DEFAULT 0)
All CTAS tables larger than this size (in MB) will be copied by using DataPump

DP_PART_MIN_SIZE (DEFAULT 0)
All Part-tables larger than this size (in MB) will be copied by using DataPump

USE_LINK_FOR_DP (DEFAULT FALSE)


Will use an Oracle network link for the DataPump scripts, to avoid the separate export step.

DUMPFILE_DIR (DEFAULT O2O_OUTPUT)


This allows you to place the DataPump dump files in a different directory than the LOG files. Please
note: if not defined, the needed directory is automatically created on the target host; on the source
the standard OUTPUT directory is used. If a separate directory is used, you have to create the
defined Oracle directory manually on the source and target database with the „create directory“
command.
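A sketch of this manual step, with an illustrative directory name and path, to be executed on both
the source and the target database:
create directory O2O_DUMP as '/migration/dumpfiles';
grant read, write on directory O2O_DUMP to system;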

UNIX_SYNTAX (DEFAULT FALSE)


Usage: UNIX_SYNTAX => TRUE
This parameter is only used if you are migrating to Windows. Normally, if the target system is
Windows, all scripts will be generated with the extension „cmd“ and a UNIX specific syntax within
the scripts is not created. If you want to use a UNIX emulation on Windows (e.g. CYGWIN) to
make use of the scheduler, this parameter will force the package to create the scripts with the
correct UNIX syntax even for Windows as target.

USE_EXTERNAL_TABLE_SCRIPT (DEFAULT FALSE)


Usage: USE_EXTERNAL_TABLE_SCRIPT => TRUE
If you want to include a self-generated script for some tables, e.g. for implementing IOTs or
partitioning, you can set this parameter. For all tables copied with CTAS, the package will search the
INPUT directory for a file with the name “username_tablename.tpl”. All special characters in the
table name are replaced by numbers, to avoid OS conflicts. The file name must be in lowercase.
If you are not sure about the naming, have a look into the CTAS.log file; all expected file names are
listed in the log file. The script is read completely and will be used instead of the normal package
scripts.
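A minimal sketch of what such a template could contain, assuming the file INPUT/sapr3_zbigtab.tpl
completely replaces the generated CTAS statement for this table (table, column, tablespace and link
names are purely illustrative):
create table SAPR3.ZBIGTAB
tablespace PSAPSR3
nologging
partition by hash (MANDT) partitions 8
as select * from SAPR3.ZBIGTAB@V902_OLD1;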

USE_EXTERNAL_TEMPLATE_SCRIPT (DEFAULT FALSE)


Usage: USE_EXTERNAL_TEMPLATE_SCRIPT => TRUE
If you want to implement a simple partitioning on all tables copied with the CTAS method, you can
use this parameter. If this parameter is set, the package searches in the INPUT directory for a file
with the name „username_tablepart.tpl“ (lowercase). This template script has the following structure:
1. Line: Partitioning method, column name, tablespace for the MAXVALUE partition
2. Line: value1, tablespace for value1, partition name suffix for value1, value2, tablespace for value2,
partition name suffix for value2, value3, ...
The tablespace name will be used as written, whereas the partition name suffix will be added to the
table name. Example:
list,mandt,PSAPMAXVALUE
'100',PSAP100,_P100,'500',PSAP500,_P500,'800',PSAP800,_P800
At the moment only single column partitioning is supported.

USE_HINT (DEFAULT "N")


Usage: USE_HINT => ‘A‘(utomatic),‘F‘(orce),‘N‘(one)
As part of the migration, all table data is copied to the new system. Especially in SAP systems
running for a long time with active archiving, fragmentation on row level within the data blocks has
occurred. This forces the database to generate significantly more IO operations. To optimize the
tables in respect to the row order, it is possible to perform a sorted insert into the new table. The
parameter adds a hint clause to the select statement, forcing Oracle to perform the select using an
index scan on the source system. If you set the parameter to ‘A‘ the package will make an
automatic estimation (see parameter CL_CALC). The runtime for copying the data will increase,
because using the index instead of a FTS can run much longer, especially when the clustering factor
on the source system is very high. You can speed up the index lookup by having a large buffer cache
on the source system. It is not recommended to use „USE_ORDER“ and „USE_HINT“ in
combination. Test the parameter „USE_HINT“ first, before testing „USE_ORDER“.

USE_ORDER (DEFAULT "N")


Usage: USE_ORDER => ‘A‘(utomatic),‘F‘(orce),‘N‘(one)

As part of the migration, all table data is copied to the new system. Especially in SAP systems
running for a long time with active archiving, fragmentation on row level within the data blocks has
occurred. This forces the database to generate significantly more IO operations. To optimize the
tables in respect to the row order, it is possible to perform a sorted insert into the new table. The
parameter adds an „ORDER BY“ clause to the select statement, forcing Oracle to perform a sort
operation on the target system. The runtime for copying the data will increase, because the sort
operation requires sorting all rows first. This parameter is helpful if the index lookup on the source
system takes too much time. In this case a FTS on the source combined with a sort on the target can
be faster than the index lookup on the source. It is not recommended to use „USE_ORDER“ and
„USE_HINT“ in combination.

USER_MAX_SIZE (DEFAULT 2000)


Usage: USER_MAX_SIZE => 500
The parameter is specified in MB.
Specifies the maximum total size of all segments (tables, indexes, LOBs etc.) a user may own before
the user is exported with multiple scripts instead of a single full user export.

12.4 Parameters used for splitting very large tables


USE_MCP_CTAS (DEFAULT FALSE)
If specified MCP splitting will be applied on CTAS and CTAS-LOB tables

USE_MCP_PART (DEFAULT FALSE)


If specified MCP splitting will be applied on partitioned tables

USE_MCP_PL (DEFAULT FALSE)


If specified MCP splitting will be applied on PL/SQL tables

PAR_MIN_SIZE_CTAS (DEFAULT 200000)


Minimum table size in MB for applying MCP on CTAS and CTAS-LOB tables.

PAR_MIN_SIZE_PART (DEFAULT 200000)


Minimum table size in MB for applying MCP on partitioned tables.

PAR_MIN_SIZE_PL (DEFAULT 200000)


Minimum table size in MB for applying MCP on PL/SQL tables.

USE_MCP_TEMP_TABLE (DEFAULT TRUE)


Uses intermediate tables for the MCP-process

PARTITION_VALIDATION (DEFAULT FALSE)


If activated, partitions will be checked during the MCP process when building up the target table.

12.5 Parameters for Non-SAP systems


DATA_DIR (NO DEFAULT)
Usage: DATA_DIR => ‘db_‘
This parameter is used on NON-SAP systems to specify the dynamic part (mount points) of the
directory for the datafiles. It is used as a prefix, which is extended by the number of the mount
point (parameter SAPDATA) to distribute the data files over the available volumes.

DB_FILE_BASE (NO DEFAULT)


Usage: DB_FILE_BASE => ‘/database/.../...‘
This parameter is used on NON-SAP systems to specify the basic part of the directory for the
datafiles. This is used as a fixed value.
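Illustrative combination of these parameters (the exact concatenation may differ): with
DB_FILE_BASE => '/database/PROD', DATA_DIR => 'db_' and SAPDATA => 4, the datafiles would be
distributed over /database/PROD/db_1 to /database/PROD/db_4.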

REDO_FILE_BASE (NO DEFAULT)


Usage: REDO_FILE_BASE => ‘/database/.../...‘
This parameter is used on NON-SAP systems to specify the basic part of the directory for the redo
files. This is used as a fixed value.
