Asritha Kolli: OS/DB Migration Using SUM With DMO Tool
When we want to migrate an existing SAP system (Application Server ABAP) to SAP HANA or another database, possibly together with a change of the operating system to one compatible with the target DB, we can choose the in-place migration, which avoids landscape changes (SID, hostname).
A classical migration is complex and requires several steps to be considered. Instead, we can use the database migration option (DMO) of the Software Update Manager (SUM).
An update can be the application of Support Packages, a release change, an EHP update, or an SAP upgrade.
SUM (Software Update Manager): SUM is the tool for updating SAP systems based on SAP NetWeaver.
DMO (Database Migration Option): DMO is an option of SUM for a migration. It is not a separate tool.
SWPM (Software Provisioning Manager): a tool for system installation, copy, rename, or dual-stack split; it is used especially for the heterogeneous system copy.
SUM 2.0: SUM 2.0 can be used with SAP NetWeaver 7.5 and above.
Benefits of the SUM Tool:
Pre-Requisites:
The following prerequisites need to be checked before the OS/DB migration and SAP update.
4. Ensure that the source system is a single-stack SAP system (AS ABAP).
5. An SAP system with an SAP HANA database requires a specific SAP software level. Check the PAM for compatibility; the SAP system may have to be updated before the migration takes place.
6. Unlike the classical migration, SUM with DMO can be driven without a certified OS/DB migration consultant.
7. Configure the latest SAP Host Agent for SUM.
8. Download the stack.xml file and the software files referenced in it.
9. Migration using SUM requires at least one patch or add-on to be updated or upgraded.
10. The SAP system will be shut down during the process, so obtain the downtime window from the business teams and lock the users using EWZ5.
11. Ensure there are no open or locked transport requests in SE01/SE03; release the transports.
12. Sizing for the target system has to be performed in advance, and procurement is done according to the sizing results.
13. Check the OS/DB and Product Compatibility in Product Availability Matrix
14. SMIGR_CREATE_DDL (ABAP report): generates DB-specific DDL statements for non-standard DB objects of the ABAP Dictionary (mainly BW objects).
15. Check whether the tables exist in both the database and the ABAP Dictionary; otherwise the migration fails.
16. Ensure that the source kernel and target kernel are at the same level, or that the target is at a higher level than the source.
17. Ensure that enough space is available for the SUM directory, along with read and write permissions.
18. The migration tools R3load, R3ldctl, R3szchk, and R3ta need to be updated to the latest version.
19. Database statistics have to be run in advance using DB13, which will reduce load and unload times during export and import.
20. It is recommended to suspend batch jobs using BTCTRNS1 (via SA38), so that they are not accidentally started after the import.
21. Delete the old spool logs, TemSe data, SOST logs, inbox logs (SOFFCONT1), IDoc, RFC, and other logging tables.
22. Ensure that DDL statements for custom tables referenced by customer programs are generated using the SMIGR_CREATE_DDL report.
23. Ensure that the migration timelines are documented phase-wise. It is also possible to get the timelines for each package from package.log.
24. A migration key is required for the migration. It is generated in the SAP Support Portal (Service Marketplace) based on the source and target system details.
25. The source system DDIC password in client 000 is required. Ensure that a login into client 000 with the DDIC user is verified and the password documented (if this fails, the entire activity has to be repeated).
26. Ensure that the database is consistent; if required, run the database consistency check in DB13.
27. Monitor the space and memory utilization during the migration.
28. Download the latest SUM tool based on the product/component, along with the SUM-compliant kernel.
29. Install the target database and database client, and establish the network communication between the database server and the source system.
SUM Tools:
SUM Process:
Go to the downloaded SUM path and, logged in as <sid>adm, unpack the SUM archive using SAPCAR.
The SAP Host Agent requests authorization from the browser; this user is used to start the SUM. Because the DMO procedure only works on AS ABAP-based systems (for which SAPup is the relevant SUM part), SAPup is started.
After some basic configuration settings, such as checking the stack.xml, SAPup will start to create the shadow system. The shadow system consists of a shadow repository and a shadow instance.
The shadow repository is created on the source database, in a separate section. It contains the basic tables and some customizing tables, which are already updated to the target release during the uptime. The shadow repository does not influence the production repository: the system is still running, and end users can work with system functionality that changes application data on the database.
The shadow instance runs on the primary application server (PAS) and is based on the shadow kernel. The shadow kernel is the kernel for the source database, but for the target release.
The shadow system requires additional database space and resources on the application server host. SAPup checks the status and asks for additional resources if required.
Because the shadow repository is being built up on the target release, changes to the production repository are no longer allowed, as they would not be reflected in the shadow repository.
This is why in this phase, the system is running and available for end users(uptime processing), but
the development environment is locked.
After the shadow repository has been built up completely, it is copied into the target database.
The kernel executable R3load is triggered by SAPup to execute the copy of the shadow repository.
Two additional kernels are required for the DMO procedure, both on the new SAP release: one for the source database and one for the target database.
Now the application tables have to be updated to the new release, so the system has to be shut down to prevent changes to the application tables. From this point on, the target kernel is used: the kernel for the new database and for the target release.
For the migration of the application data, two R3load processes run as a pair. The first R3load, from the shadow kernel, exports the data from the source database, and the second R3load process imports the data into the target database. Both R3load processes run on the PAS host. The DMO configuration includes the number of R3load pairs to run in parallel.
After the migration of the application data, the shadow instance is removed. The target kernel is now used for the system, and the system is started. The system is still in downtime because it cannot yet be used by the end users.
R3load Modes:
The DMO procedure uses R3load for the migration, just as the classical migration based on Software Provisioning Manager (its underlying tool is SAPinst) does. For the typical classical migration, the R3load file mode is used.
File mode means that the export files are created first and imported later. Meanwhile, it is also possible to use a parallel export and import for the classical migration.
Another possibility for the classical migration is to use the R3load socket mode, which transfers the data over a socket connection.
With the DMO procedure, using the in-place migration approach, both R3load processes are executed on the same host, the PAS host. This allows the use of the R3load pipe mode, which transfers the data through the main memory of the host. No files are created, so no directory has to be prepared to host the export files.
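The difference between file mode and pipe mode can be sketched with a small example. This is not R3load or any SAP code, only a hypothetical Python illustration of the pipe-mode idea: an exporter process sends rows through an in-memory channel and an importer consumes them directly, so no export files touch the disk.

```python
# Hypothetical illustration of the pipe-mode idea (NOT R3load/SAP code):
# the "exporter" plays the first R3load, the "importer" the second one,
# and data flows through host memory instead of export files on disk.
import multiprocessing as mp

def exporter(conn, rows):
    """Send every row over the in-memory pipe, then an end-of-data marker."""
    for row in rows:
        conn.send(row)
    conn.send(None)  # end-of-data marker
    conn.close()

def importer(conn, target):
    """Receive rows until the end-of-data marker and store them."""
    while (row := conn.recv()) is not None:
        target.append(row)

def migrate_via_pipe(rows):
    """Run an exporter/importer pair connected by a pipe; return the target."""
    parent_conn, child_conn = mp.Pipe()
    target = []
    proc = mp.Process(target=exporter, args=(child_conn, rows))
    proc.start()
    importer(parent_conn, target)  # runs concurrently with the exporter
    proc.join()
    return target
```

In file mode, the exporter would first write everything to dump files and the importer would read them afterwards; the pipe variant removes that intermediate storage, which is why DMO needs no export directory.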
If an R3load process stops, SAPup restarts it without the need for manual intervention by a user.
In case an in-place migration is not desired, you can perform DMO with System Move.
DMO of SUM offers the move of the primary application server from the source system landscape to
a target system landscape during the DMO procedure.
SUM starts the system update and database migration procedure on the host of the PAS and
executes the first part of the procedure, including the export of the database content into files.
Then the files and the SUM directory have to be manually transferred to the target host, and the
remaining part of the SUM with DMO procedure happens there.
Finally, the DMO procedure is finished. The system is now migrated to the target database and updated to the target release.
Uptime processing: The shadow repository is created on the source database (on the target release) and then copied to the target database.
Downtime processing: The application tables are migrated to the target database and converted to the target release later (in phase PARCONV_UPG).
SAPup Process:
(Sizes) SAPup determines the table sizes used by the preparation phases
(Create) SAPup triggers the table creation on the target database
(Prp) SAPup triggers the creation of directories (migrate_*) and the control files required for R3load execution: STR, TSK, and CMD files
(Run) SAPup triggers the migration of tables into the target database, based on the control files (*.TSK files are recreated automatically if errors are raised)
After the table creation on the target database, the DMO procedure shows a dialog proposing the landscape reorganization.
EU_CLONE_MIG_UT_SIZES
EU_CLONE_MIG_DT_SIZES
REQ_SCALEUP_PREREQ (scale up, if scale-out was not chosen)
EU_CLONE_MIG_UT_CREATE
EU_CLONE_MIG_DT_CREATE
EU_CLONE_MIG_UT_PRP
EU_CLONE_MIG_UT_RUN(end of exercise)
EU_CLONE_MIG_DT_PRP(DT starts afterwards)
EU_CLONE_MIG_DT_RUN(only the migration of the application data is done during the
downtime)
(migrate_ut_create) creation of repository tables
(migrate_ut) prep and run of the repository table content migration
(migrate_dt_create) creation of application tables
(migrate_dt) prep and run of the application table content migration.
SAPup uses its own logic for the table split calculation
Tables and table parts are organized in buckets
Big tables are split into segments
The number of segments per table equals the number of buckets for that table
No manual configuration is needed
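The bucket and segment organization described above can be sketched as follows. This is a hypothetical Python sketch of the idea, not SAPup's actual algorithm; the table names, sizes, and segment limit are invented for illustration.

```python
# Hypothetical sketch of table splitting (NOT SAPup's actual logic):
# big tables are cut into segments no larger than max_segment, and each
# segment becomes a work unit that can be assigned to an R3load pair.
def split_into_segments(size, max_segment):
    """Cut a table of the given size into segments of at most max_segment."""
    segments = []
    while size > 0:
        part = min(size, max_segment)
        segments.append(part)
        size -= part
    return segments

def build_work_units(tables, max_segment=100):
    """tables: {name: size}. Return (name#index, segment_size) work units,
    biggest tables first, so long-running tables start early."""
    units = []
    for name, size in sorted(tables.items(), key=lambda t: -t[1]):
        for i, seg in enumerate(split_into_segments(size, max_segment)):
            units.append((f"{name}#{i}", seg))
    return units
```

The point of the split is load balancing: evenly sized work units keep all parallel R3load pairs busy instead of leaving one pair stuck on a single huge table.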
Troubleshooting and Monitoring the Migration:
The SUM Utilities window offers an area to monitor the R3load processes: the Process Buckets monitor. It is even possible to reschedule a process that is in the status "error". That way you do not have to wait until all the packages are processed and an error message is displayed on the SUM UI.
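The rescheduling behavior can be pictured with a small sketch. This is not SUM code; it is a hypothetical Python illustration of the idea that a failed work unit is put back into the queue immediately instead of waiting for the whole run to finish.

```python
# Hypothetical sketch (NOT SUM code): failed buckets are rescheduled
# right away, up to a retry limit, rather than blocking the run.
def run_buckets(buckets, process, max_retries=2):
    """Process every bucket; reschedule failing ones up to max_retries times."""
    pending = [(bucket, 0) for bucket in buckets]
    done, failed = [], []
    while pending:
        bucket, attempts = pending.pop(0)
        try:
            process(bucket)
            done.append(bucket)
        except Exception:
            if attempts < max_retries:
                pending.append((bucket, attempts + 1))  # reschedule immediately
            else:
                failed.append(bucket)
    return done, failed
```

A transient error (a lock, a dropped connection) thus costs only one extra pass over that bucket rather than a stopped migration.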
Tuning the DMO Procedure Downtime: In contrast to the classical migration, DMO does not require or offer sophisticated techniques to tune the migration speed. The steps to improve the DMO procedure runtime are listed below:
Use the benchmarking option before the DMO run for a quick test of the migration part
Adjust the number of R3load processes during the benchmarking and the DMO procedure (and learn for the next run)
Use the test cycle option (migration repetition option); this allows a fast repetition of only the downtime migration for a test run
Provide the migration durations for the next run; this supplies the measured table migration durations for table sequencing
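The idea of using measured durations for table sequencing can be shown in a few lines. This is a hypothetical Python sketch, not SUM's implementation: tables with the longest measured migration time from the previous run are scheduled first, a longest-processing-time heuristic that tends to shorten the overall downtime.

```python
# Hypothetical sketch (NOT SUM's implementation): order tables by the
# migration duration measured in a previous run, longest first, so the
# slowest tables do not end up as stragglers at the end of the downtime.
def sequence_tables(measured_durations):
    """measured_durations: {table: seconds}. Return tables longest-first."""
    return sorted(measured_durations, key=measured_durations.get, reverse=True)
```

With parallel R3load pairs, starting the longest tables first means the short ones fill in the remaining capacity, rather than a big table starting last and extending the window on its own.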
Consider downtime-optimized techniques: Downtime-optimized DMO (SAP Business Suite); delta queue cloning (SAP BW)
SUM RoadMap:
Extraction
1. It prompts for user credentials, i.e. the operating system user ID and password (<sid>adm), to communicate with the host agent, so that the host agent can start the SAPup process; SAPup then communicates with tp to establish the database connectivity.
2. Provide the stack.xml file.
3. Select the target database for the connectivity.
4. Provide the DDIC password to start the batch jobs.
5. Provide the OS SAPService<SID> user ID for the operating system services.
6. Switch to the expert mode
7. Select the checkbox "consider customer buffer file" if there are any transport requests that need to be included.
8. The SPAM dialog will not prompt if the SPAM version is sufficient and no update is needed.
9. Select the option "no table comparison (only row count)".
10. Provide the migration key for R3load.
11. A popup lists the notes that need to be applied; otherwise it continues to the configuration phase.
12. If there are any errors during the extraction phase, you can troubleshoot them using the files in /SUM/abap/log (SAPupConsole.log, the error logs, SAPupStats.log).
13. In the tmp folder you can check SAPupDialog.txt before it is moved to the log directory.
14. SAPupDialog.txt exists if a dialog is open
15. The srv folder contains the HTTP log files of SAPup.
16. The doc/analysis UPGANA.XML file contains information such as timings, component levels, and much more.
Configuration
1. In this phase we perform the tool configuration.
2. Choose the Advanced option to increase the number of parallel processes in the next screen.
3. Provide the number of processes required during uptime and downtime, such as ABAP (dialog/batch), SQL, R3trans, and R3load processes.
4. An optimal value is 6 R3load processes; since they run as pairs, 12 R3load processes are required.
5. Specify the number of processes to be used for load and generation; an optimal value is 3.
6. If the SAPup process is not started, execute the command manually: SAPup set procpar gt=scroll.
7. Give the target DB client path to hdbinst for the client installation.
8. Provide the password of the source DB schema user and the password of the OS user.
9. Provide the target DB connection information such as DB hostname, SID, instance
number
10. Provide the target system license
11. Provide the password of target DB user SYSTEM
12. Provide the SYSTEMDB user SYSTEM password, if it is multi container.
13. Choose the target DB tenant SID.
14. Choose the SAP HANA migration parameters: select the automatic load of table placement statements (this selection starts the uptime migration automatically).
15. Provide the migration-specific password for the DBACOCKPIT user.
16. Provide the target DB schema (SAPABAP1) user password.
17. Handle add-ons and bind the SAP Support Packages (these are independent of the database migration option).
18. Bind the customer transport requests.
19. Bind the transport request for the modification adjustment.
20. Configure the shadow system; provide a freely available instance number.
21. Provide the shadow DB user password.
22. If you want to save the shadow instance profiles, tick yes; they are saved in the folder /SUM/abap/save.
23. The Configuration roadmap is completed; continue with the Checks roadmap.
Checks
In this phase, SUM checks for any inconsistencies with the data and add-ons.
1. If open modification adjustment activities from a previous run exist, SUM now displays the status of SPDD and SPAU. Confirm all the obsolete notes and reset all non-adjusted objects with an active SAP version to the SAP standard.
2. If there are any unresolved SPDD and SPAU entries from a previous SUM run, a popup shows the number of obsolete and non-adjusted notes.
3. BW-specific parameters are checked, and BW-related housekeeping tasks are performed if there are any.
4. Delete unused BW data.
5. Delete temporary BW queries, sent BW query bookmarks, old BW traces, and BW statistics older than a given number of days (note that deleted data cannot be restored by the DMO reset).
6. Start ASU (Application-Specific Upgrade) in the case of a release change only. Use transaction /ASU/UPGRADE before performing the upgrade in the source system (it will reduce the downtime and optimize the performance of the SUM tool).
7. After successfully completing the Checks roadmap, continue with the Preprocessing roadmap.
8. Monitor and troubleshoot SUM DMO using the SUM observer mode via the URL hostname:1128/lmsl/sumobserver/SID/monitor/index.html
Preprocessing
1. The dialog about open repairs is only shown if a transport includes objects that are affected by the update of the system.
2. After some time, the DMO procedure proposes to lock the development environment. After this point, no further development or transports are allowed. The shadow repository will be created now, and any changes to the repository would not be included in the shadow repository and thus would not be part of the target system.
3. Lock the repository: no more development, no more SAP Notes via SNOTE, no more imports of transport requests.
4. Perform the scale-up (single node) if necessary; use only the attachment from the respective SAP Note, but ignore the content.
5. This dialog is displayed only when the automatic load of table placement statements was not selected during the Configuration roadmap step.
6. Until now, the DMO procedure executed the uptime processing, so the system was still available for end users. From now on, the preparation for downtime has to be performed.
7. Administrator starts the downtime.
8. Before the downtime starts, take a backup of the SUM directory so that the source DB, target DB, system directory, and program directory can be restored.
9. Backup completed
10. The Preprocessing roadmap is completed successfully; continue with the Execution roadmap.
Execution
1. The backup after downtime processing has to be triggered.
2. As per the screen of the SUM tool, the downtime is finished.
3. However, this is not the end of the technical downtime, nor the end of the business downtime.
4. Check for additional application servers, if they exist.
5. The SAP system is unlocked (tp unlocksys) in order to perform SPAU modification adjustments and follow-up activities; this is not a problem, as the end users are still locked by the administrator.
6. After successfully completing the Execution roadmap, continue to the Postprocessing roadmap.
Postprocessing
This dialog gives clear guidance on when the system can be used for manual postprocessing activities. All remaining phases after this dialog do not affect postprocessing; follow the guidance for importing transport requests.
Start of cleanup processing
Confirm no imports are running
A few of the ‘101’ follow up activities are listed here.