Zero Downtime Migration (ZDM) – Logical Offline Migration to Co-Managed Database Services

Introduction

Using Data Pump export and import is quite straightforward. Letting ZDM do it all for you (export, copy, import, and clean-up) makes it possible to test your migration multiple times with no additional effort. When you are ready for the real migration, you can be sure that exactly the same steps are executed, reducing the risk of mistakes.

This blog post provides step-by-step instructions on how to configure ZDM for Data Pump export and import with the minimal set of required parameters when migrating to Database Cloud Service or Exadata Cloud Service.

The Environment

We will use:

  • A ZDM host set up as described in this blog post.
  • Oracle database version 19.12 installed on a VM with IP 10.0.0.165 and hostname onpremdb as the source database.
  • VM DB System version 19.12 with IP 10.0.0.20 and hostname clouddb as the target database.

The same procedure can be used to migrate to BM DB Systems, Exadata Cloud Service, and Exadata Cloud@Customer; for the latter you will probably use the local file system or NFS as the storage medium instead of Cloud Object Storage.

Migration

Step 1: Prepare the source database host

Copy the SSH public key for zdmuser from the ZDM host (created in Part 1, post task 1) to the .ssh/authorized_keys file on the source database host for the user you want to use for login, in this case opc:

#on ZDM host as zdmuser
[zdmuser@zdmhost ~]$ cat .ssh/id_rsa.pub
#on the source database host as user opc
[opc@onpremdb ~]$ vi .ssh/authorized_keys
#insert the public key and save the changes
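The manual steps above can also be wrapped in a small helper that appends the key over SSH in one go. This is a sketch, assuming password-based SSH login for opc is still possible at this point; the function name copy_key is purely illustrative:

```shell
# copy_key HOST: append the local zdmuser public key to HOST's
# ~/.ssh/authorized_keys (sketch; assumes password SSH login still works)
copy_key() {
  cat ~/.ssh/id_rsa.pub | ssh "opc@$1" \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
}
```

Calling copy_key onpremdb then does the same as editing authorized_keys by hand; the standard ssh-copy-id utility is another alternative.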

Step 2: Prepare the source database

As SYS user:

SQL> alter system set streams_pool_size = 64M;

System altered.

SQL> alter user SYSTEM identified by <Your_SYSTEM_PW>;

User altered.

SQL> grant DATAPUMP_EXP_FULL_DATABASE to system;

Grant succeeded.
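Before handing over to ZDM, the prerequisites can be double-checked with two queries against standard dictionary views (a sketch; adjust the filters to your environment):

```sql
-- confirm the streams pool is sized
select value from v$parameter where name = 'streams_pool_size';

-- confirm SYSTEM holds the export role
select granted_role from dba_role_privs
 where grantee = 'SYSTEM'
   and granted_role = 'DATAPUMP_EXP_FULL_DATABASE';
```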

Step 3: Prepare the target database host

Copy the SSH public key for zdmuser from the ZDM host (created in Part 1, post task 1) to the .ssh/authorized_keys file on the target database host for the user you want to use for login, in this case opc:

#on ZDM host as zdmuser
[zdmuser@zdmhost ~]$ cat .ssh/id_rsa.pub
#on the target database host as user opc
[opc@clouddb ~]$ vi .ssh/authorized_keys
#insert the public key and save the changes

Step 4: Prepare the target database

As SYS user:

SQL> alter system set streams_pool_size = 2G;

System altered.

SQL> alter user SYSTEM identified by <Your_SYSTEM_PW>;

User altered.

SQL> grant DATAPUMP_IMP_FULL_DATABASE to system;

Grant succeeded.

Step 5: Prepare the ZDM host

Add the servers' hostname and IP information to the /etc/hosts file. As the root user:

[root@zdmhost ~]# vi /etc/hosts
#add the following entries
10.0.2.247 zdmhost.pubsubnetlb.vcnfra.oraclevcn.com  zdmhost
10.0.2.185 onpremdb
10.0.0.20 clouddb

Verify that TTY is disabled for the SSH privileged user. If TTY is disabled, the following command returns the date from the remote host without any errors:

[zdmuser@zdmhost ~]$ ssh -i /home/zdmuser/.ssh/id_rsa opc@onpremdb "/usr/bin/sudo /bin/sh -c date"
Fri Oct  8 07:25:17 GMT 2021

[zdmuser@zdmhost ~]$ ssh -i /home/zdmuser/.ssh/id_rsa opc@clouddb "/usr/bin/sudo /bin/sh -c date"
Fri Oct  8 07:25:44 UTC 2021
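If the command instead fails with "sudo: sorry, you must have a tty to run sudo", the requiretty option is still active on that host. It can be disabled for opc with a standard sudoers drop-in (a sketch; the file name zdm is arbitrary, and the file must be edited with visudo):

```
# /etc/sudoers.d/zdm  (create with: visudo -f /etc/sudoers.d/zdm)
Defaults:opc !requiretty
```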

Step 6: Create the ZDM response file on the ZDM host

You’ll find a template at $ZDMHOME/rhp/zdm/template/zdm_logical_template.rsp on the ZDM host that contains a brief description of the parameters and their possible values. Here we will create a new response file with the minimal parameters required. As zdmuser:

[zdmuser@zdmhost ~]$ vi /home/zdmuser/logical_offline_dbcs.rsp

# migration method
MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS
# data pump
DATAPUMPSETTINGS_JOBMODE=SCHEMA
INCLUDEOBJECTS-1=owner:HR
INCLUDEOBJECTS-2=owner:zdmmig
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_TABLESPACE,oldValue:USERS,newValue:DATA
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oci_core_emea
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=dumps
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_CREATEAUTHTOKEN=FALSE
DATAPUMPSETTINGS_DELETEDUMPSINOSS=TRUE
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=DATA_PUMP_DIR
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=DATA_PUMP_DIR
# on source and target db: select directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/admin/ORCL/dpdump/CD0CA4244B584339E05500001707684D
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/admin/PDB1/dpdump/CD338196B3207DE9E0531400000AE4A2
# source db
SOURCEDATABASE_CONNECTIONDETAILS_HOST=onpremdb
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=orclpdb
SOURCEDATABASE_ADMINUSERNAME=SYSTEM
# target db
TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1...
TARGETDATABASE_CONNECTIONDETAILS_HOST=clouddb
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=pdb1.publicsubnet.vcnfra.oraclevcn.com
TARGETDATABASE_ADMINUSERNAME=SYSTEM
# oci cli
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=7f:07:9b:29:f9:90:e3:45:dd:27:6d:09:56:70:eb:e9
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1

Set DATAPUMPSETTINGS_DELETEDUMPSINOSS=FALSE to keep the dump files on Object Storage after migration.
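When migrating to Exadata Cloud@Customer over NFS instead of Object Storage, the transfer-medium line changes and the directory objects point at a share mounted on both hosts. This is a sketch only; the directory name MIG_DIR and the mount path are hypothetical, and the template file lists the exact dump-transfer parameters your ZDM version expects:

```
# use an NFS share mounted on both hosts instead of Object Storage
DATA_TRANSFER_MEDIUM=NFS
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MIG_DIR
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/mnt/migshare/dumps
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=MIG_DIR
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/mnt/migshare/dumps
```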

Step 7: Evaluate the configuration

On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli migrate database \
-rsp logical_offline_dbcs.rsp \
-sourcenode onpremdb \
-sourcesid ORCL \
-srcauth zdmauth \
-srcarg1 user:opc \
-srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-srcarg3 sudo_location:/usr/bin/sudo \
-targetnode clouddb \
-tgtauth zdmauth \
-tgtarg1 user:opc \
-tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-tgtarg3 sudo_location:/usr/bin/sudo \
-eval

Enter source database administrative user "SYSTEM" password:
Enter target database administrative user "SYSTEM" password:
Enter Authentication Token for OCI user "ocid1.user.oc1...":
Operation "zdmcli migrate database" scheduled with the job ID "4".

If the source database is using ASM for storage management, then use -sourcedb <db_unique_name> instead of -sourcesid <SID> in the zdmcli command.
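The value to pass via -sourcedb can be looked up on the source database:

```sql
-- db_unique_name to pass to -sourcedb when the source uses ASM
select db_unique_name from v$database;
```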

Check the job status. On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli query job -jobid 4

Job ID: 4
User: zdmuser
Client: zdmhost
Job Type: "EVAL"
...
Current status: EXECUTING
Current Phase: "ZDM_VALIDATE_TGT"
Result file path: "/home/zdmuser/zdmbase/chkbase/scheduled/job-4-2021-10-08-07:55:09.log"
...
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_VALIDATE_TGT ...................... STARTED
ZDM_SETUP_SRC ......................... PENDING
ZDM_PRE_MIGRATION_ADVISOR ............. PENDING
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... PENDING
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... PENDING
ZDM_PREPARE_DATAPUMP_SRC .............. PENDING
ZDM_DATAPUMP_ESTIMATE_SRC ............. PENDING
ZDM_CLEANUP_SRC ....................... PENDING

Wait until all phases are completed. To repeat the check every 10 seconds:

[zdmuser@zdmhost ~]$ while :; do $ZDMHOME/bin/zdmcli query job -jobid 4; sleep 10; done

ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_CLEANUP_SRC ....................... COMPLETED
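The loop above runs until you interrupt it. A small wrapper can instead return as soon as no phase is left pending or running; this is a sketch built around the status keywords visible in the output above (the function name poll_job is illustrative):

```shell
# poll_job JOBID: query the job every 10 seconds and return once no
# phase is reported as PENDING, STARTED, or EXECUTING any more
poll_job() {
  while :; do
    out=$("$ZDMHOME"/bin/zdmcli query job -jobid "$1")
    printf '%s\n' "$out"
    printf '%s\n' "$out" | grep -qE 'PENDING|STARTED|EXECUTING' || return 0
    sleep 10
  done
}
```

Call it as poll_job 4 on the ZDM host as zdmuser.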

Step 8: Initiate the migration

Execute the same command for evaluation, but this time without the -eval parameter. On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli migrate database \
-rsp logical_offline_dbcs.rsp \
-sourcenode onpremdb \
-sourcesid ORCL \
-srcauth zdmauth \
-srcarg1 user:opc \
-srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-srcarg3 sudo_location:/usr/bin/sudo \
-targetnode clouddb \
-tgtauth zdmauth \
-tgtarg1 user:opc \
-tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-tgtarg3 sudo_location:/usr/bin/sudo

Enter source database administrative user "SYSTEM" password:
Enter target database administrative user "SYSTEM" password:
Enter Authentication Token for OCI user "ocid1.user.oc1...":
Operation "zdmcli migrate database" scheduled with the job ID "7".

Check the job status. On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ while :; do $ZDMHOME/bin/zdmcli query job -jobid 7; sleep 10; done

Job ID: 7
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
...
Current status: EXECUTING
Current Phase: "ZDM_DATAPUMP_EXPORT_SRC"
Result file path: "/home/zdmuser/zdmbase/chkbase/scheduled/job-7-2021-10-08-10:00:39.log"...
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED
ZDM_DATAPUMP_EXPORT_SRC ............... STARTED
ZDM_UPLOAD_DUMPS_SRC .................. PENDING
ZDM_DATAPUMP_IMPORT_TGT ............... PENDING
ZDM_POST_DATAPUMP_SRC ................. PENDING
ZDM_POST_DATAPUMP_TGT ................. PENDING
ZDM_POST_ACTIONS ...................... PENDING
ZDM_CLEANUP_SRC ....................... PENDING

Check your target database once the migration is completed. It will contain the schemas and data from the source database.
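A quick way to check is to confirm that the migrated schemas and their objects arrived, here HR and ZDMMIG to match the INCLUDEOBJECTS entries in the response file (a sketch against standard dictionary views):

```sql
-- on the target PDB: confirm the migrated schemas exist
select username, created from dba_users
 where username in ('HR', 'ZDMMIG');

-- and count their objects
select owner, count(*) as object_count from dba_objects
 where owner in ('HR', 'ZDMMIG')
 group by owner;
```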

The dump files will also be cleaned up from the Data Pump directories on the source and target hosts. The Data Pump log files, however, will still be there so you can review them.

Log Files

In case of any issue, check the following log files:

#job log file on the zdm host
[zdmuser@zdmhost ~]$ view /home/zdmuser/zdmbase/chkbase/scheduled/job-<job_id>-<date>.log

#ZDM log file on the zdm host
[zdmuser@zdmhost ~]$ view /home/zdmuser/zdmbase/crsdata/zdmhost/rhp/zdmserver.log.0

#data pump export log files on the source database host
[oracle@onpremdb ~]$ ls -l /u01/app/oracle/admin/ORCL/dpdump/CD0CA4244B584339E05500001707684D

#data pump import log files on the target database host
[oracle@clouddb ~]$ ls -l /u01/app/oracle/admin/PDB1/dpdump/CD338196B3207DE9E0531400000AE4A2

Conclusion

After investing some work in the setup, all steps are done for you with a single command: export from the source, upload to Object Storage, copy from Object Storage to the target host, import into the target database, and clean-up of the dump files.

ZDM offers a wide range of options that you might need for more flexibility and control. Have a look at the documentation for the complete list of available parameters.
