Zero Downtime Migration (ZDM) – Logical Online Migration to VM DB Systems

Introduction

Configuring GoldenGate for database migrations can be complex and requires deep knowledge and expertise in GoldenGate. Zero Downtime Migration (ZDM), on the other hand, requires only a response file with the needed parameters: it configures and runs everything for you automatically to achieve (near) zero downtime while migrating your Oracle database to Oracle Cloud.

This blog post provides step-by-step instructions on configuring ZDM to use GoldenGate with the minimal parameters required.

The Environment

We will use:

  • The ZDM host as described in this blog post.
  • Oracle database version 19.12 installed on a VM with IP 10.0.0.165 and hostname onpremdb as the source database.
  • VM DB System version 19.12 with IP 10.0.0.20 and hostname clouddb as the target database.
  • GoldenGate from Oracle Cloud Marketplace: search for “Marketplace” under “Applications”, search for “GoldenGate”, choose “Oracle GoldenGate for Oracle”, select the Microservices Edition, and provision a compute VM on which GoldenGate is installed automatically. I chose gghost as the host name; the IP address is 10.0.0.240.

The same procedure can be used to migrate to Exadata Cloud Service and Exadata Cloud@Customer (ExaCC). For ExaCC, you will probably use the local file system or NFS as a storage medium instead of Cloud Object Storage.
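For an ExaCC target with NFS, only the transfer-related parameters of the response file change. A minimal sketch, assuming a shared NFS mount visible from both database hosts (the paths and the directory object name are placeholders; check $ZDMHOME/rhp/zdm/template/zdm_logical_template.rsp for the exact parameters supported by your ZDM version):

# sketch only: NFS instead of Object Storage
DATA_TRANSFER_MEDIUM=NFS
# both directory objects point to the same shared NFS mount,
# so no upload to Object Storage is needed
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=MIG_DIR
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/nfs/zdm_dumps
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=MIG_DIR
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/nfs/zdm_dumps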

Migration

Step 1: Prepare the source database host

Copy the SSH public key for zdmuser from the ZDM host (created in Part 1, post-task 1) to the .ssh/authorized_keys file on the source database host for the user you want to use for login, in this case, opc:

#on ZDM host as zdmuser
[zdmuser@zdmhost ~]$ cat .ssh/id_rsa.pub
#on the source database host as user opc
[opc@onpremdb ~]$ vi .ssh/authorized_keys
#insert the public key and save the changes
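
Alternatively, if password authentication is temporarily enabled for opc, the key can be copied in one step with ssh-copy-id (a convenience sketch; on OCI compute instances password login is usually disabled, so the manual method above is the default):

#on ZDM host as zdmuser
[zdmuser@zdmhost ~]$ ssh-copy-id -i /home/zdmuser/.ssh/id_rsa.pub opc@onpremdb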

Step 2: Prepare the source database

As SYS user:

---- DataPump
SQL> alter system set streams_pool_size = 2G;

System altered.

SQL> alter user SYSTEM identified by <Your_SYSTEM_PW>;

User altered.

SQL> grant DATAPUMP_EXP_FULL_DATABASE to system;

Grant succeeded.

---- GoldenGate
-- Set global_names to false
SQL> alter system set global_names=false;
 
-- Enable ARCHIVELOG mode:
SQL> select log_mode from v$database;
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
 
-- Enable FORCE LOGGING to ensure that all changes are found in the redo by the Oracle GoldenGate Extract process:
SQL> select force_logging from v$database;
SQL> alter database force logging;
SQL> select force_logging from v$database;
FORCE_LOGGING
---------------------------------------
YES
 
-- Enable database minimal supplemental logging:
SQL> alter database add supplemental log data;
SQL> select minimal from dba_supplemental_logging;
MINIMAL
----------
YES
 
-- Enable initialization parameter ENABLE_GOLDENGATE_REPLICATION:
SQL> alter system set ENABLE_GOLDENGATE_REPLICATION=TRUE scope=both;
System altered.
 
-- In case of Multitenant, create the user c##ggadmin in CDB$ROOT:
SQL> create user c##ggadmin identified by <Your_GGADMIN_PW> default tablespace users temporary tablespace temp;
SQL> grant connect, resource to c##ggadmin;
SQL> grant unlimited tablespace to c##ggadmin;
SQL> grant select any dictionary to c##ggadmin;
SQL> grant create view to c##ggadmin;
SQL> grant execute on dbms_lock to c##ggadmin;
SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('c##ggadmin',container=>'all');
 
-- Create a GoldenGate administration user, ggadmin (in the PDB in case of Multitenant):
SQL> alter session set container=ORCLPDB;
SQL> create user ggadmin identified by <Your_GGADMIN_PW> default tablespace users temporary tablespace temp;
SQL> grant connect, resource to ggadmin;
SQL> grant unlimited tablespace to ggadmin;
SQL> grant select any dictionary to ggadmin;
SQL> grant create view to ggadmin;
SQL> grant execute on dbms_lock to ggadmin;
SQL> exec dbms_goldengate_auth.GRANT_ADMIN_PRIVILEGE('ggadmin');
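
Before moving on, a quick sanity check of the settings made above can be done with a single query (optional):

-- expected: ARCHIVELOG, YES, YES
SQL> select log_mode, force_logging, supplemental_log_data_min from v$database;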

Step 3: Prepare the target database host

Copy the SSH public key for zdmuser from the ZDM host (created in Part 1, post-task 1) to the .ssh/authorized_keys file on the target database host for the user you want to use for login, in this case, opc:

#on ZDM host as zdmuser
[zdmuser@zdmhost ~]$ cat .ssh/id_rsa.pub
#on the target database host as user opc
[opc@clouddb ~]$ vi .ssh/authorized_keys
#insert the public key and save the changes

Step 4: Prepare the target database

As SYS user:

---- DataPump
SQL> alter system set streams_pool_size = 2G;

System altered.

SQL> alter session set container=PDB1;

Session altered.

SQL> alter user SYSTEM identified by <Your_SYSTEM_PW>;

User altered.

SQL> grant DATAPUMP_IMP_FULL_DATABASE to system;

Grant succeeded.

---- GoldenGate
-- Enable initialization parameter ENABLE_GOLDENGATE_REPLICATION:
SQL> alter system set ENABLE_GOLDENGATE_REPLICATION=TRUE scope=both;
System altered.

-- Create a user for GoldenGate (in PDB)
SQL> alter session set container=PDB1;
SQL> create user ggadmin identified by <GG_ADMIN_PW> default tablespace users temporary tablespace temp;
SQL> grant connect, resource to ggadmin;
SQL> grant unlimited tablespace to ggadmin;
SQL> grant select any dictionary to ggadmin;
SQL> grant create view to ggadmin;
SQL> grant insert any table to ggadmin;
SQL> grant update any table to ggadmin;
SQL> grant delete any table to ggadmin;
SQL> grant execute on dbms_lock to ggadmin;
SQL> exec dbms_goldengate_auth.grant_admin_privilege('ggadmin');
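
Optionally, verify the GoldenGate-related settings on the target (still connected to PDB1):

-- expected: TRUE, and one row for GGADMIN
SQL> show parameter enable_goldengate_replication
SQL> select username, account_status from dba_users where username = 'GGADMIN';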

Step 5: Get the GoldenGate credentials

Log in to the GoldenGate host via SSH and get the credentials for the oggadmin user:

-bash-4.2$ cat /home/opc/ogg-credentials.json
{"username": "oggadmin", "credential": "H0rBCgbNf%svIkAB"}

Step 6: Prepare the ZDM host

Add the hostname and IP information of all involved hosts to the /etc/hosts file. As root user:

[root@zdmhost ~]# vi /etc/hosts
#add the following entries
10.0.2.247 zdmhost.pubsubnetlb.vcnfra.oraclevcn.com  zdmhost
10.0.0.165 onpremdb
10.0.0.20 clouddb
10.0.0.240 gghost
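
To confirm that the new entries resolve on the ZDM host (optional), getent reads them straight from /etc/hosts:

[zdmuser@zdmhost ~]$ getent hosts onpremdb clouddb gghost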

Verify that TTY is disabled for the SSH privileged user. If TTY is disabled, the following commands return the date from the remote hosts without any errors:

[zdmuser@zdmhost ~]$ ssh -i /home/zdmuser/.ssh/id_rsa opc@onpremdb "/usr/bin/sudo /bin/sh -c date"
Fri Oct  8 07:25:17 GMT 2021

[zdmuser@zdmhost ~]$ ssh -i /home/zdmuser/.ssh/id_rsa opc@clouddb "/usr/bin/sudo /bin/sh -c date"
Fri Oct  8 07:25:44 UTC 2021

Step 7: Create the ZDM response file on the ZDM host

You’ll find a template at $ZDMHOME/rhp/zdm/template/zdm_logical_template.rsp on the ZDM host, which contains a brief description of the parameters and their possible values. Here we will create a new response file with the minimum parameters required. As zdmuser:

[zdmuser@zdmhost ~]$ vi /home/zdmuser/logical_online_dbcs.rsp

# migration method
MIGRATION_METHOD=ONLINE_LOGICAL
DATA_TRANSFER_MEDIUM=OSS
# data pump
DATAPUMPSETTINGS_JOBMODE=SCHEMA
INCLUDEOBJECTS-1=owner:HR
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_TABLESPACE,oldValue:USERS,newValue:DATA
DATAPUMPSETTINGS_DATABUCKET_NAMESPACENAME=oci_core_emea
DATAPUMPSETTINGS_DATABUCKET_BUCKETNAME=dumps
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_IMPORTPARALLELISMDEGREE=2
DATAPUMPSETTINGS_CREATEAUTHTOKEN=FALSE
DATAPUMPSETTINGS_DELETEDUMPSINOSS=TRUE
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_NAME=DATA_PUMP_DIR
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_NAME=DATA_PUMP_DIR
# on source and target db: select directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';
DATAPUMPSETTINGS_EXPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/admin/ORCL/dpdump/CD0CA4244B584339E05500001707684D
DATAPUMPSETTINGS_IMPORTDIRECTORYOBJECT_PATH=/u01/app/oracle/admin/PDB1/dpdump/CD338196B3207DE9E0531400000AE4A2
# source pdb
SOURCEDATABASE_CONNECTIONDETAILS_HOST=onpremdb
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=orclpdb
SOURCEDATABASE_ADMINUSERNAME=SYSTEM
SOURCEDATABASE_GGADMINUSERNAME=ggadmin
# source cdb
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_HOST=onpremdb
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCECONTAINERDATABASE_CONNECTIONDETAILS_SERVICENAME=ORCL
SOURCECONTAINERDATABASE_ADMINUSERNAME=SYSTEM
SOURCECONTAINERDATABASE_GGADMINUSERNAME=c##ggadmin
# target db
TARGETDATABASE_OCID=ocid1.database.oc1.eu-frankfurt-1...
TARGETDATABASE_CONNECTIONDETAILS_HOST=clouddb
TARGETDATABASE_CONNECTIONDETAILS_PORT=1521
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=pdb1.publicsubnet.vcnfra.oraclevcn.com
TARGETDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_GGADMINUSERNAME=ggadmin
# oci cli
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1...
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=7f:07:9b:29:f9:90:e3:45:dd:27:6d:09:56:70:eb:e9
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/zdmuser/.oci/oci_api_key.pem
OCIAUTHENTICATIONDETAILS_REGIONID=eu-frankfurt-1
# GoldenGate
GOLDENGATEHUB_ADMINUSERNAME=oggadmin
GOLDENGATEHUB_SOURCEDEPLOYMENTNAME=Source
GOLDENGATEHUB_TARGETDEPLOYMENTNAME=Target
#use the FQDN (not the IP) of the GoldenGate host
GOLDENGATEHUB_URL=https://gghost.publicsubnet.vcnfra.oraclevcn.com
GOLDENGATEHUB_COMPUTEID=ocid1.instance.oc1.eu-frankfurt-1...

Set DATAPUMPSETTINGS_DELETEDUMPSINOSS=FALSE to keep the dump files on Object Storage after migration.

Step 8: Evaluate the configuration

On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli migrate database \
-rsp logical_online_dbcs.rsp \
-sourcenode onpremdb \
-sourcesid ORCL \
-srcauth zdmauth \
-srcarg1 user:opc \
-srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-srcarg3 sudo_location:/usr/bin/sudo \
-targetnode clouddb \
-tgtauth zdmauth \
-tgtarg1 user:opc \
-tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-tgtarg3 sudo_location:/usr/bin/sudo \
-eval

Enter source database administrative user "SYSTEM" password:
Enter source database administrative user "ggadmin" password:
Enter source container database administrative user "SYSTEM" password:
Enter source container database administrative user "c##ggadmin" password:
Enter target database administrative user "SYSTEM" password:
Enter target database administrative user "ggadmin" password:
Enter Oracle GoldenGate hub administrative user "oggadmin" password:
Enter Authentication Token for OCI user "ocid1.user.oc1...":
Operation "zdmcli migrate database" scheduled with the job ID "11".

If the source database is using ASM for storage management, then use -sourcedb <db_unique_name> instead of -sourcesid <SID> in the zdmcli command.
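
In that case, only the source specification changes; the rest of the command stays the same (sketch, with <db_unique_name> as a placeholder):

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli migrate database \
-rsp logical_online_dbcs.rsp \
-sourcenode onpremdb \
-sourcedb <db_unique_name> \
-srcauth zdmauth \
...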

Check the job status. On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli query job -jobid 11

Job ID: 11
User: zdmuser
Client: zdmhost
Job Type: "EVAL"
...
Current status: EXECUTING
Current Phase: "ZDM_SETUP_SRC"
Result file path: "/home/zdmuser/zdmbase/chkbase/scheduled/job-11-2021-10-10-15:08:39.log"
...
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_SETUP_SRC ......................... STARTED
ZDM_PRE_MIGRATION_ADVISOR ............. PENDING
ZDM_VALIDATE_GG_HUB ................... PENDING
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... PENDING
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... PENDING
ZDM_PREPARE_DATAPUMP_SRC .............. PENDING
ZDM_DATAPUMP_ESTIMATE_SRC ............. PENDING
ZDM_CLEANUP_SRC ....................... PENDING

Wait until all phases are completed. To repeat the check every 10 seconds:

[zdmuser@zdmhost ~]$ while :; do $ZDMHOME/bin/zdmcli query job -jobid 11; sleep 10; done

ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_GG_HUB ................... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_CLEANUP_SRC ....................... COMPLETED

Step 9: Initiate the migration

Execute the same command as for the evaluation, but this time without the -eval parameter. Pause the migration after the phase ZDM_MONITOR_GG_LAG; data replication then runs continuously until the job is resumed.

On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli migrate database \
-rsp logical_online_dbcs.rsp \
-sourcenode onpremdb \
-sourcesid ORCL \
-srcauth zdmauth \
-srcarg1 user:opc \
-srcarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-srcarg3 sudo_location:/usr/bin/sudo \
-targetnode clouddb \
-tgtauth zdmauth \
-tgtarg1 user:opc \
-tgtarg2 identity_file:/home/zdmuser/.ssh/id_rsa \
-tgtarg3 sudo_location:/usr/bin/sudo \
-pauseafter ZDM_MONITOR_GG_LAG

Enter source database administrative user "SYSTEM" password:
Enter source database administrative user "ggadmin" password:
Enter source container database administrative user "SYSTEM" password:
Enter source container database administrative user "c##ggadmin" password:
Enter target database administrative user "SYSTEM" password:
Enter target database administrative user "ggadmin" password:
Enter Oracle GoldenGate hub administrative user "oggadmin" password:
Enter Authentication Token for OCI user "ocid1.user.oc1...":
Operation "zdmcli migrate database" scheduled with the job ID "13".

Check the job status. On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ while :; do $ZDMHOME/bin/zdmcli query job -jobid 13; sleep 10; done

Job ID: 13
User: zdmuser
Client: zdmhost
Job Type: "MIGRATE"
...
Current status: PAUSED
Current Phase: "ZDM_MONITOR_GG_LAG"
Result file path: "/home/zdmuser/zdmbase/chkbase/scheduled/job-13-2021-10-10-15:20:39.log"
...
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_GG_HUB ................... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_DATAPUMP_ESTIMATE_SRC ............. COMPLETED
ZDM_PREPARE_GG_HUB .................... COMPLETED
ZDM_ADD_HEARTBEAT_SRC ................. COMPLETED
ZDM_ADD_SCHEMA_TRANDATA_SRC ........... COMPLETED
ZDM_CREATE_GG_EXTRACT_SRC ............. COMPLETED
ZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED
ZDM_DATAPUMP_EXPORT_SRC ............... COMPLETED
ZDM_UPLOAD_DUMPS_SRC .................. COMPLETED
ZDM_DATAPUMP_IMPORT_TGT ............... COMPLETED
ZDM_POST_DATAPUMP_SRC ................. COMPLETED
ZDM_POST_DATAPUMP_TGT ................. COMPLETED
ZDM_ADD_HEARTBEAT_TGT ................. COMPLETED
ZDM_ADD_CHECKPOINT_TGT ................ COMPLETED
ZDM_CREATE_GG_REPLICAT_TGT ............ COMPLETED
ZDM_MONITOR_GG_LAG .................... COMPLETED
ZDM_SWITCHOVER_APP .................... PENDING
ZDM_RM_GG_EXTRACT_SRC ................. PENDING
ZDM_RM_GG_REPLICAT_TGT ................ PENDING
ZDM_DELETE_SCHEMA_TRANDATA_SRC ........ PENDING
ZDM_RM_HEARTBEAT_SRC .................. PENDING
ZDM_RM_CHECKPOINT_TGT ................. PENDING
ZDM_RM_HEARTBEAT_TGT .................. PENDING
ZDM_CLEAN_GG_HUB ...................... PENDING
ZDM_POST_ACTIONS ...................... PENDING
ZDM_CLEANUP_SRC ....................... PENDING

Pay attention to the current job status: it is now PAUSED, and the progress stopped after the phase ZDM_MONITOR_GG_LAG was COMPLETED.

At this stage, every change on the source database is immediately synchronized with the target database. Resume the job when your application is ready for the final migration.
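
To convince yourself that replication is active, you can query the GoldenGate heartbeat view on the target and make a small test change on the source (a sketch; it assumes the standard HR sample schema and that ZDM created the heartbeat objects in the ggadmin schema, as the ZDM_ADD_HEARTBEAT_TGT phase suggests):

-- on the target PDB (PDB1): check the replication lag via the heartbeat view
SQL> select * from ggadmin.gg_lag;

-- on the source PDB (ORCLPDB): change a row ...
SQL> update hr.employees set phone_number = '515.123.9999' where employee_id = 100;
SQL> commit;

-- ... and verify it shortly afterwards on the target PDB (PDB1)
SQL> select phone_number from hr.employees where employee_id = 100;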

Step 10: Complete the migration

Resume the job from the previous step. On the ZDM host as zdmuser:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli resume job -jobid 13

Query the status again and wait until all phases are completed.
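
Once the job has finished, the phases that were still PENDING in the earlier listing should all show COMPLETED and the job status should no longer be PAUSED, for example:

[zdmuser@zdmhost ~]$ $ZDMHOME/bin/zdmcli query job -jobid 13
...
ZDM_SWITCHOVER_APP .................... COMPLETED
ZDM_RM_GG_EXTRACT_SRC ................. COMPLETED
ZDM_RM_GG_REPLICAT_TGT ................ COMPLETED
ZDM_DELETE_SCHEMA_TRANDATA_SRC ........ COMPLETED
ZDM_RM_HEARTBEAT_SRC .................. COMPLETED
ZDM_RM_CHECKPOINT_TGT ................. COMPLETED
ZDM_RM_HEARTBEAT_TGT .................. COMPLETED
ZDM_CLEAN_GG_HUB ...................... COMPLETED
ZDM_POST_ACTIONS ...................... COMPLETED
ZDM_CLEANUP_SRC ....................... COMPLETED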

Log Files

In case of any issue, check the following log files:

#job log file on the zdm host
[zdmuser@zdmhost ~]$ view /home/zdmuser/zdmbase/chkbase/scheduled/job-<job_id>-<date>.log

#ZDM log file on the zdm host
[zdmuser@zdmhost ~]$ view /home/zdmuser/zdmbase/crsdata/zdmhost/rhp/zdmserver.log.0

#data pump export files on the source database host
[oracle@onpremdb ~]$ ls -l /u01/app/oracle/admin/ORCL/dpdump/CD0CA4244B584339E05500001707684D

#data pump import files on the target database host
[oracle@clouddb ~]$ ls -l /u01/app/oracle/admin/PDB1/dpdump/CD338196B3207DE9E0531400000AE4A2

#Extract and Replicate config on the GoldenGate host
/u02/deployments/Source/etc/conf/ogg/
/u02/deployments/Target/etc/conf/ogg/

Conclusion

After investing some work in the setup, all steps are done for you with a single command: export from source, move to Object Storage, copy from Object Storage to the target host, import into the target database, and clean up the dump files. Additionally, GoldenGate is configured to keep the source and target databases in sync, providing a (near) zero downtime migration.

ZDM offers a wide range of options that you might need for more flexibility and control. Have a look at the documentation for the complete list of available parameters.
