
Introduction
Enabling Oracle Data Guard in Oracle Cloud is a matter of a few clicks in the Cloud Console. However, many customers prefer to manage their resource provisioning in an automated way using Terraform, OCI CLI, SDKs, or the REST API for more consistency and efficiency.
Providing the correct values to the OCI CLI command that enables cross-region Oracle Data Guard can be a bit challenging, though. This blog post explains how to enable cross-region Oracle Data Guard for the Base Database Service using OCI CLI.
The Environment
- A VCN in the Oracle Cloud Frankfurt region, where the primary VM DB System is created.
- A VCN in the Oracle Cloud London region, where the standby database will be created.
Preparation
Step 1: Establish Remote VCN Peering between Frankfurt and London
Remote VCN Peering connects VCNs in different regions to allow resources to communicate using private IP addresses over the Oracle backbone network without routing the traffic over the internet.
Follow the documentation to peer your two VCNs in the different regions.
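If you want to script the peering as well, the same setup can be done with OCI CLI. The following is only a minimal sketch with placeholder OCIDs, assuming a Dynamic Routing Gateway (DRG) is already created and attached to the VCN in each region; follow the documentation above for the complete procedure, including the route table updates, and confirm the exact commands in the OCI CLI network reference.
# Sketch: create a Remote Peering Connection (RPC) on the DRG in each region,
# then connect the two RPCs from one side (placeholder OCIDs).
oci network remote-peering-connection create \
  --compartment-id "ocid1.compartment.oc1..xxx" \
  --drg-id "ocid1.drg.oc1.eu-frankfurt-1.xxx" \
  --display-name "RPC-FRA" \
  --region eu-frankfurt-1

oci network remote-peering-connection create \
  --compartment-id "ocid1.compartment.oc1..xxx" \
  --drg-id "ocid1.drg.oc1.uk-london-1.xxx" \
  --display-name "RPC-LHR" \
  --region uk-london-1

# Establish the peering from the Frankfurt side using the two RPC OCIDs
oci network remote-peering-connection connect \
  --remote-peering-connection-id "ocid1.remotepeeringconnection.oc1.eu-frankfurt-1.xxx" \
  --peer-id "ocid1.remotepeeringconnection.oc1.uk-london-1.xxx" \
  --peer-region-name "uk-london-1" \
  --region eu-frankfurt-1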
Step 2: Enable network traffic to flow between the VCNs
Open TCP port 1521 to allow Oracle Data Guard redo transport between the primary and standby databases.
Add the appropriate Security List and Route Table rules to your subnets in both regions to allow TCP traffic over port 1521 in both directions.
Check the documentation for an example of ingress and egress rules.
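As an illustration, the ingress rule on the Frankfurt side could also be added via OCI CLI. This is a sketch with placeholder OCID and CIDR values; be aware that --ingress-security-rules replaces the complete rule list of the Security List, so include your existing rules in the JSON as well.
# Sketch: allow TCP/1521 from the peered (London) VCN CIDR - placeholder values.
# Caution: --ingress-security-rules REPLACES the entire ingress rule list.
oci network security-list update \
  --security-list-id "ocid1.securitylist.oc1.eu-frankfurt-1.xxx" \
  --ingress-security-rules '[{"source": "10.1.0.0/16", "protocol": "6", "isStateless": false, "tcpOptions": {"destinationPortRange": {"min": 1521, "max": 1521}}}]' \
  --region eu-frankfurt-1 \
  --force
Protocol "6" stands for TCP. Repeat the equivalent rule (and an egress rule, if your Security List does not already allow all egress traffic) on the London side with the Frankfurt VCN CIDR as the source.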
Enable Data Guard
Step 3: Create a Standby Database in the remote region using OCI CLI
After the VCNs are peered and network traffic is allowed to flow between the VCNs, it’s time to use OCI CLI to enable cross-region Data Guard.
You can use Cloud Shell, or install OCI CLI on a VM and authenticate, to execute the OCI CLI commands.
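For example, a quick sanity check that the CLI is installed and authenticated before you continue:
# Verify the CLI version and that authentication works
oci --version
oci iam region list --output table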
The OCI CLI command needed is
oci db data-guard-association create with-new-db-system [OPTIONS]
as described here with the following required parameters:
oci db data-guard-association create with-new-db-system \
--availability-domain "AAef:UK-LONDON-1-AD-1" \
--creation-type "NewDbSystem" \
--database-admin-password "Your_SYS_Password" \
--database-id "ocid1.database.oc1.eu-frankfurt-1.xxx" \
--display-name "DBCSLHR" \
--hostname "hostlhr" \
--protection-mode "MAXIMUM_PERFORMANCE" \
--subnet-id "ocid1.subnet.oc1.uk-london-1.xxx" \
--transport-type "ASYNC" \
--cpu-core-count 1 \
--region eu-frankfurt-1 \
--debug
Pay attention to the following parameters:
- availability-domain is the Availability Domain (AD) name in the region where the standby DB System will be created (here London). To find out the AD names, execute the following command in the corresponding region or add the region name: oci iam availability-domain list --region uk-london-1
- database-id is the OCID of the DATABASE, not the DB System, that will become the primary (here in the Frankfurt region). The OCID begins with ocid1.database; see the lookup sketch after this list for a way to find it.
- subnet-id is the OCID of the subnet where the standby DB System will be created (here the subnet in London region).
- cpu-core-count is the number of OCPUs of the standby DB System and can differ from the primary (sometimes lower to save costs). This parameter is mandatory even though the documentation above lists it as optional.
- region is the region name of the database that will become the primary (here Frankfurt). This parameter is optional if you run the CLI command in the primary region (Frankfurt). My recommendation is to always include the region name, so it does not matter where you run the command.
- If you don’t include the region name and don’t run the command in the primary region (Frankfurt), you’ll get “Authorization failed or requested resource not found”, indicating that the database, which is in Frankfurt, cannot be found.
- debug is optional and prints detailed debug output, which helps if you run into issues.
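To look up the database OCID for the database-id parameter, you can list the databases of the primary DB System in Frankfurt. A minimal sketch with placeholder OCIDs:
# Sketch: list the databases of the primary DB System to find the DATABASE OCID
oci db database list \
  --compartment-id "ocid1.compartment.oc1..xxx" \
  --db-system-id "ocid1.dbsystem.oc1.eu-frankfurt-1.xxx" \
  --region eu-frankfurt-1 \
  --query 'data[].{"db-name":"db-name", "id":id}' \
  --output table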
After successful execution of the create command, you will get a response similar to the following:
{
  "data": {
    "apply-lag": null,
    "apply-rate": null,
    "database-id": "ocid1.database.oc1.eu-frankfurt-1.xxx",
    "id": "ocid1.dgassociation.oc1.eu-frankfurt-1.xxx",
    "is-active-data-guard-enabled": true,
    "lifecycle-details": null,
    "lifecycle-state": "PROVISIONING",
    "peer-data-guard-association-id": null,
    "peer-database-id": null,
    "peer-db-home-id": null,
    "peer-db-system-id": null,
    "peer-role": "STANDBY",
    "protection-mode": "MAXIMUM_PERFORMANCE",
    "role": "PRIMARY",
    "time-created": "2023-06-19T12:30:19.111000+00:00",
    "transport-type": "ASYNC"
  },
  "etag": "7a0937ce",
  "opc-work-request-id": "ocid1.coreservicesworkrequest.oc1.eu-frankfurt-1.xxx"
}
Step 4: Query the provisioning status
The id from the above output is the Data Guard Association OCID that can be used to query the request status:
oci db data-guard-association get \
--data-guard-association-id "ocid1.dgassociation.oc1.eu-frankfurt-1.xxx" \
--database-id "ocid1.database.oc1.eu-frankfurt-1.xxx" \
--region eu-frankfurt-1
The output will be similar to the above. Pay attention to the lifecycle state:
{
  "data": {
    ...
    "lifecycle-state": "PROVISIONING",
    "peer-data-guard-association-id": null,
    "peer-database-id": null,
    "peer-db-home-id": null,
    "peer-db-system-id": null,
    ...
  },
  ...
}
The data of the peer (standby) will become available as soon as the provisioning is completed:
{
  "data": {
    ...
    "lifecycle-state": "AVAILABLE",
    "peer-data-guard-association-id": "ocid1.dgassociation.oc1.uk-london-1.xxx",
    "peer-database-id": "ocid1.database.oc1.uk-london-1.xxx",
    "peer-db-home-id": "ocid1.dbhome.oc1.uk-london-1.xxx",
    "peer-db-system-id": "ocid1.dbsystem.oc1.uk-london-1.xxx",
    ...
  },
  ...
}
As only one Data Guard standby per database is supported via Cloud Tooling as of today, you could also use the data-guard-association list command to get the same information by providing only the database OCID:
oci db data-guard-association list \
--database-id "ocid1.database.oc1.eu-frankfurt-1.xxx" \
--region eu-frankfurt-1
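If you automate the whole flow, a simple polling loop around the get command can wait for the standby to become available. A minimal bash sketch, reusing the placeholder OCIDs from above:
# Sketch: poll the Data Guard association until the standby is AVAILABLE
while true; do
  STATE=$(oci db data-guard-association get \
    --data-guard-association-id "ocid1.dgassociation.oc1.eu-frankfurt-1.xxx" \
    --database-id "ocid1.database.oc1.eu-frankfurt-1.xxx" \
    --region eu-frankfurt-1 \
    --query 'data."lifecycle-state"' --raw-output)
  echo "$(date +%H:%M:%S) lifecycle-state: ${STATE}"
  [ "${STATE}" = "AVAILABLE" ] && break
  sleep 60
done
Provisioning the cross-region standby DB System takes a while, so a polling interval of a minute or more is sufficient.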
Considerations
- For Base Database Service, the primary and standby databases must be in VCNs that are directly peered. Hub-and-Spoke network architecture is not yet supported. If you have a hub-and-spoke layout, you can go with a manual Data Guard configuration.
- The database must use Oracle-managed encryption keys (local TDE wallet). Cross-region Data Guard for customer-managed encryption keys (OCI Vault) is not yet supported.
- There is a cost associated with egress network traffic across regions. However, the first 10 TB per month are free.
- For RAC DB Systems, do not use a subnet that overlaps with 192.168.16.16/28.
- For RAC DB Systems, you need a minimum of two OCPUs per node, so the minimum value for the cpu-core-count parameter is four.
Conclusion
Enabling cross-region Data Guard using the Cloud Console requires just a few clicks in the UI. If an automated approach is preferred, OCI CLI offers an easy-to-use interface to accomplish the task. For cross-region Data Guard, the required parameters might appear a bit tricky at first. Once you know the correct values to provide, it’s as easy as ABC.
Further Reading
- Documentation: Use Oracle Data Guard on a DB System
- Why and How to install OCI CLI in a Virtual Environment on Oracle Cloud
- Three Ways to Authenticate OCI CLI in Oracle Cloud
- How to measure Network Latency for Oracle Data Guard Replication
- How to create a Single Service Name in Data Guard Environments on Oracle Cloud
