How to Reduce Patching Time for Oracle Databases in the Cloud

Introduction

Recently, I got a question from a customer about why patching in the cloud takes so long.

First of all, we need to understand the exact situation: which database service is being used? What is the database version? Is it Multitenant? How many PDBs are present in the CDB? How many CPUs does the database use? How is the patching being done? How long does it take, in minutes? Is it reproducible?

So let’s look at the specific situation, run some tests, measure the time, investigate the root cause, and see what we can do to speed up patching. By patching, we are talking about applying Release Updates (RUs).

The Environment

  • Virtual Machine DB System with 4 OCPUs.
  • Enterprise Edition, Single Instance, Multitenant Database with 4 PDBs: 2 in Read/Write mode and 2 local refreshable clones in mounted state.
  • Patching from 19.10 to 19.11.

Situation

Patching is taking more than 60 minutes to complete. The customer is asking whether the Cloud Tooling offers an option to exclude the refreshable clones from patching in order to reduce the time needed, similar to providing the PDB names when running datapatch manually:

./datapatch -pdbs CDB\$ROOT,PDB1,PDB2   #skipping PDB$SEED, PDB3, and PDB4

This is an interesting question! And to be honest, I had never thought about it before. So, time for an investigation!

Tests

I provisioned my VM DB System as described above and applied the 19.11 patch using the Cloud Console.

To check the process, log in to your virtual machine, switch to the root user, and use the dbcli command line to list the jobs and show the job details.

sudo su -                         # switch to the root user
dbcli list-jobs                   # list all jobs with their IDs and statuses
dbcli describe-job -i <job_id>    # show the task breakdown of one job

Task Name                             Start Time                          End Time                            Status
------------------------------------- ----------------------------------- ----------------------------------- ----------
Pre-operations for DBHome patching    July 12, 2021 9:05:25 AM UTC        July 12, 2021 9:06:54 AM UTC        Success
DBHome patching                       July 12, 2021 9:06:54 AM UTC        July 12, 2021 9:50:26 AM UTC        Success
Post-operations for DBHome patching   July 12, 2021 9:50:26 AM UTC        July 12, 2021 9:56:28 AM UTC        Success

I also ran the same test for patching from 19.9 to 19.10, with 1 OCPU versus 4, and with 4 Read/Write PDBs versus 2 Read/Write plus 2 refreshable clones. The results were very similar in all cases.

Investigation

From the test results, we get an average of 65.5 minutes, and it is reproducible. That is far too long, considering that manual patching takes around 15-20 minutes.

Does the number of PDBs and of refreshable clones in MOUNTED state matter? To answer this, we have to know two things:

1. Datapatch skips PDBs in MOUNTED state

Having a look at the log files, you’ll see:

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Warning: PDB PDB1REF is in mode MOUNTED and will be skipped.
Warning: PDB PDB2REF is in mode MOUNTED and will be skipped.

Indeed, a refreshable PDB, which you can’t open in Read/Write mode to run datapatch on anyway, will get updated via redo apply once you run datapatch on the source PDB.
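
If you want to check up front which PDBs datapatch will patch and which it will skip, you can look at the open mode of each PDB yourself. A minimal sketch, run as SYSDBA in the CDB (v$pdbs is a standard view; the PDB names will match your own environment):

sqlplus -s / as sysdba <<'EOF'
-- PDBs reported as MOUNTED will be skipped by datapatch;
-- PDBs in READ WRITE mode will be patched.
select name, open_mode from v$pdbs order by name;
EOF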

2. Datapatch runs in parallel for multiple PDBs

CDB$ROOT is patched first. Once it’s done, all other PDBs (including PDB$SEED) are patched in parallel. Many parameters are considered when determining the degree of parallelism, but in practice it is almost always the cpu_count multiplied by 2.

For example, if we have a CDB with PDB$SEED and 31 user-created PDBs on 4 OCPUs (i.e. 8 vCPUs, cpu_count=8), then the degree of parallelism is 8*2=16. Hence, datapatch will:

  1. Patch CDB$ROOT.
  2. Patch PDB$SEED and PDB1 to PDB15 in parallel (16 PDBs at a time).
  3. Patch PDB16 to PDB31 in parallel (16 PDBs at a time).
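
You can check what to expect on your own system, as cpu_count is a regular initialization parameter. A minimal sketch:

sqlplus -s / as sysdba <<'EOF'
-- datapatch typically processes about cpu_count*2 PDBs at a time
-- once CDB$ROOT has been patched.
show parameter cpu_count
EOF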

Considering that, having a few PDBs more or less in our tests shouldn’t impact the time needed for datapatch to complete.

But the question remains: why is it taking 65+ minutes on average? That is really too long!

Patching using Oracle Provided Images

If you look carefully at the tests again, you’ll see that we were using Oracle Provided Images for patching.

And here is the thing: when using Oracle Provided Images, opatchauto is used for in-place patching in the background, and opatchauto seems to have issues with waiting too long for the database restart when multiple PDBs are present. I didn’t invest more time in this for two reasons: it might change in the near future, and it is not the recommended method anyway!

Patching using Custom Database Software Images (Recommended!)

It is recommended to use Custom Database Software Images for database creation and patching. Advantages:

  1. With Custom Images you have the option to add one-off patches to the image, so they are included in the software binaries, and you don’t have to apply them manually, saving a lot of time and effort.
  2. When enabling Data Guard, using a Custom Image ensures that exactly the same image is used to create the standby database.

And now we have a third reason:

3. When using Custom Images for patching, out-of-place patching is used instead of opatchauto, bringing the patching time down to around 25 minutes.

Even though this is a bit longer than manual patching, it is an acceptable time considering that the Cloud Tooling runs some pre-checks beforehand and syncs the operation with the cloud control plane.
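
If you script your environment, a custom image can also be created with the OCI CLI. Below is a minimal sketch using the database-software-image command group; the OCIDs and one-off patch numbers are placeholders, and the exact parameter names are worth double-checking against the CLI help:

# Create a custom database software image for 19.11 including one-off patches
oci db database-software-image create \
    --compartment-id ocid1.compartment.oc1..example \
    --display-name "DB_19.11_custom" \
    --database-version 19.0.0.0 \
    --patch-set 19.11.0.0 \
    --database-software-image-one-off-patches '["12345678", "87654321"]'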

For more details, check Using Custom Database Software Images to Create, Patch, and Upgrade your Cloud Databases.

Reduce Downtime to (near) Zero?

To make it short: use Oracle RAC, (Active) Data Guard, and Application Continuity.
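
With Data Guard, the usual pattern is standby-first patching: patch the standby, switch over, patch the former primary, and let Application Continuity hide the switchover from the application. A minimal sketch of the switchover step with the OCI CLI (all OCIDs and the password are placeholders):

# Make the already-patched standby database the new primary
oci db data-guard-association switchover \
    --database-id ocid1.database.oc1..primary_example \
    --data-guard-association-id ocid1.dgassociation.oc1..example \
    --database-admin-password 'ExamplePassword_123#'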

Patching on Exadata Cloud Service (ExaCS)

In addition to the fact that ExaCS runs Oracle RAC, so no database downtime occurs anyway, the recommended way to patch databases on ExaCS is to use the Cloud Tooling functionality “Move to Another Home”, which is out-of-place patching by design. This applies regardless of whether the database home was created using an Oracle Provided or a Custom Image, even though Custom Images are still recommended for the other reasons mentioned above.
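
With the OCI CLI, moving a database to an already-created, patched Database Home comes down to a single update call. A minimal sketch, assuming the database update operation accepts the target home (OCIDs are placeholders):

# Move the database into the patched Database Home (out-of-place patching)
oci db database update \
    --database-id ocid1.database.oc1..example \
    --db-home-id ocid1.dbhome.oc1..patched_example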

For more details, check Using OCI CLI and Custom Database Software Images to Create, Patch, and Upgrade your Exadata Cloud Databases.

Conclusion

Database patching on VM DB Systems using Oracle Provided Database Software Images takes 60+ minutes because opatchauto is used for in-place patching, and this apparently causes an issue. The recommended way, however, is to use Custom Database Software Images, which run out-of-place patching in the background, bringing the patching operation down to around 25 minutes.

For patching with (near) zero downtime, use Oracle RAC, (Active) Data Guard, and Application Continuity.

On Exadata Cloud Service, it is possible to create multiple database homes and use the Cloud Tooling “Move to Another Home” functionality for out-of-place patching, which is the recommended method. As ExaCS uses Oracle RAC by default, there is no database downtime associated with database patching.
