In this part, we will discuss the database characteristics and factors that affect choosing a migration method.
Some of the migration methods apply only if specific characteristics of the source and destination database match or are compatible.
Logical vs. Physical Migrations
A logical migration means extracting the database objects and data from the source database and creating them again on the target database, e.g. using Data Pump. From the end user's point of view, the data are the same and SELECT statements return the same results, but the data will very probably be organized differently in data blocks and data files on disk than on the source database.
With a physical migration method, the data blocks and data files on disk are copied as they are to the target system, e.g. using transportable tablespaces or PDB unplug/plug. This method can be much faster, especially for larger databases.
As a physical migration moves the data files or their backups, it requires access to the operating system of the target database machine. It may also require access to the root container in a multitenant architecture.
Depending on your target database system in the cloud, you might be able to choose logical and/or physical migration methods:
- DBCS as target: logical or physical, as access to the underlying database infrastructure and root container databases is possible.
- Autonomous Database as target: logical only, as there is no access to the operating system of the database machine or to the root container database.
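The rule of thumb above can be sketched in a few lines of Python. The function and service names are illustrative assumptions for this post, not an Oracle API:

```python
# Hypothetical helper encoding the decision rule described above.
# Service names ('dbcs', 'autonomous') are illustrative, not official identifiers.

def eligible_methods(target_service: str, os_access: bool) -> set:
    """Return which migration styles are possible for a given cloud target.

    target_service: 'dbcs' or 'autonomous' (illustrative values)
    os_access: whether the OS / root container of the target is accessible
    """
    methods = {"logical"}  # a logical migration (e.g. Data Pump) always applies
    # Physical methods copy data files, so they need OS and root container
    # access, which Autonomous Database does not provide.
    if target_service == "dbcs" and os_access:
        methods.add("physical")
    return methods

print(eligible_methods("dbcs", os_access=True))         # logical and physical
print(eligible_methods("autonomous", os_access=False))  # logical only
```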
Database Version (11g, 12c, 18c, 19c)
The database version of your source and target database will impact the migration method to choose. If your target database has a higher release version than the source, you either have to choose a logical migration method or, in the case of a physical migration, upgrade your database before or after the migration.
Some migration methods, e.g. Data Guard, require the same release version on the source and target database. If you want to use Data Guard, e.g. to achieve (near) zero downtime migration, you might need to upgrade your source database first to match the desired release version of the target database. For database upgrades, have a look at the AutoUpgrade tool (Doc ID 2485457.1).
Migrating to the same release version but to a higher patch level, e.g. from 19.6 to 19.7, requires running datapatch on the target database after the migration.
It is recommended to migrate your database to 19c, as it is the long-term support release. Another advantage of keeping your database up to date is the variety of migration methods available for higher release versions: the higher the release version, the more migration methods you can choose from. Migration methods using multitenant features like PDB unplug/plug or PDB cloning are available from release 12.1 onwards.
Database Edition (Standard, Enterprise)
Some migration methods are only applicable for Enterprise Edition databases, e.g. Data Guard.
It is possible to migrate from Standard Edition to Enterprise Edition (and vice versa) using unplug/plug in multitenant environments.
Database Options (Advanced Compression, Encryption, …)
Database options used on your source database must be a subset of the options on the target. This affects the database service and edition to choose in the cloud as discussed in part 2.
The good news about encryption: all databases in the Oracle cloud, including Standard Edition ones, use Transparent Data Encryption (TDE) to encrypt data at rest.
Database Architecture (non-CDB, Multitenant)
Keep in mind that Oracle databases from 12.1 onwards in the Oracle cloud use the multitenant architecture. Migrating to a non-CDB is still possible using the Exadata Cloud service. However, the non-CDB architecture is desupported in 20c, so it might be a good idea to seize the opportunity and migrate your non-CDB to the multitenant architecture. Multitenant migration methods like unplug/plug and cloning support migrating a non-CDB to a PDB.
Database Platform (Little vs. Big Endian Format)
All Oracle databases in the Oracle cloud run on Oracle Linux, which uses the little-endian format. Therefore, if the on-premises database runs on a big-endian platform, you have to choose a physical migration method that allows endian conversion, or go for a logical migration, because, as discussed, a logical migration transports the data, not the physical data blocks.
Be aware that you cannot transport an encrypted tablespace to a platform with different endianness.
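Endianness simply refers to the byte order in which multi-byte values are stored on a platform. This generic Python snippet (unrelated to Oracle itself) shows the same 4-byte integer in both formats:

```python
import struct

value = 0x01020304  # a 4-byte integer

little = struct.pack("<I", value)  # little-endian: least significant byte first
big = struct.pack(">I", value)     # big-endian: most significant byte first

print(little.hex())  # 04030201
print(big.hex())     # 01020304
```

Because the same value is laid out differently on disk, physically copied data blocks from a big-endian platform cannot be read as-is on a little-endian one without conversion.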
Character Set (UTF8, ISO8859P1, …)
If the character set of your target database is not compatible with that of the source database, you have to choose a logical migration method.
Data Types (LONG, Multimedia, …)
Depending on the database cloud service you choose, there might be restrictions regarding supported data types, such as LONG or multimedia types in an Autonomous Database.
Source Database Availability (Online vs. Offline Migration)
Online migration means the source database is still available and operational while the data is being moved to the target system. If you are performing the final migration using a method that requires downtime for your application anyway, it matters little whether the database is offline during that time, because the application is stopped.
However, an online migration method allows you to test your migration (several times) without impacting the availability of your source database. And testing is definitely something you should do beforehand.
Is a virtual machine or a PDB on shared infrastructure compliant with your regulatory requirements? Or do you need dedicated hardware? These questions directly affect the kind of database service you have to choose, which in turn affects the migration method.
You also have to consider the performance requirements of your database. Depending on the number of CPUs needed and whether online scaling should be possible, you have to choose the appropriate database cloud service that fulfills your requirements.
And last but not least: downtime! One of the most discussed topics when it comes to database migrations! Downtime depends on many factors like:
- Database size: obviously, the larger the database, the longer the time needed to move the data from one place to another.
- Network Bandwidth: depends on your network connectivity, whether over the public internet, VPN, or FastConnect. For very large amounts of data, it is possible to use the Data Transfer Appliance when moving the data over the network is not feasible.
- Parallelism: using parallelism speeds up data movement in most cases.
- Compression: compressing and/or decompressing the data during the migration adds processing time, even though it may reduce the amount of data transferred over the network.
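To get a feeling for how these factors interact, here is a back-of-the-envelope estimate of raw transfer time. All numbers are illustrative assumptions, not measurements, and real migrations add overhead for export, import, and validation:

```python
# Rough transfer-time estimate from database size, bandwidth, and compression.
# Illustrative sketch only; real-world throughput is usually lower than line rate.

def transfer_hours(db_size_gb: float, bandwidth_mbps: float,
                   compression_ratio: float = 1.0) -> float:
    """Hours needed to move db_size_gb over a link of bandwidth_mbps.

    compression_ratio > 1 means less data on the wire (e.g. 2.0 halves it).
    """
    size_megabits = db_size_gb * 1024 * 8 / compression_ratio
    return size_megabits / bandwidth_mbps / 3600

# 5 TB database over a 1 Gbit/s link:
print(round(transfer_hours(5 * 1024, 1000), 1))       # about 11.7 hours
# Same database with 2x compression on the wire:
print(round(transfer_hours(5 * 1024, 1000, 2.0), 1))  # about 5.8 hours
```

Even this crude arithmetic shows why database size and bandwidth dominate the downtime discussion, and why incremental approaches that copy most of the data while the source stays online are so attractive.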
For this blog series, we will put the downtime in three categories:
- Long: time needed for moving the data “piece by piece”, like the time needed for Data Pump conventional export and import, or full backup and restore.
- Short: relates to the time needed to copy the data files over the network or to create and apply the last incremental backup.
- Zero, or the time needed for a switchover: whether there is truly no downtime at all depends on the application, i.e. whether it implements Application Continuity.
Choosing the right migration method depends on many characteristics of your source and target databases. Choosing the appropriate database cloud service likewise depends on many factors and requirements. Before choosing a migration method, take a step back, validate all your needs and requirements, and be aware of the characteristics of your source and target database systems.