Part 5/10: Migration Methods using Data Pump

In this part, we will introduce the following migration methods using the Oracle Data Pump export and import utilities:

  • Data Pump Conventional Export and Import.
  • Data Pump Full Transportable.
  • Data Pump Transportable Tablespace.
  • Data Pump Convert Full Transportable.

Data Pump

Oracle Data Pump technology enables you to move data and metadata from one database to another. Data Pump provides a wide range of benefits:

  • Available since Oracle Database 10g Release 1 (10.1). Newer Data Pump clients are compatible with earlier releases; the 19c Data Pump client, for example, can work with database versions all the way back to 10g.
  • Supports migrating databases across platforms with different endian formats, to a different database architecture (non-CDB to PDB), or between different database software releases (for example, 10g to 19c).
  • Available for both Standard and Enterprise Edition databases.
  • Supports a full range of data types.
  • Flexibility to use five different modes of data unloading: full database, specific tables, schemas, tablespaces, or transportable tablespaces. See FULL, TABLES, SCHEMAS, TABLESPACES, and TRANSPORT_TABLESPACES.
  • Supports encryption of data before writing it to the dump file set. It warns you when encrypted data is exported or imported as unencrypted data. See ENCRYPTION.
  • Supports parallelism to accelerate the export and import of a large amount of data. See PARALLEL.
  • Supports compression of output files to save space on the disk. See COMPRESSION.
  • Support for export and import operations over the network without the need for intermediate storage. See NETWORK_LINK.
  • Support for remapping tables, schemas, and tablespaces during an import operation. See REMAP_TABLE, REMAP_SCHEMA, and REMAP_TABLESPACE.
  • Support for filtering the objects that are exported and imported, based upon objects and object types. See INCLUDE and EXCLUDE.
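
To make these options concrete, here is a minimal export sketch combining several of the parameters above. The HR schema and the credentials are placeholders, and %U generates one dump file per parallel worker (note that COMPRESSION=ALL requires the Advanced Compression option):

  expdp system/password schemas=HR \
    directory=DATA_PUMP_DIR dumpfile=hr_%U.dmp logfile=hr_exp.log \
    parallel=4 compression=all exclude=statistics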

After introducing Data Guard and GoldenGate in part 4 as the first two migration methods in our journey, we continue now with further migration methods using Data Pump:

3. Data Pump Conventional Export and Import

The migration steps consist of:

  1. Invoke Data Pump export on the on-premises database to generate the dump files that contain the data and metadata.
  2. Securely copy the dump files to the OCI database system.
  3. On the OCI database system, invoke Data Pump import to load the exported data into the target database service, then validate the imported data.
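
A minimal sketch of these three steps, assuming a hypothetical HR schema, a DATA_PUMP_DIR directory object on both systems, and placeholder hosts and credentials:

  # Step 1: export on the on-premises database
  expdp system/password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log

  # Step 2: securely copy the dump file to the OCI database system
  scp hr.dmp opc@oci-db-host:/u01/app/oracle/dumps/

  # Step 3: import on the OCI database system, then validate the data
  impdp system/password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log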

The downtime needed depends on the amount and type of data, the degree of parallelism, and the network bandwidth.

If you are migrating to Oracle Autonomous Database, you copy the dump files to OCI Object Storage instead of the local database machine. You can use the web-based cloud console or the OCI command-line interface (oci) to upload the dump files to Object Storage.
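
For example, with the OCI CLI already configured, the upload and the subsequent import might look like the sketch below. The bucket, namespace, and region are placeholders, and the credential is assumed to have been created beforehand with DBMS_CLOUD.CREATE_CREDENTIAL:

  # upload the dump file to a (hypothetical) bucket named migration-dumps
  oci os object put --bucket-name migration-dumps --file hr.dmp

  # import into Autonomous Database directly from Object Storage
  impdp admin/password@adb_high credential=DEF_CRED_NAME \
    dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/mynamespace/b/migration-dumps/o/hr.dmp \
    directory=DATA_PUMP_DIR logfile=hr_imp.log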

It is highly recommended to run the Oracle Autonomous Database Schema Advisor (Doc ID 2462677.1) before the migration to analyze the on-premises database schemas and report any concerns that may arise during the migration.

It is also possible to avoid any kind of intermediate storage and copy the data directly over the network into the target database by using Data Pump's network mode (the NETWORK_LINK parameter). This applies to both DBCS and Autonomous Database.
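
As a sketch, assuming a database link named SOURCE_DB_LINK has already been created on the target and points back to the source (all other names are placeholders):

  # run on the target; rows flow over the database link, no dump files are written
  impdp admin/password@target_service schemas=HR \
    network_link=SOURCE_DB_LINK \
    directory=DATA_PUMP_DIR logfile=hr_net_imp.log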

For detailed steps and code examples, have a look at:

4. Data Pump Full Transportable

You can use the Data Pump full transportable method to achieve a faster migration for larger databases. In this case, the tablespaces need to be placed in Read-Only mode, and source and target must have the same endian format and compatible character sets.

The migration steps consist of:

  1. Set the on-premises tablespaces to Read-Only mode.
  2. Invoke Data Pump export with the FULL=Y and TRANSPORTABLE=ALWAYS parameters.
  3. Copy the generated dump files from the previous step along with the original database data files to the target system in the cloud.
  4. Invoke Data Pump import with the TRANSPORT_DATAFILES parameter.
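
A sketch of steps 2 and 4 with placeholder credentials and data file paths; the VERSION=12 parameter is only needed when exporting from an 11.2.0.3 or 11.2.0.4 source:

  # step 2: full transportable export on the source
  expdp system/password full=y transportable=always version=12 \
    directory=DATA_PUMP_DIR dumpfile=fulltts.dmp logfile=fulltts_exp.log

  # step 4: import on the target, pointing at the copied data files
  impdp system/password directory=DATA_PUMP_DIR dumpfile=fulltts.dmp \
    transport_datafiles='/u02/oradata/users01.dbf','/u02/oradata/sales01.dbf' \
    logfile=fulltts_imp.log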

The Transportable method is generally much faster than a conventional export and import of the same data because the data files containing all of the actual data are simply copied to the destination system over the network. You use Data Pump to transfer only the metadata of the tablespace objects to the new database.

For detailed steps and code examples, have a look at:

5. Data Pump Transportable Tablespace

Data Pump also provides the flexibility to transport only specific tablespaces instead of the full database.

The migration steps consist of:

  1. Set the on-premises tablespaces to Read-Only mode.
  2. Invoke Data Pump export with the TRANSPORT_TABLESPACES parameter.
  3. Copy the generated dump files from the previous step along with the original database data files to the target system in the cloud.
  4. Create the schemas on the target database.
  5. Invoke Data Pump import with the TRANSPORT_DATAFILES parameter.
  6. Set the tablespaces on the target database in Read-Write mode.
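
A sketch of steps 2, 5, and 6 with placeholder tablespace names, credentials, and data file paths:

  # step 2: export the metadata of the chosen tablespaces on the source
  expdp system/password transport_tablespaces=SALES_TS,SALES_IDX_TS \
    directory=DATA_PUMP_DIR dumpfile=tts.dmp logfile=tts_exp.log

  # step 5: plug the copied data files into the target
  impdp system/password directory=DATA_PUMP_DIR dumpfile=tts.dmp \
    transport_datafiles='/u02/oradata/sales01.dbf','/u02/oradata/sales_idx01.dbf' \
    logfile=tts_imp.log

  # step 6: reopen each tablespace Read-Write on the target
  echo "ALTER TABLESPACE sales_ts READ WRITE;" | sqlplus -s system/password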

For a tablespace to be transportable, it must be self-contained. You can check this using the DBMS_TTS.TRANSPORT_SET_CHECK procedure; any violations are then reported in the TRANSPORT_SET_VIOLATIONS view.
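
A sketch of the check, run in SQL*Plus on the source database; the tablespace names are placeholders, and passing TRUE also verifies referential integrity constraints:

  EXEC DBMS_TTS.TRANSPORT_SET_CHECK('SALES_TS,SALES_IDX_TS', TRUE);
  -- any rows returned here describe objects that break self-containment
  SELECT * FROM TRANSPORT_SET_VIOLATIONS;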

For detailed steps and code examples, have a look at:

6. Data Pump Convert Full Transportable

When you perform a transportable operation and the source and target platforms are of different endianness, you must convert the data being transported to the target platform format. You can use the GET_FILE or PUT_FILE procedures of the DBMS_FILE_TRANSFER package to convert the data between platforms during the data file transfer.

Execute the same steps as for Full Transportable or Transportable Tablespace, adding one step in between to convert the data files.
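
As a sketch of that conversion step using GET_FILE, run in SQL*Plus on the target. SRC_DIR and DEST_DIR are placeholder directory objects, SOURCE_DB_LINK is a placeholder database link to the source, and the endian conversion happens implicitly during the transfer:

  BEGIN
    DBMS_FILE_TRANSFER.GET_FILE(
      source_directory_object      => 'SRC_DIR',
      source_file_name             => 'sales01.dbf',
      source_database              => 'SOURCE_DB_LINK',
      destination_directory_object => 'DEST_DIR',
      destination_file_name        => 'sales01.dbf');  -- repeat per data file
  END;
  /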

Pros and cons for using DBMS_FILE_TRANSFER:

  • Pro: unlike RMAN CONVERT, you don’t need any additional disk space for the conversion.
  • Con: the tablespaces must be read-only during the entire copy operation, which might be an issue for huge databases. In this case consider using cross-platform transportable tablespaces with incremental backups.

Conclusion

Data Pump is very simple to use. It is designed to offer maximum flexibility of slicing and dicing data movement between different databases using encryption, parallelism, and compression.

The only thing to consider when using Data Pump is that it requires application downtime. The time needed to migrate the data depends mainly on the amount of data and your network bandwidth. You can speed up the process by using parallelism. If you can afford that downtime, go for Data Pump. It is very simple and straightforward.

As a logical migration method, it can be used to migrate your databases to both Oracle Database Cloud Service (DBCS) and Oracle Autonomous Database (ADB).

Further Reading

Would you like to get notified when the next post is published?