
Oracle Data Pump enables you to move data and metadata from one database to another, providing an extensive range of benefits. The following tips and best practices are worth considering:
1. Run the Cloud Premigration Advisor Tool (CPAT)
First of all, check the database schemas you want to export from the source database using the Cloud Premigration Advisor Tool (CPAT) as described in Doc ID 2758371.1.
# download and unzip p32613591_112048_Generic.zip
./premigration.sh --connectstring jdbc:oracle:oci:@ --sysdba --targetcloud ATPS --pdbname PDB1 --schemas SAMPLEUSER --reportformat [json | text]
2. Character Set
If your source database uses a single-byte character set (the Autonomous Database Advisor will recognize this), then check the article “Why do I get ‘ORA-12899 value too large for column’ when migrating to Autonomous Database?”
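To check which character set the source database uses, a quick query against the standard data dictionary view is enough:
SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';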
3. SYSDATE and SYSTIMESTAMP
Oracle Autonomous Database servers are configured to the UTC time zone. As there is no access to the underlying operating system, this configuration cannot be changed. If your application uses the SYSDATE or SYSTIMESTAMP SQL functions and expects the time in the local time zone, you’ll need to use CURRENT_DATE and CURRENT_TIMESTAMP instead (unless your clients are in the UTC time zone).
To change this behavior, check the SYSDATE_AT_DBTIMEZONE session parameter.
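A minimal sketch of both options (the time zone name is just an example; CURRENT_DATE follows the session time zone, while SYSDATE follows the server, i.e. UTC, unless the session parameter above is changed):
SQL> ALTER SESSION SET TIME_ZONE = 'Europe/Berlin';
SQL> SELECT sysdate, current_date FROM dual;
-- alternatively, let SYSDATE/SYSTIMESTAMP follow the time zone set for the database
SQL> ALTER SESSION SET SYSDATE_AT_DBTIMEZONE = TRUE;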
4. Restricted and Removed Database Features
Check the list of restricted and removed features in Autonomous Database on Shared Infrastructure and whether you are using any of them, e.g. Java in DB or Oracle Multimedia.
5. Defaults in Autonomous Database
- Database Version: Autonomous Database is always provisioned using the newest database version, currently 19c. You cannot run Oracle Database on older versions.
- Tablespace Name: The default tablespace is named DATA. It cannot be changed, and you cannot add additional tablespaces, so your application should not rely on a specific tablespace name (see the remap example after this list).
- Service Name: Autonomous Database uses predefined service names (_low, _medium, _high, _tp, _tpurgent). No additional services can be created, so your application should not rely on a specific service name.
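On the import side, the tablespace default typically boils down to a single remap into DATA, as also used in the summary import parameter file at the end of this post:
remap_tablespace=%:DATA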
6. RAC Ready?
Autonomous Database always runs on Oracle RAC, which provides Database High Availability and Scalability. When you use more than 16 OCPUs, your database services might be open on more than one node.
- Connections: In case of a local failure or rolling patching, the database service will relocate from one node to another, and the question is: how will your application handle connections? To automatically reconnect and transparently recover in-flight transactions, use connection pools and (Transparent) Application Continuity.
- DBMS_PIPE: if the application is using DBMS_PIPE, you should use Advanced Queuing (AQ) instead.
- Sequences: Use non-ordered, cached sequences to generate primary keys, and consider the Scalable Sequences introduced in version 18c; a short sketch follows this list.
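For illustration, a cached, non-ordered, scalable sequence could be created like this (the sequence name is a placeholder; SCALE is the 18c scalable sequence clause):
SQL> CREATE SEQUENCE order_id_seq CACHE 10000 NOORDER SCALE;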
7. Use Schema Based Export
To reduce the amount of data being exported and transferred, export only the schemas you want to migrate instead of the full database, which would also include SYS objects and other content you don’t need.
schemas=schema1,schema2,...
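For example, a schema-level export might look like this (user, connect string, and schema name are placeholders; directory and dump file naming follow the recommendations further down):
expdp sampleuser@sourcedb schemas=sampleuser directory=data_pump_dir dumpfile=export_%U.dmp logfile=export.log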
8. Ensure Consistent Exports
Ensure that your export operation is performed with consistent data using a specific SCN or timestamp. This will increase the UNDO requirements for the duration of the export.
flashback_scn=... OR flashback_time=...
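To pin the export to the current SCN, you can query it right before starting the job; alternatively, FLASHBACK_TIME accepts SYSTIMESTAMP directly:
SQL> SELECT current_scn FROM v$database;
-- or simply, in the parameter file
flashback_time=systimestamp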
9. Exclude unnecessary Database Objects
Oracle recommends excluding certain object types when exporting the data. It is also recommended to exclude schema statistics during export and to gather fresh statistics after import (a sketch follows the parameters below).
exclude=db_link,cluster
-- for Autonomous Data Warehouse, additionally
exclude=index,indextype,materialized_view,materialized_view_log,materialized_zonemap
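One way to implement the statistics recommendation (the schema name is a placeholder; statistics are skipped at export time and gathered fresh in the target after the import):
exclude=statistics
-- after the import, in the Autonomous Database
SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS('SAMPLEUSER');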
10. Increase Export Performance
For faster exports, consider using large UNDO and TEMP tablespaces, increasing the size of pga_aggregate_target, and making sure to have current dictionary statistics.
SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS('SYSTEM');
For optimal Data Pump performance, it is recommended to set STREAMS_POOL_SIZE to the value returned by the following query:
select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='||(max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'
from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size', 'streams_pool_size');
11. Use Parallelism, Compression, and Encryption
If you are a lucky user of Enterprise Edition, use parallelism to accelerate the export and import processes, especially for larger databases. Export your schemas into multiple Data Pump files using the %U substitution variable. If you are even luckier and also have the Advanced Compression and Advanced Security options, consider compressing and encrypting your export files.
dumpfile=export_%U.dmp
parallel=<number_of_OCPUs>
compression=ALL
compression_algorithm=MEDIUM
encryption=ALL
encryption_pwd_prompt=yes
Oracle grants restricted-use licenses for Advanced Compression and Advanced Security when performing the migration via Zero Downtime Migration (ZDM) or OCI Database Migration Service (DMS).
Higher compression means using more CPU resources but getting a higher compression ratio, i.e., smaller export files to push through the network. It is recommended to use MEDIUM; use HIGH only if you have a special reason, as it is very CPU intensive.
For the best import performance, use the HIGH Autonomous Database service for your import connection and set the PARALLEL parameter to at least the number of OCPUs of your Autonomous Database.
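Connecting the import through the HIGH service could look like this (the ADMIN user and the yourdb_high TNS alias from your wallet are placeholders; the parameter file is covered in tip 14):
impdp admin@yourdb_high parfile=my_import_parfile.par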
12. Generate detailed Loggings and Metrics
Set LOGTIME to ALL so that messages displayed during export and import operations are prefixed with a timestamp. Setting METRICS to YES adds additional information about the job to the Data Pump log file.
logtime=all
metrics=yes
13. Estimate the Space Needed for your Export
Especially for large databases, make sure you have sufficient storage for your export files. Estimate the space needed first based on database blocks or statistics, without actually performing the export operation.
estimate=[BLOCKS | STATISTICS]
estimate_only=yes
14. Use Parameter File
Use a parameter file for export and import to avoid issues resulting from typing a long command line on the shell. Put all parameters in a text file, one per line, and execute expdp and impdp with the parfile parameter.
expdp parfile=my_export_parfile.par
impdp parfile=my_import_parfile.par
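For illustration, my_export_parfile.par could contain just a handful of lines (names are placeholders; the full recommended set is in the summary at the end of this post):
schemas=sampleuser
directory=data_pump_dir
dumpfile=export_%U.dmp
logfile=export.log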
15. Do NOT Start Export as SYSDBA
The documentation note says it best: do not start Data Pump Export as SYSDBA, except at the request of Oracle Technical Support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
16. Use Multipart Upload to Object Storage
Multipart uploads allow you to upload a large object (the export dump file) as individual parts in parallel, reducing the time you spend uploading to Object Storage. Oracle recommends using multipart upload for objects larger than 100 MB. The maximum size for an uploaded object is 10 TB. You can control the maximum dump file size in Data Pump using the FILESIZE parameter.
FILESIZE=integer[B | KB | MB | GB | TB]
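A minimal sketch (the FILESIZE value, bucket name, and file name are placeholders; the OCI CLI performs a multipart upload automatically for large files):
-- in the export parameter file, e.g.
filesize=5GB
# upload from the source host with the OCI CLI
oci os object put --bucket-name yourbucketname --file export_01.dmp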
17. Check the Integrity of Uploaded Files to Object Storage
To ensure that no bit was lost in transfer, compare checksums between your source system and Object Storage. On the source:
md5sum export.dmp
You’ll get something like “136bb5a59561ae09a693aa3a8facac3d”.
In the Autonomous Database, run (adjust the values for your region, namespace, and bucket name):
SQL> SELECT OBJECT_NAME, BYTES, CHECKSUM
FROM DBMS_CLOUD.LIST_OBJECTS('OBJ_STORE_CRED', 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/yournamespace/b/yourbucketname/');
The CHECKSUM column should display the same value as on-premises: “136bb5a59561ae09a693aa3a8facac3d”.
18. Use Automation Tools
Automate your migration process to reduce human error, for example with Zero Downtime Migration (ZDM) or the OCI Database Migration Service (DMS).
Summary
Finally, we end up with this data pump configuration for export:
parallel=<number_of_OCPUs>
compression=ALL
compression_algorithm=MEDIUM
encryption=ALL
encryption_pwd_prompt=yes
flashback_scn=... OR flashback_time=...
filesize=integer[B | KB | MB | GB | TB]
exclude=db_link,cluster
schemas=yourschemaname
dumpfile=export_%U.dmp
logfile=export.log
directory=data_pump_dir
logtime=all
metrics=yes
-- for Autonomous Data Warehouse, additionally
data_options=group_partition_table_data
exclude=index,indextype,materialized_view,materialized_view_log,materialized_zonemap
-- for estimation only
estimate=[BLOCKS | STATISTICS]
estimate_only=yes
For import:
parallel=<number_of_OCPUs>
encryption_pwd_prompt=yes
exclude=db_link,cluster
schemas=yourschemaname
dumpfile=https://objectstorage...export_%U.dmp
logfile=import.log
directory=data_pump_dir
credential=def_cred_name
remap_tablespace=%:DATA
transform=segment_attributes:n
transform=dwcs_cvt_iots:y
transform=constraint_use_default_index:y
logtime=all
metrics=yes
-- for Autonomous Data Warehouse, additionally
partition_options=merge
exclude=index,indextype,materialized_view,materialized_view_log,materialized_zonemap