How to Import your Data into Oracle Autonomous Database – APEX Service

Introduction

As you probably already know, Oracle APEX enables you to build applications with minimal coding. APEX is already available in all Oracle Autonomous Database workload types: transaction processing, data warehouse, and JSON. You get a fully managed database, so you can focus even more on your application development.

Additionally, Oracle provides an Autonomous Database specifically for APEX applications for a much lower cost.

The cost for the APEX service is $0.1613 per vCPU/hour instead of $0.67205 for transaction processing, a reduction of roughly 76% in CPU price.

However, the lower cost comes with some limitations. The most important one to highlight here is the missing support for Oracle Database clients: SQL*Net connectivity is not available. Consequently, you cannot connect to the database with the Data Pump impdp client either. So how do you get your existing database schemas into the newly created database?

In this blog post, we are going to use the DBMS_DATAPUMP PL/SQL API to import data from Data Pump dump files into Oracle Autonomous Database – APEX Service.

Additionally, you’ll see the use cases when and how to use the following DBMS_CLOUD functions and procedures:

  • LIST_OBJECTS
  • LIST_FILES
  • PUT_OBJECT
  • DELETE_OBJECT
  • DELETE_FILE

The Environment

  • Oracle Autonomous Database on Shared Infrastructure – APEX Service.
  • Data Pump dump file (sampleuser.dmp) already uploaded to Object Storage, which contains the schema (SAMPLEUSER) that needs to be imported into the Autonomous Database – APEX Service.
  • Object Storage namespace: oci_core_emea
  • Object Storage bucket name: dumps
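Note that DBMS_CLOUD accepts object URIs in both the native Object Storage format and the Swift-style format; both appear in the examples in this post:

-- Native URI format
https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o/<object_name>

-- Swift-style URI format
https://swiftobjectstorage.<region>.oraclecloud.com/v1/<namespace>/<bucket>/<object_name>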

Import Data using the DBMS_DATAPUMP PL/SQL API

Step 1: Connect to your Autonomous Database using Database Actions in the Cloud Console

From your Autonomous Database details page, select the Tools tab, then Open Database Actions. Provide your username (ADMIN) and password. Finally, click on SQL to open what was initially called SQL Developer Web.

Step 2: Allow your Autonomous Database to Access Object Storage

Follow step 3 in this blog to create the credentials that enable the Autonomous Database to access the dump file in Object Storage (a minimal sketch of the credential call is shown after the query below). To verify the access, list the files in your Object Storage bucket using the LIST_OBJECTS function of the DBMS_CLOUD PL/SQL package:

select object_name
from DBMS_CLOUD.LIST_OBJECTS('OBJECT_STORE_CRED', 'https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/oci_core_emea/dumps/');
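
In case you have not created the credential yet, the DBMS_CLOUD.CREATE_CREDENTIAL call looks roughly like this minimal sketch, where the username and the auth token are placeholders you need to replace with your own values:

BEGIN
    DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'OBJECT_STORE_CRED',
        username        => 'your_cloud_user',  -- placeholder: your OCI user name
        password        => 'your_auth_token'); -- placeholder: an OCI auth token, not your account password
END;
/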

Step 3: Import your Data into the Autonomous Database – APEX Service

Use the DBMS_DATAPUMP PL/SQL API to import your schemas into the Autonomous Database:

DECLARE
    -- replace with your values
    exported_schema VARCHAR2(64)  := 'SAMPLEUSER';
    import_schema   VARCHAR2(64)  := 'SAMPLEUSER'; -- set a different name in case you want to remap the schema
    data_pump_dir   VARCHAR2(64)  := 'DATA_PUMP_DIR';
    dump_file_name  VARCHAR2(256) := 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/oci_core_emea/b/dumps/o/sampleuser.dmp';
    credential_name VARCHAR2(64)  := 'OBJECT_STORE_CRED';
    parallel        NUMBER        := 4;

    job_handle      NUMBER;
    job_name        VARCHAR2(64);
    job_status      VARCHAR2(128);
    output_message  VARCHAR2(1024);
BEGIN
    job_name   := dbms_scheduler.generate_job_name('IMPORT_');
    -- open a schema-mode import job
    job_handle := dbms_datapump.open(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => job_name);
    -- read the dump file directly from Object Storage; for URI-based dump files,
    -- the directory parameter carries the credential name
    dbms_datapump.add_file(handle => job_handle, filename => dump_file_name, directory => credential_name, filetype => dbms_datapump.ku$_file_type_uridump_file);
    -- write the import log file to the database directory DATA_PUMP_DIR
    dbms_datapump.add_file(handle => job_handle, filename => import_schema || '_import.log', directory => data_pump_dir, filetype => dbms_datapump.ku$_file_type_log_file);
    -- remap the schema only if the target name differs from the exported one
    IF exported_schema <> import_schema THEN
        dbms_datapump.metadata_remap(job_handle, 'REMAP_SCHEMA', exported_schema, import_schema);
    END IF;
    dbms_datapump.metadata_filter(handle => job_handle, name => 'SCHEMA_EXPR', value => 'IN(''' || exported_schema || ''')');
    dbms_datapump.set_parallel(handle => job_handle, degree => parallel);
    dbms_datapump.start_job(handle => job_handle, skip_current => 0, abort_step => 0);
    -- block until the job finishes and report its final state
    dbms_datapump.wait_for_job(handle => job_handle, job_state => job_status);
    output_message := 'Data Pump Import Execution: ''' || job_status || '''';
    dbms_output.put_line(output_message);
END;
/

After successful execution, you’ll get an output similar to the following:

Data Pump Import Execution: 'COMPLETED'

PL/SQL procedure successfully completed.

Elapsed: 00:00:11.870
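
Note that WAIT_FOR_JOB blocks the session until the import finishes. For long-running imports, you can instead start the job, omit the WAIT_FOR_JOB call, and monitor the progress from another session by querying the Data Pump job views, for example:

select owner_name, job_name, operation, job_mode, state
from dba_datapump_jobs;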

Step 4: Check the Data Pump Import Log File

List the files in the DATA_PUMP_DIR directory using the LIST_FILES function:

select * 
from DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR') 
order by last_modified desc;

Now you will see that the SAMPLEUSER_import.log file was created. Since we don't have access to the file system, we first need to copy the log file to Object Storage using the PUT_OBJECT procedure and then download it from there:

BEGIN
    DBMS_CLOUD.PUT_OBJECT(
        credential_name => 'OBJECT_STORE_CRED',
        object_uri      => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/oci_core_emea/b/dumps/o/SAMPLEUSER_import.log',
        directory_name  => 'DATA_PUMP_DIR',
        file_name       => 'SAMPLEUSER_import.log');
END;
/

PL/SQL procedure successfully completed.

Let's have a look at the log file:

Master table "ADMIN"."IMPORT_180" successfully loaded/unloaded
Starting "ADMIN"."IMPORT_180":  
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SAMPLEUSER"."SAMPLETABLE"                  5.062 KB       1 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "ADMIN"."IMPORT_180" successfully completed at Mon Aug 23 14:51:24 2021 elapsed 0 00:00:05

Step 5: Clean Up

Files in the DATA_PUMP_DIR directory count towards your Autonomous Database storage usage. Even though the log file is usually not that big, clean up your files after the import is completed. Delete the log file from the DATA_PUMP_DIR directory using the DELETE_FILE procedure:

BEGIN
	DBMS_CLOUD.DELETE_FILE('DATA_PUMP_DIR', 'SAMPLEUSER_import.log');
END;
/

PL/SQL procedure successfully completed.

You can also delete the dump file from the Object Storage bucket using the DELETE_OBJECT procedure:

BEGIN
    DBMS_CLOUD.DELETE_OBJECT(
        credential_name => 'OBJECT_STORE_CRED',
        object_uri      => 'https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com/v1/oci_core_emea/dumps/sampleuser.dmp');
END;
/

PL/SQL procedure successfully completed.

Conclusion

Oracle Autonomous Database – APEX Service provides a fully managed database for your low-code application development at a much lower cost than the standard Autonomous Database services. As the APEX service does not provide SQL*Net connectivity for database clients, you cannot import your existing database schemas the usual way with the Data Pump impdp client. Instead, you connect via SQL in Database Actions in the Cloud Console and use the DBMS_DATAPUMP PL/SQL API.

Additionally, we have seen when and how to use the LIST_OBJECTS, LIST_FILES, PUT_OBJECT, DELETE_OBJECT, and DELETE_FILE functions and procedures of the DBMS_CLOUD package.

By the way, Autonomous APEX is included in the Always Free resources. So just try it out for free, forever!
