
Introduction
Oracle Autonomous Database is a fully managed service: simply specify the CPU count and the amount of storage you need, and you are ready to go. Scaling CPU and storage up and down is an online operation without any service interruption, and you can additionally enable CPU Auto Scaling. Traditional storage management operations like creating and extending tablespaces and data files are a thing of the past. You just specify the amount of storage needed, load your data, and run your application.
The Autonomous Database provides a Service Console where you can monitor real-time and historical information about the utilization of the service, including used and allocated storage.
As your database grows over time, you might want to know which applications and users are consuming all that storage. It might be historical data that can be moved to more cost-effective external storage, data in non-production environments that is no longer needed, or, as we will see, just some dump files that you can happily delete to free up space for your application data.
In this blog post, we will see where to check your used and allocated storage in Autonomous Database, which SQL statements to use, and how to free up unneeded storage.
If you are using the Autonomous Database Always Free service, which provides 20 GB of free storage, this topic will be of particular interest to you.
The Environment
- Oracle Autonomous Database on shared Exadata infrastructure on Oracle Cloud using 1TB of storage.
Used and Allocated Storage on the Service Console
From your Autonomous Database details page, click Service Console. A new tab opens with an overview of your used and allocated storage, along with further information about CPU utilization and SQL statements.

In this example, we have around 616 GB of used and 673 GB of allocated storage. But who is consuming this?
Used Storage
As a DBA, the first thing you’d look at is the DBA_SEGMENTS view, checking the storage used per user:
SQL> select owner, sum(bytes)/1024/1024/1024 as size_gb from dba_segments group by owner order by size_gb desc;
OWNER SIZE_GB
------------------------------ ----------
SYS 613.379089
SSB 162.418884
ADMIN 1.52227783
APEX_210100 .717346191
AUDSYS .077087402
MDSYS .071044922
SH .005615234
...
SQL> select sum(bytes)/1024/1024/1024 as size_gb from dba_segments;
SIZE_GB
----------
779.241882
While looking at these results, three things might surprise you:
- Why am I allocating 162 GB for SSB which I don’t need at all?
- Why is the total storage over 779 GB, much more than what is shown in the Service Console?
- Why is the SYS schema so huge in comparison to application data?
So let’s check and see what is going on.
Sample Schemas
When I came across this the first time, the first thing I tried was to drop the SSB (Star Schema Benchmark) and SH (Sales History) schemas:
SQL> drop user ssb cascade;
drop user ssb cascade
*
ERROR at line 1:
ORA-01031: insufficient privileges
Oops! Why is that? Let's check the tablespace where these schemas reside:
SQL> select distinct tablespace_name from dba_segments where owner in ('SSB', 'SH');
TABLESPACE_NAME
------------------------------
SAMPLESCHEMA
SQL> select distinct owner from dba_segments where tablespace_name = 'SAMPLESCHEMA';
OWNER
--------------------------------------------------------------------------------
SSB
SH
SQL> select status from dba_tablespaces where tablespace_name = 'SAMPLESCHEMA';
STATUS
---------
READ ONLY
So the tablespace is read-only and the schemas cannot be dropped.
The Autonomous Database provides the SSB and SH schemas with their well-known large sample data sets so you can test the performance of your service.
The good news: the storage of the sample SSB and SH schemas does NOT count towards your database storage. This also explains why the total storage shown in DBA_SEGMENTS exceeds the values on the Service Console.
Storage on the File System
As Autonomous Database is a fully managed service, you don't have (and don't need) access to the underlying file system. At least, no direct access. However, you can create DIRECTORY objects, and the Autonomous Database service creates the corresponding file system directories for you.
Data Pump export files, when not exported directly to Object Storage, are stored on that file system. Additionally, you can copy files from Object Storage into such a directory using the DBMS_CLOUD.GET_OBJECT PL/SQL procedure.
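For example, copying a dump file from an Object Storage bucket into DATA_PUMP_DIR could look like this; the credential name and the object URI are placeholders that you would replace with your own values:
SQL> BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/mynamespace/b/mybucket/o/exp_01.dmp',
directory_name => 'DATA_PUMP_DIR');
END;
/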
This file storage is allocated from the file system of the Exadata infrastructure where the Autonomous Database is running. Hence, it counts towards your database storage. So, how do you check the used storage on that file system without having direct access to it?
First, check what user-defined directories exist in the Autonomous Database:
SQL> select directory_name from dba_directories where origin_con_id != 1;
DIRECTORY_NAME
--------------------------------------------------------------------------------
DATA_PUMP_DIR
SQL_TCB_DIR
Then, use the DBMS_CLOUD.LIST_FILES table function to list the files and calculate the storage used in each directory:
SQL> select object_name, bytes/1024/1024/1024 as size_gb from DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR') order by size_gb desc;
OBJECT_NAME SIZE_GB
------------------------------ ----------
exp_01.dmp 9.765625
exp_02.dmp 9.765625
....
SQL> with cte as
(
select sum(bytes)/1024/1024/1024 as size_gb from DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR')
union all
select sum(bytes)/1024/1024/1024 as size_gb from DBMS_CLOUD.LIST_FILES('SQL_TCB_DIR')
)
select sum(size_gb) as files_size_gb from cte;
FILES_SIZE_GB
-------------
590.942191
Actually, there is an easier way to calculate that storage, as it is already accounted for in the DBFS_DATA tablespace:
SQL> select sum(bytes)/1024/1024/1024 as DBFS_DATA_used_gb from dba_segments where tablespace_name = 'DBFS_DATA';
DBFS_DATA_USED_GB
-----------------
607.316528
Note that this is slightly higher than the total size of the files themselves.
Now, let's check which schema these segments belong to:
SQL> select distinct owner from dba_segments where tablespace_name = 'DBFS_DATA';
OWNER
---------
SYS
AHA! This is why we have a huge SYS schema!
With this information in mind, we can now easily calculate the used and allocated storage.
Storage Used
To calculate the used storage, we run our SQL statement against the DBA_SEGMENTS view again, this time excluding the SSB and SH schemas, or, even simpler, excluding the SAMPLESCHEMA tablespace:
SQL> select sum(bytes)/1024/1024/1024 as USED_TOTAL_GB from dba_segments where owner not in ('SSB', 'SH');
USED_TOTAL_GB
-------------
616.817261
SQL> select sum(bytes)/1024/1024/1024 as USED_TOTAL_GB from dba_segments where tablespace_name != 'SAMPLESCHEMA';
USED_TOTAL_GB
-------------
616.817261
Storage Allocated
This time we query the DBA_DATA_FILES and DBA_TEMP_FILES views, again excluding the SAMPLESCHEMA tablespace:
SQL> with cte as
(
select sum(bytes)/1024/1024/1024 as size_gb from dba_data_files where tablespace_name != 'SAMPLESCHEMA'
union all
select sum(bytes)/1024/1024/1024 as size_gb from dba_temp_files
)
select sum(size_gb) as allocated_total_gb from cte;
ALLOCATED_TOTAL_GB
------------------
672.555458
Even though it's very unlikely that the data and temp file totals are exactly the same, I'm using UNION ALL instead of UNION to make sure both rows are returned.
Free Up Storage
Extents that are allocated but not used in a data file are automatically reused when new data is stored in the database. Storage management is handled automatically by the database, so you usually don't have to care about these tasks anymore, especially with an Autonomous Database.
However, if you have just inserted a huge amount of data for a test in a non-production environment and want to free up the space yourself, you can still do so by resizing the data files:
SQL> alter database datafile '+DATA/EIG1POD/C0217826A260391CE0531914000AB802/DATAFILE/data.3918.1084374555' resize 50G;
Database altered.
SQL> alter database tempfile '+DATA/EIG1POD/C0217826A260391CE0531914000AB802/TEMPFILE/temp.599.1084374563' resize 2G;
Database altered.
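The file names used in the resize commands above can be retrieved from the DBA_DATA_FILES and DBA_TEMP_FILES views, for example (assuming your data tablespace is named DATA, as in this environment):
SQL> select file_name, bytes/1024/1024/1024 as size_gb from dba_data_files where tablespace_name = 'DATA';
SQL> select file_name, bytes/1024/1024/1024 as size_gb from dba_temp_files;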
What definitely makes sense is to delete files from the file system when they are no longer needed. The DBMS_CLOUD PL/SQL package provides the DELETE_FILE procedure for this purpose:
SQL> BEGIN
DBMS_CLOUD.DELETE_FILE('DATA_PUMP_DIR', 'exp_01.dmp');
END;
/
PL/SQL procedure successfully completed.
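If you want to clean out a whole directory, a small PL/SQL loop over DBMS_CLOUD.LIST_FILES can delete all of its files in one go. This is just a sketch; only run something like it if you are sure none of the files are needed anymore:
SQL> BEGIN
FOR f IN (SELECT object_name FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR')) LOOP
DBMS_CLOUD.DELETE_FILE('DATA_PUMP_DIR', f.object_name);
END LOOP;
END;
/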
Conclusion
Oracle Autonomous Database allocates and manages database storage automatically. You only need to specify the amount of storage you need, and you are ready to develop your application. The amount of allocated and used storage can be monitored in the Service Console.
Additionally, if you need more insight into who is consuming this storage, you can query the corresponding database views the same way as in any other Oracle Database.
Essentially, you have to keep two things in mind:
- The sample SSB and SH schemas, stored in the SAMPLESCHEMA tablespace, do NOT count towards your database storage.
- The file system storage, accounted for in the DBFS_DATA tablespace, DOES count towards your database storage.
Don't worry too much about storage allocation in the DATA tablespace, but do free up the storage on the file system when it's no longer needed.
Further Reading
- Use your Web Browser to import Data Pump dump files into Oracle Autonomous Database
- How to Import your Data into Oracle Autonomous Database – APEX Service
- What should I consider when migrating to Autonomous Database using Data Pump?
- Save Storage Cost with Hybrid Partitioned Tables in Oracle Autonomous Database
- Am I Autonomous or non-Autonomous, that is the Question