

PDB Snapshot using dbms_dnfs.clonedb_renamefile

By Franck Pachot

In the previous post I explained how to use ‘snapshot copy’ PDB creation for thin provisioning and quick restore of the initial state in continuous integration tests. If you don’t have the multitenant option, you need a remote CDB to do it. Here is a way to do the same with only one CDB, in single-tenant.

The idea here is to use dbms_dnfs.clonedb_renamefile, which can do snapshots using sparse files.


The CLONEDB initialization parameter must be set to TRUE to do thin clones on a regular filesystem:

18:31:09 SQL> show parameter clonedb
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
clonedb                              boolean     TRUE
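If CLONEDB is not set yet, here is a minimal sketch of how to enable it. It is a static parameter, so it has to go to the spfile and an instance restart is required:

```sql
-- CLONEDB is static: set it in the spfile, then bounce the instance
alter system set clonedb=true scope=spfile;
shutdown immediate
startup
```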


I have a PDB in the state I want to snapshot. I close it and unplug it in order to be able to plug it in again later. My idea here is to be able to do that in single-tenant, so with only one PDB in the CDB.

18:31:09 SQL> alter pluggable database PDB close;
Pluggable database altered.

At that point my first idea was to unplug the PDB, then plug it back in and clone it. The goal was to have nothing to copy: the clone creates the sparse files without touching the original files, so they could be used later to plug the PDB in again. However, the plug-in operation modifies the file headers, which means that the files cannot be plugged in again with the same unplug .xml file. Currently, I don’t know how to plug in a read-only set of datafiles.


So, finally I do the clone before unplugging.

18:32:04 SQL> exec dbms_dnfs.clonedb_renamefile('/u02/app/oracle/oradata/CDB/PDB/system01.dbf','/u02/app/oracle/oradata/CDB/PDB/system01.dbf.cow');
PL/SQL procedure successfully completed.
18:32:04 SQL> exec dbms_dnfs.clonedb_renamefile('/u02/app/oracle/oradata/CDB/PDB/sysaux01.dbf','/u02/app/oracle/oradata/CDB/PDB/sysaux01.dbf.cow');
PL/SQL procedure successfully completed.
18:32:04 SQL> exec dbms_dnfs.clonedb_renamefile('/u02/app/oracle/oradata/CDB/PDB/SAMPLE_SCHEMA_users01.dbf','/u02/app/oracle/oradata/CDB/PDB/SAMPLE_SCHEMA_users01.dbf.cow');
PL/SQL procedure successfully completed.
18:32:04 SQL> exec dbms_dnfs.clonedb_renamefile('/u02/app/oracle/oradata/CDB/PDB/example01.dbf','/u02/app/oracle/oradata/CDB/PDB/example01.dbf.cow');
PL/SQL procedure successfully completed.
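Rather than typing one exec per datafile, the calls can be generated from the directory listing. This is a hypothetical helper, not from the original session; the function name and paths are assumptions:

```shell
# Hypothetical helper: generate a dbms_dnfs.clonedb_renamefile call for every
# datafile in the given PDB directory, ready to paste or spool into SQL*Plus.
gen_clonedb_calls() {
  local dir="$1"
  for f in "$dir"/*.dbf; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    echo "exec dbms_dnfs.clonedb_renamefile('$f','$f.cow');"
  done
}

# Example: gen_clonedb_calls /u02/app/oracle/oradata/CDB/PDB > clone.sql
```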

This keeps the original datafiles read-only and creates the sparse files (.cow) where only the modified blocks are written.
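As a standalone illustration of what a sparse file is (not Oracle-specific), a sparse file reports its full logical size but allocates disk blocks only where data has actually been written, which is the copy-on-write idea behind the .cow files:

```shell
# A sparse file: full apparent size, almost nothing allocated on disk
truncate -s 100M /tmp/sparse_demo
stat -c '%s' /tmp/sparse_demo              # logical size: 104857600 bytes
du -k /tmp/sparse_demo                     # allocated size: close to 0 KB
printf 'x' | dd of=/tmp/sparse_demo conv=notrunc status=none
du -k /tmp/sparse_demo                     # now at least one block is allocated
rm /tmp/sparse_demo
```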


Then the unplug creates the .xml file that references only those .cow files.

18:33:20 SQL> alter pluggable database PDB unplug into '/tmp/pdbcow.xml';
Pluggable database altered.

Then, as I want to be able to plug it in several times, I have to archive the files so that I can restore them. But here it’s only the small .cow files, as the original datafiles are not modified anymore.

18:33:32 SQL> host tar -Pcvf /tmp/pdbcow.tar /u02/app/oracle/oradata/CDB/PDB/*.cow

and you can check the size of this tar file:

18:33:36 SQL> host du -k /tmp/pdbcow.tar
2179880 /tmp/pdbcow.tar
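Note that plain tar does not preserve sparseness: it reads the holes as zeros and stores the .cow files at their full logical size. If GNU tar is available, the -S (--sparse) option stores only the allocated blocks, which should make the archive considerably smaller, and the same tar -Pxvf restore works unchanged. A sketch on a synthetic sparse file standing in for a .cow file:

```shell
# Compare plain tar vs GNU tar -S on a 50 MB sparse file
truncate -s 50M /tmp/demo.cow
tar -Pcf  /tmp/plain.tar  /tmp/demo.cow    # archives all 50 MB of zeros
tar -PScf /tmp/sparse.tar /tmp/demo.cow    # archives only allocated blocks
du -k /tmp/plain.tar /tmp/sparse.tar       # the -S archive is a few KB
rm /tmp/demo.cow /tmp/plain.tar /tmp/sparse.tar
```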

Now I have everything needed to plug the PDB in several times in order to restore the original state. I can drop it:

18:33:37 SQL> drop pluggable database PDB;
Pluggable database dropped.


Now I can do the following as many times as I want.

18:33:56 SQL> create pluggable database PDB using '/tmp/pdbcow.xml' nocopy;
Pluggable database created.

then use it:

18:34:03 SQL> alter session set container=PDB;
Session altered.
18:34:03 SQL> startup
Pluggable Database opened.
18:34:04 SQL> connect hr/hr@//localhost/PDB

Here I can run my test.
And when I’ve finished I can drop it:

18:34:04 SQL> connect / as sysdba
18:34:05 SQL> alter session set container=CDB$ROOT;
Session altered.
18:34:05 SQL> alter pluggable database PDB close;
Pluggable database altered.
18:34:06 SQL> drop pluggable database PDB including datafiles;
Pluggable database dropped.

I dropped it ‘including datafiles’ because those (the .cow files) cannot be used anymore. However, the original files remain and have not changed.
So the only thing I have to do to start again is restore those small .cow files:

18:34:06 SQL> host tar -Pxvf /tmp/pdbcow.tar
18:34:14 SQL>

which is very fast, and it’s ready to start again with:

create pluggable database PDB using '/tmp/pdbcow.xml' nocopy;

as above. So you can restore the same saved state in a few seconds, without having to restore the full datafiles.
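Putting it together, the whole reset cycle is only the commands already shown above, so it can be scripted as one SQL*Plus sequence and run before each test campaign (same paths as above):

```sql
-- Reset the PDB to the saved state: drop, restore the .cow files, re-plug
alter session set container=CDB$ROOT;
alter pluggable database PDB close;
drop pluggable database PDB including datafiles;
host tar -Pxvf /tmp/pdbcow.tar
create pluggable database PDB using '/tmp/pdbcow.xml' nocopy;
alter pluggable database PDB open;
```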


I show this only in the context of continuous integration testing, as an alternative to rebuilding, restoring, or flashing back the database. I would never do that in production because I don’t know exactly how it works. I mean, in order to work, you need to keep the original files. They are referenced in the controlfile so that the processes can read them when a block is not found in the sparse file, but I don’t know the lifecycle of that. The drop command ignores them. If you do an ‘alter database backup controlfile to trace’, you will not see them. And you don’t see them in v$datafile. So there is a high risk of losing them accidentally. And when you drop all the PDBs that reference them, they are still there. So how do you clean that up?
Anyway, for a database built only for automatic tests, it’s a good alternative to snapshot copy if you don’t have the multitenant option and don’t want to keep a remote CDB to hold the original datafiles.
