Introduction

Extracting DDL is sometimes useful, for instance to create the same objects in another database without the data. Basically everything can be extracted from a running Oracle database.

The needs

My customer asked me to replicate a database without any data. The goal was to feed development environments running on Docker, so the footprint had to be minimal. The precise needs were:

  • All the metadata
  • Data from some tables may be needed (based on a provided list)
  • Imported users should be filtered (based on criteria and an exclude list)
  • Imported users will be created with a basic password (password = username)
  • Source and target databases are not on the same network (thus no direct communication between the two instances)
  • Logon triggers must be disabled on the target database
  • Audit rules must also be disabled

Additional criteria may be added later.

How to proceed?

An Oracle package is dedicated to metadata extraction: dbms_metadata, with its get_ddl function. While it's very convenient for a few objects, it does not do the job for a complete DDL extraction.
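For a single object, a quick sqlplus call is enough. Here is a minimal example, assuming a SCOTT.EMP table exists:

sqlplus -s / as sysdba <<EOF
 set long 100000
 set pages 0
 -- get_ddl takes the object type, the object name and the owner
 SELECT dbms_metadata.get_ddl('TABLE','EMP','SCOTT') FROM DUAL;
exit;
EOF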

For years now, datapump has also been able to do this extraction. Actually, it was already possible with the older exp/imp tools on 9i and earlier versions.

The export can be done as a normal expdp with metadata-only extraction. Then impdp is used with the sqlfile directive: a datapump import with this directive won't import anything, but it will parse the dumpfile and generate a SQL script from it.
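In short, the flow looks like this (a simplified sketch with made-up file and schema names; the real parameter files are built later in this post):

# 1) Export metadata only (no rows) from the source database
expdp "/ as sysdba" directory=DDL_TARGET_DIR dumpfile=ddl.dmp content=metadata_only schemas=APP

# 2) Generate a SQL script from the dump file (nothing gets imported)
impdp "/ as sysdba" directory=DDL_TARGET_DIR dumpfile=ddl.dmp sqlfile=ddl.sql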

The SQL script will then be parsed and several changes applied:

  • password change for all users (reset to username)
  • logon trigger disabling
  • audit rules disabling

Once done, the SQL script will be ready to send to the target server.

Another expdp will be done for selected tables: parameter tables, for example, and in any case only tables without any sensitive data. It takes a text file (the list of the tables to export) as input.

Before creating the metadata on the target database, the tablespaces must exist, but minimal sizes are enough. This is why a script is also used to generate the tablespace creation statements, each using a single datafile with a minimum size and autoextend enabled. The data volume will be low, so users should not be bothered by tablespaces running out of space.

Prerequisites

These are the prerequisites to use these scripts:

  • a 12c or later source database (it should also work with older versions)
  • a target database in the same or a higher version, configured for OMF (db_create_file_dest; see the example after this list)
  • a connection to the oracle user on both systems (or to a user in the dba system group)
  • at least 1GB of free space on the source and the target server
  • an NFS share between the 2 servers is recommended
  • a user list for exclusion, provided by the customer
  • a table list for inclusion, provided by the customer
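If OMF is not configured yet on the target database, setting a single parameter is enough. A minimal example, with a made-up path to adapt to your storage:

sqlplus -s / as sysdba <<EOF
 alter system set db_create_file_dest='/u01/oradata' scope=both;
exit;
EOF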

Here is an example of both lists:

cat ddl_tmp/excl_users.txt | more
ABELHATR
ACALENTI
ACTIVITY_R
ADELANUT
ALOMMET
AMERAN
AOLESG
APEREAN
APP_CON_MGR
...

cat ddl_tmp/incl_tables.txt
DERAN.TRAD_GV_CR
DERAN.TRAD_GV_PS
APPCN.PARAM_BASE
APPCN.PARAM_EXTENDED
OAPPLE.MCUST_INVC
...

Output files

The script will generate 3 files, prefixed with a step number identifying the execution sequence on the target database:

  • 01_${ORACLE_SID}_create_tablespace.sql: tablespace creation script using OMF
  • 02_${ORACLE_SID}_create_ddl.sql: main SQL script to create the DDL
  • 03_impdp_${ORACLE_SID}_tables.sh: import shell script for importing tables with data

Complete script explained

The first part of the script defines the variables: basically the source database SID, the working folder, and the file names:

# Set source database
export ORACLE_SID=MARCP01

# Set environment variables, main folder and file names
export DDL_TARGET_DIR=/home/oracle/ddl_tmp
export DDL_TARGET_DUMPFILE=ddl_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`.dmp
export DDL_TARGET_LOGFILE_EXP=ddl_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_LOGFILE_IMP=ddl_${ORACLE_SID}_imp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_TABLES_DUMPFILE=tables_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`_%U.dmp
export DDL_TARGET_TABLES_LOGFILE_EXP=tables_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_SCRIPT=ddl_${ORACLE_SID}_extracted_`date +"%Y%m%d_%H%M"`.sql
export DDL_TBS_SCRIPT=01_${ORACLE_SID}_create_tablespace.sql
export DDL_CREATE_SCRIPT=02_${ORACLE_SID}_create_ddl.sql
export DDL_IMPORT_TABLES_CMD=03_impdp_${ORACLE_SID}_tables.sh
export DDL_EXCLUDE_USER_LIST=excl_users.txt
export DDL_INCLUDE_TABLE_LIST=incl_tables.txt

The second part creates the target folder and deletes temporary files from a possible previous run:

# Create target directory and clean up the folder
# Directory should include a user list to exclude: $DDL_EXCLUDE_USER_LIST
#  => User list is basically 1 username per line
# Directory may include a table list to include: $DDL_INCLUDE_TABLE_LIST
#  => Table list is 1 table per line, prefixed with the username (owner)
mkdir $DDL_TARGET_DIR 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.par 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.par 2>/dev/null
rm $DDL_TARGET_DIR/0*.sql 2>/dev/null
rm $DDL_TARGET_DIR/0*.sh 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.dmp 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.dmp 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.log 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.log 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.sql 2>/dev/null

A parameter file will be used for the first expdp, so it must be created beforehand. All users will be included except the Oracle-maintained ones. Excluding the other unneeded users will be done later:

# Create parameter file for metadata export
# No need to parallelize as DDL extraction runs on a single thread
. oraenv <<< $ORACLE_SID
sqlplus -s / as sysdba <<EOF
 create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/ddl_extract.par
 SELECT 'dumpfile=$DDL_TARGET_DUMPFILE' FROM DUAL; 
 SELECT 'logfile=$DDL_TARGET_LOGFILE_EXP' FROM DUAL;
 SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
 SELECT 'content=metadata_only' FROM DUAL;
 SELECT 'cluster=N' FROM DUAL;
 SELECT 'exclude=fga_policy' FROM DUAL;
 SELECT 'exclude=AUDIT_OBJ' FROM DUAL;
 SELECT 'exclude=DB_LINK' FROM DUAL;
 SELECT 'schemas='||username FROM DBA_USERS WHERE oracle_maintained='N' ORDER BY username;
 spool off;
exit;
EOF

Now let's remove the users listed in the txt file from this parameter file:

# Exclude users' list from parameter file
# (match the whole schemas= line to avoid dropping users whose name merely contains an excluded name)
cp $DDL_TARGET_DIR/ddl_extract.par $DDL_TARGET_DIR/par1.tmp
for a in `cat $DDL_TARGET_DIR/$DDL_EXCLUDE_USER_LIST`; do grep -v "^schemas=$a$" $DDL_TARGET_DIR/par1.tmp > $DDL_TARGET_DIR/par2.tmp; mv $DDL_TARGET_DIR/par2.tmp $DDL_TARGET_DIR/par1.tmp; done
mv $DDL_TARGET_DIR/par1.tmp $DDL_TARGET_DIR/ddl_extract.par
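At this point, the parameter file should look something like this (values come from my example run and will differ on your database):

cat $DDL_TARGET_DIR/ddl_extract.par
# dumpfile=ddl_MARCP01_20220318_1351.dmp
# logfile=ddl_MARCP01_exp_20220318_1351.log
# directory=DDL_TARGET_DIR
# content=metadata_only
# cluster=N
# exclude=fga_policy
# exclude=AUDIT_OBJ
# exclude=DB_LINK
# schemas=APPCN
# schemas=DERAN
# ...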

A parameter file is also needed for the tables expdp. The tables will be exported to a separate set of dump files:

# Create parameter file for tables to include
# Tables will be consistent at the same SCN
# Export is done with parallel degree 4
sqlplus -s / as sysdba <<EOF
 create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/tables_extract.par
 SELECT 'dumpfile=$DDL_TARGET_TABLES_DUMPFILE' FROM DUAL; 
 SELECT 'logfile=$DDL_TARGET_TABLES_LOGFILE_EXP' FROM DUAL;
 SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
 SELECT 'parallel=4' FROM DUAL;
 SELECT 'cluster=N' FROM DUAL;
 SELECT 'flashback_scn='||current_scn FROM V\$DATABASE; -- \$ escaped so the shell does not expand it in the heredoc
 spool off;
exit;
EOF

Now let's append all the tables listed in the related txt file to this parameter file:

# Include tables' list to parameter file
for a in `cat $DDL_TARGET_DIR/$DDL_INCLUDE_TABLE_LIST`; do echo "tables="$a >> $DDL_TARGET_DIR/tables_extract.par; done
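The resulting parameter file combines the fixed options with one tables= line per entry. It should look like this (the SCN value is made up):

cat $DDL_TARGET_DIR/tables_extract.par
# dumpfile=tables_MARCP01_20220318_1351_%U.dmp
# logfile=tables_MARCP01_exp_20220318_1351.log
# directory=DDL_TARGET_DIR
# parallel=4
# cluster=N
# flashback_scn=187318421
# tables=DERAN.TRAD_GV_CR
# tables=DERAN.TRAD_GV_PS
# ...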

Now the metadata export can start:

# Export metadata to a dump file
expdp "/ as sysdba" parfile=$DDL_TARGET_DIR/ddl_extract.par

# Output example
# ...
# Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:
#   /home/oracle/ddl_tmp/ddl_MARCP01_20220318_1351.dmp
# Job "SYS"."SYS_EXPORT_SCHEMA_02" successfully completed at Fri Mar 18 14:21:47 2022 elapsed 0 00:24:13

And the tables can also be exported now:

# Export included tables in another set of dump files
expdp "/ as sysdba" parfile=$DDL_TARGET_DIR/tables_extract.par

# Output example
# ...
# Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
# Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Mar 18 14:22:08 2022 elapsed 0 00:00:14

A script is needed for tablespace creation; let's create it:

# Create tablespace script for tablespace creation on target database (10MB with autoextend)
sqlplus -s / as sysdba <<EOF
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/$DDL_TBS_SCRIPT
  SELECT 'create tablespace '||tablespace_name||' datafile size 10M autoextend on;' FROM dba_data_files WHERE tablespace_name NOT IN ('SYSTEM','SYSAUX') and tablespace_name NOT LIKE 'UNDOTBS%' group by tablespace_name order by tablespace_name;
  spool off;
exit;
EOF
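The spooled script simply contains one statement per application tablespace. Thanks to OMF on the target, no datafile name or path is needed. For instance (tablespace names are made up):

cat $DDL_TARGET_DIR/$DDL_TBS_SCRIPT
# create tablespace APP_DATA datafile size 10M autoextend on;
# create tablespace APP_INDEX datafile size 10M autoextend on;
# ...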

Another parameter file is needed for the datapump import that will create the SQL file:

# Create parameter file for metadata import as an SQL file
echo "dumpfile=$DDL_TARGET_DUMPFILE" > $DDL_TARGET_DIR/ddl_generate.par
echo "logfile=$DDL_TARGET_LOGFILE_IMP" >> $DDL_TARGET_DIR/ddl_generate.par
echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/ddl_generate.par
echo "sqlfile=$DDL_TARGET_SCRIPT" >> $DDL_TARGET_DIR/ddl_generate.par

Now let’s start the impdp task to extract DDL from the metadata dumpfile:

# Generate SQL script from previous dump (with impdp - it will not import anything)
impdp "/ as sysdba" parfile=$DDL_TARGET_DIR/ddl_generate.par

# Output example
# ...
# Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Fri Mar 18 14:34:48 2022 elapsed 0 00:08:37

Once the SQL script with all the DDL has been created, it's time to change the users' passwords, lock some specific users, and disable the logon triggers. You will probably need different changes in your own context:

# Define a standard password for all internal users and generate the DDL script (not the definitive one yet)
cat $DDL_TARGET_DIR/$DDL_TARGET_SCRIPT | awk -F ' ' '{if ($1 == "CREATE" && $2 == "USER" && $6 == "VALUES")  print $1" "$2" "$3" "$4" "$5" "$3; else print $0}' > $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
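To make this awk one-liner clearer, here is what a typical line looks like before and after the substitution (username and hashed value are made up):

# Generated by impdp in the sqlfile:
#   CREATE USER "DERAN" IDENTIFIED BY VALUES 'S:8A2D...'
# After the substitution (password = username):
#   CREATE USER "DERAN" IDENTIFIED BY "DERAN"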

# Lock *_MANAGER users (lock is added at the end of DDL script)
cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
cat $DDL_TARGET_DIR/ddl.tmp | grep 'CREATE USER "' | grep '_MANAGER"' | awk -F ' ' '{print "ALTER USER "$3" ACCOUNT LOCK;"}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
rm $DDL_TARGET_DIR/ddl.tmp

# Disable logon triggers (ALTER TRIGGER ... DISABLE statements are appended at the end of the DDL script)
cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
cat $DDL_TARGET_DIR/ddl.tmp | awk -F ' ' '{if ($1 == "CREATE" && $2 == "EDITIONABLE" && $3 == "TRIGGER") {trig=1; trigname=$4} else if (trig == 1 && tolower($1) == "after" && tolower($2) == "logon") {trig=0; print "ALTER TRIGGER "trigname" DISABLE;"}}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
rm $DDL_TARGET_DIR/ddl.tmp
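A short illustration of the matching logic (trigger name is made up): the awk state machine spots the trigger header, remembers its name, and appends a disable statement only when a following line reveals a logon trigger:

# Matched in the DDL script (spread over two lines):
#   CREATE EDITIONABLE TRIGGER "APPCN"."TRG_LOGON"
#   after logon on database
# Appended at the end of the DDL script:
#   ALTER TRIGGER "APPCN"."TRG_LOGON" DISABLE;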

I'm still on the source server, but nothing prevents me from generating the parameter file and command file for the final impdp:

# Create parameter file for tables import (will be needed on target server)
echo "dumpfile=$DDL_TARGET_TABLES_DUMPFILE" > $DDL_TARGET_DIR/tables_import.par
echo "logfile=tables_import.log" >> $DDL_TARGET_DIR/tables_import.par
echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/tables_import.par

# Script for importing tables on the target database (on the target server)
# Single quotes keep $DDL_TARGET_DIR unexpanded: the same variable must be defined on the target server
echo 'impdp "/ as sysdba" parfile=$DDL_TARGET_DIR/tables_import.par' > $DDL_TARGET_DIR/$DDL_IMPORT_TABLES_CMD

The last operation done on the source server is displaying the files generated by the script:

# Display files to transport to target server
ls -lrth $DDL_TARGET_DIR/0*.* $DDL_TARGET_DIR/tables*.dmp $DDL_TARGET_DIR/tables_import.par | sort
# Output example
# -rw-r----- 1 oracle asmadmin  20K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
# -rw-r----- 1 oracle asmadmin 472K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
# -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
# -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
# -rw-rw-r-- 1 oracle oinstall  29K Mar 18 14:25 /home/oracle/ddl_tmp/01_MARCP01_create_tablespace.sql
# -rw-rw-r-- 1 oracle oinstall  63M Mar 18 14:35 /home/oracle/ddl_tmp/02_MARCP01_create_ddl.sql
# -rw-rw-r-- 1 oracle oinstall   64 Mar 18 14:35 /home/oracle/ddl_tmp/03_impdp_MARCP01_tables.sh
# -rw-rw-r-- 1 oracle oinstall   98 Mar 18 14:35 /home/oracle/ddl_tmp/tables_import.par

Conclusion

This is not high-level Oracle database stuff, but you can achieve nice automation simply by using shell commands and dynamic SQL scripting. It does not require any extra tool, and in this example, it brought my customer exactly what he needed.