Infrastructure at your Service

David Hueber

ODA 12.1.X.X.X – add a multiplexed control file under ACFS

Since version 12, ODA stores databases on ACFS volumes instead of directly in ASM. This slightly changes the way the files are managed and administered. This article presents how to multiplex your control files on ACFS.

At first database creation, ODA sets up three ACFS volumes to store all non-CDB databases (CDBs are managed on dedicated volumes):

/dev/asm/datastore-386   62G  5.2G   57G   9% /u01/app/oracle/oradata/datastore
/dev/asm/datastore-286   1.7T  871G  836G  52% /u02/app/oracle/oradata/datastore
/dev/asm/datastore-360   2.2T  5.6G  2.2T   1% /u01/app/oracle/fast_recovery_area/datastore

These volumes reside on the following ASM disk groups:

  • /u01 datastore on +REDO
  • /u02 datastore on +DATA
  • Fast Recovery Area on +RECO

All files stored on these volumes are OMF-based and therefore have Oracle-generated file names.
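As an aside, OMF names follow a predictable shape; for control files it is o1_mf_&lt;unique string&gt;_.ctl, where the unique part is generated by Oracle. A tiny shell check (the character class below is an approximation, not Oracle's exact specification) illustrates this, using the control file name shown later in this article:

```shell
# Rough shape of an OMF control file name: o1_mf_<unique string>_.ctl
# (the character class is an approximation, not Oracle's exact spec)
name="o1_mf_bj2r0f4n_.ctl"   # name taken from this article's example
echo "$name" | grep -Eq '^o1_mf_[a-z0-9]+_\.ctl$' \
    && echo "looks like an OMF control file name"
```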

By default, any new database has the initialisation parameters db_create_file_dest and db_create_online_log_dest_1 configured as follows:

SQL> show parameter create

NAME                            TYPE        VALUE
------------------------------- ----------- ------------------------------------------------------
create_bitmap_area_size         integer     8388608
db_create_file_dest             string      /u02/app/oracle/oradata/datastore/.ACFS/snaps/DBPROD1
db_create_online_log_dest_1     string      /u01/app/oracle/oradata/datastore/DBPROD1
db_create_online_log_dest_2     string

As a consequence, all databases are created with a single control file, located in the same ACFS volume as the redo logs.

SQL> show parameter control_files

NAME            TYPE     VALUE
--------------- -------- ------------------------------
control_files   string   /u01/app/oracle/oradata/datastore/DBPROD1/DBPROD1/controlfile/o1_mf_bj2r0f4n_.ctl

This volume is based on a triple-mirrored ASM disk group, which already safeguards us from most possible issues, except of course user errors 🙄 or logical corruption. If you still want to multiplex your control file, you will have to go through the following steps manually:

  1. Generate a trace file from your control file
  2. Edit the trace file to keep only the NORESETLOGS version
  3. Configure the db_create_online_log_dest_2 initialisation parameter
  4. Reset the control_files parameter
  5. Shutdown your database
  6. Run the control file creation script
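Steps 3 to 6 can also be assembled into a single SQL driver script. Below is a minimal, hypothetical shell sketch (the file name recreate_ctl_DBPROD1.sql is an assumption, not part of the original procedure) that writes those commands into one file; you would then run it manually in sqlplus as sysdba. The paths are the ones used throughout this article; adapt them to your environment.

```shell
#!/bin/sh
# Hypothetical helper: bundle steps 3-6 into one SQL driver script.
# Paths and names below are this article's examples; adapt as needed.
CTL_SCRIPT=/tmp/ctl_DBPROD1.sql            # edited NORESETLOGS trace script
DRIVER=/tmp/recreate_ctl_DBPROD1.sql       # generated driver script (assumed name)

cat > "$DRIVER" <<EOF
-- Step 3: second OMF destination for the new control file
alter system set db_create_online_log_dest_2='/u01/app/oracle/oradata/datastore/DBPROD1' scope=both;
-- Step 4: let Oracle regenerate control_files from the OMF destinations
alter system reset control_files scope=spfile;
-- Step 5: stop the database
shutdown immediate
-- Step 6: re-create the (now multiplexed) control files
@$CTL_SCRIPT
EOF

echo "Driver script written to $DRIVER"
# Then run it yourself: sqlplus / as sysdba @$DRIVER
```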


Prepare the control file creation script

First of all, generate the trace file:

SQL> alter database backup controlfile to trace as '/tmp/ctl_DBPROD1.sql';

Database altered.

Then edit the generated file to keep only the NORESETLOGS version. It should look something like this:

CREATE CONTROLFILE REUSE DATABASE "DBPROD1" NORESETLOGS ...
LOGFILE
  GROUP 1 (
    ...
  ) SIZE 500M BLOCKSIZE 512,
  ...
DATAFILE
  ...
;
...
EXECUTE SYS.DBMS_BACKUP_RESTORE.CFILESETSNAPSHOTNAME('/u01/app/oracle/fast_recovery_area/datastore/DBPROD1/snapcf_DBPROD1.f');

EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('SNAPSHOT CONTROLFILE NAME','TO ''/u01/app/oracle/fast_recovery_area/datastore/DBPROD1/snapcf_DBPROD1.f''');
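The manual editing can also be scripted. In a typical 12c trace file the two variants are introduced by the comment markers "Set #1. NORESETLOGS case" and "Set #2. RESETLOGS case" (check these strings against your own trace file before relying on them). A small sketch that keeps only the first set, shown here against a miniature stand-in for the real trace file:

```shell
# For demonstration, create a miniature stand-in for the real trace file
# containing both sections (the real one is much longer).
cat > /tmp/ctl_DBPROD1.sql <<'EOF'
--     Set #1. NORESETLOGS case
CREATE CONTROLFILE REUSE DATABASE "DBPROD1" NORESETLOGS ...
--     Set #2. RESETLOGS case
CREATE CONTROLFILE REUSE DATABASE "DBPROD1" RESETLOGS ...
EOF

# Keep only the NORESETLOGS section (Set #1). Marker strings are those of
# a typical 12c controlfile trace; verify them in your own file.
awk '/Set #1\. NORESETLOGS/{keep=1} /Set #2\. RESETLOGS/{keep=0} keep' \
    /tmp/ctl_DBPROD1.sql > /tmp/ctl_DBPROD1_noresetlogs.sql
```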




Prepare the database

Configure the parameter db_create_online_log_dest_2 so that Oracle generates a second control file at re-creation:

SQL> alter system set db_create_online_log_dest_2='/u01/app/oracle/oradata/datastore/DBPROD1' scope=both;

System altered.

Then remove the parameter control_files from the spfile. If you skip this step, Oracle will ignore the db_create_online_log_dest_X parameters and re-create only one control file.


SQL> alter system reset control_files scope=spfile;

System altered.

Re-create the control files

At that point the system is ready to re-create the control file and multiplex it. Simply shut down the database and run the SQL script created earlier:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> @ctl_DBPROD1.sql
ORACLE instance started.

Total System Global Area 1.7180E+10 bytes
Fixed Size                  3731384 bytes
Variable Size            2550136904 bytes
Database Buffers         1.4563E+10 bytes
Redo Buffers               63377408 bytes

Control file created.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required

System altered.

Database altered.

The database is now OPEN READ WRITE again and has two control files:

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------

Enjoy! 😀



    HI David,

I am new to ODA and just started working on it.

    I need your help in restoring the backup of one database from ODA1 to ODA2 which together form a 2 node cluster but having a database in 1-Node RAC mode.

    Source Database “PDORCL” is running on ODA1 in 1-Node RAC.

The destination database is the one I need to create from the backup of the source database, but with a different name, “ORCL”.

    ODA1 to ODA2 are member of the 2-Node RAC.

So, as these 2 VMs/servers are in a cluster and share the ASM: if I want to restore the backup of “PDORCL” onto the same ASM with a different directory structure, I need to create a different structure, which I have created and added to the pfile. But while starting the database in nomount state to restore the backup, it’s not identifying the path set for “db_create_file_dest”, which is set to “/xxx/.ACFS/xxx/ “.

    could you please help me out with this.

    • David Hueber says:

      Hi Prafulla,

      thanks for following our blog.

Doing a duplicate is not really a straightforward process, but I will try to wrap up the steps you should go through.
So if I got you correctly, you want to duplicate the PDORCL database running on node 0 of your ODA to an ORCL database on node 1 of the same ODA.

The first thing is to check what exactly you want to do and then choose the most appropriate method (OAKCLI, snapshot DB, duplicate…). If we assume that you want to copy a prod DB to a new prod or test one, then using RMAN duplicate is probably the best.

      To do so the process is:

1. Run oakcli create dbstorage – take care of the option -cdb in case your database is a pluggable one
2. Prepare your spfile for ORCL and take care of the parameters db_create_file_dest and db_create_online_log_dest_1/2, AND remove the parameter control_files from the spfile
3. Run your RMAN duplicate

      Hope it helps you


  • Praveen says:

Nice article, it helped me. Thanks.


  • Thomas R. says:

    Hmm…. why not just copy the controlfile (shutdown, copy, edit param, startup)?

  • Clemens Bleile says:

    In my view the best solution is to let rman take care of it by restoring from the current controlfile:

SQL> show parameter control_files

NAME            TYPE     VALUE
--------------- -------- ------------------------------
control_files   string   /u01/app/oracle/oradata/datastore/DBPROD1/DBPROD1/controlfile/o1_mf_bj2r0f4n_.ctl

SQL> alter system set db_create_online_log_dest_2='/u01/app/oracle/oradata/datastore/DBPROD1' scope=both;
    SQL> alter system reset control_files scope=spfile;
    SQL> shutdown immediate
    SQL> startup nomount

    rman target=/
RMAN> restore controlfile from '/u01/app/oracle/oradata/datastore/DBPROD1/DBPROD1/controlfile/o1_mf_bj2r0f4n_.ctl';

    channel ORA_DISK_1: copied control file copy
    output file name=/u01/app/oracle/oradata/datastore/DBPROD1/DBPROD1/controlfile/o1_mf_f1yzxk0h_.ctl
    output file name=/u01/app/oracle/oradata/datastore/DBPROD1/DBPROD1/controlfile/o1_mf_f1yzxk68_.ctl

    SQL> alter database mount;
    SQL> alter database open;
    SQL> select name from v$controlfile;


    I.e. I do not restore from a backup, but I can just specify the previous controlfile to restore from. This would work for a standby controlfile (i.e. for a standby DB) as well.


  • Jonathan L says:

I also was looking to multiplex the control files on our ODAs and found this helpful article. One thing to note, though… according to the MOS document “Multiplexing Redolog and Control File on ODA” (Doc ID 2086289.1), Oracle recommends that they are *not* multiplexed on ODAs. However, it looks like their rationale is that typically when you multiplex, you want the file to live in physically separate locations. And since the main location where they write the single control file is on the SSD drives, they feel that writing another copy of the control file to a physically separate location would put it on the HDDs, and thus cause performance issues by effectively slowing the control file I/O to HDD rates. I guess that makes sense.

    I like your comment on user error or logical corruptions still being a potential issue. In the MOS article above, apparently starting in, they added a lock feature which should prevent users from accidentally removing the file while the DB is open. As for logical errors, I’m curious if that’s really an issue, assuming the ODA is set to HIGH/triple redundancy? If there is some “cosmic bit flip” that happens, that flip wouldn’t propagate across all three writes, would it?

    Please note, I’m just playing Devil’s Advocate here. We DBAs tend to be a paranoid lot and fully embrace the “belt and suspenders” methodology. 🙂 So perhaps it’s still worthwhile to multiplex the control files on an ODA, it’s just that it would be best to have the second one on the same SSDs? So the triple redundancy is protection from disk failure, and the multiplexing, even though potentially on the same disks, helps prevent from logical corruption?

    Thanks for the article and thanks in advance for entertaining my ramblings. 🙂


David Hueber

Chief Executive Officer (CEO), Principal Consultant