Infrastructure at your Service

Daniel Westermann

Performing an initial load with GoldenGate (2) – expdp/impdp

The last post in this series outlined how to do an initial load of the target database using the “file to replicat” method.

In this post I’ll show how to do the initial load by exporting a schema from the source database, importing it into the target and then starting the re-synchronization in GoldenGate.

The initial setup on the source and the target is almost the same. But first, let’s delete all objects of “HR” in the target database:

declare
  lv_statement varchar2(2000);
begin
  for r in ( select object_name, object_type from dba_objects where owner = 'HR' ) loop
    case r.object_type
      when 'TABLE'     then lv_statement := 'drop table hr.'||r.object_name||' cascade constraints';
      when 'VIEW'      then lv_statement := 'drop view hr.'||r.object_name;
      when 'SEQUENCE'  then lv_statement := 'drop sequence hr.'||r.object_name;
      when 'PROCEDURE' then lv_statement := 'drop procedure hr.'||r.object_name;
      else lv_statement := null;
    end case;
    if lv_statement is not null then
      begin
        execute immediate lv_statement;
      exception
        -- ignore errors from objects already dropped via cascade constraints
        when others then null;
      end;
    end if;
  end loop;
end;
/
select count(*) from dba_objects where owner='HR';

Exactly as in the last post, let’s set up the initial recording of transactions on the source and the target databases. Not much explanation here, as all the steps were already covered in the last post.

On the source:

GGSCI ( 1> dblogin useridalias DB1 domain admin
Successfully logged into database.

GGSCI ( as [email protected]) 2> add trandata HR.*
GGSCI ( as [email protected]) 2> edit params extrcdc1

The parameters are:

EXTRACT extrcdc1
USERIDALIAS DB1 domain admin
EXTTRAIL ./dirdat/gg
TABLE hr.*;

Next steps (as in the last post):

GGSCI ( 4> dblogin useridalias DB1 domain admin
GGSCI ( as [email protected]) 5> register extract extrcdc1 database
Extract EXTRCDC1 successfully registered with database at SCN 1863433.
GGSCI ( as [email protected]) 6> add extract extrcdc1, integrated tranlog, begin now
EXTRACT added.
GGSCI ( as [email protected]) 7> add exttrail ./dirdat/gg, extract extrcdc1, megabytes 5

GGSCI ( as [email protected]) 10> edit params dppump1


EXTRACT dppump1
RMTHOST oelgg2, MGRPORT 7809
RMTTRAIL ./dirdat/jj
TABLE hr.*;

Next steps (as in the last post):

GGSCI ( as [email protected]) 12> add extract dppump1, exttrailsource ./dirdat/gg
EXTRACT added.
GGSCI ( as [email protected]) 13> add rmttrail ./dirdat/jj, extract dppump1, megabytes 5
GGSCI ( as [email protected]) 23> start extract *
Sending START request to MANAGER ...
Sending START request to MANAGER ...
GGSCI ( as [email protected]) 24> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING                                           
EXTRACT     RUNNING     DPPUMP1     00:00:00      00:00:02    
EXTRACT     RUNNING     EXTRCDC1    00:00:07      00:00:07    

On the target system (as in the last post):

GGSCI ( as [email protected]) 26> edit params replcdd


REPLICAT replcdd
DISCARDFILE ./dirrpt/replcdd.dsc, purge
USERIDALIAS DB2 domain admin
MAP hr.*, TARGET hr.*;

Configure the replicat (as in the last post):

GGSCI ( 1> dblogin useridalias DB2 domain admin
Successfully logged into database.
GGSCI ( as [email protected]) 2> add replicat replcdd, integrated, exttrail ./dirdat/jj

Record the SCN of the source database:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    2040227
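If the source database is active, it is worth checking that no long-running transactions are open at the moment you record the SCN; otherwise changes from a transaction that started before the recorded SCN could be lost (see also MOS note 1347191.1 mentioned in the comments). A quick check could look like this (just a sketch, nothing GoldenGate specific):

```sql
-- Any rows returned here are open transactions; wait until this is empty
-- (or pick an SCN older than the oldest START_SCN) before recording the SCN.
select t.start_scn, t.start_time, s.sid, s.username
  from v$transaction t
  join v$session s on s.taddr = t.addr
 order by t.start_scn;
```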
Having the SCN, we can now do an export of the “HR” schema on the source:

SQL> create or replace directory gg as '/var/tmp';

Directory created.

SQL> !expdp userid="' / as sysdba'" schemas=hr dumpfile=exphr.dmp logfile=exphr.log directory=gg flashback_scn=2040227

Transfer this to the target environment and do the import:
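The transfer itself is just a file copy; for this setup (dump file in /var/tmp on the source, target host oelgg2 as used in the pump parameters) it could be as simple as:

```
scp /var/tmp/exphr.dmp oelgg2:/var/tmp/
```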

SQL> create or replace directory gg as '/var/tmp';

Directory created.

SQL> !impdp userid="' / as sysdba'" schemas=hr dumpfile=exphr.dmp logfile=imphr.log directory=gg
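After the import, a quick sanity check that the object counts on the target match the source does not hurt (a simple sketch; run the same query on both sides and compare):

```sql
select object_type, count(*)
  from dba_objects
 where owner = 'HR'
 group by object_type
 order by object_type;
```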

The rest is the same as in the last post. Start the replicat on the target:

GGSCI ( 6> start replicat REPLCDD, aftercsn 2040227

Sending START request to MANAGER ...
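To verify that the replicat really passed the CSN and is applying changes, the standard GGSCI commands can be used (output omitted here):

```
GGSCI> info replicat replcdd, detail
GGSCI> stats replicat replcdd
```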

Do some modifications on the source:

SQL> update hr.countries set COUNTRY_NAME = 'Zimbabwe11' where COUNTRY_NAME = 'Zimbabwe10';

1 row updated.

SQL> commit;

And then check the modification on the target:

SQL> select * from hr.countries where country_name like '%Zimba%';

CO COUNTRY_NAME                             REGION_ID
-- ---------------------------------------- ----------
ZW Zimbabwe11                                        4

SQL> select instance_name from v$instance;



Quite easy, if you get the basic steps for configuring the replicat and the extracts right.
The next post will look into how you can suspend replication for end of day processing and restart it afterwards automatically.


  • GregG says:

    do we need HANDLECOLLISIONS on the replicat in a real-world scenario?

  • Daniel Westermann says:

    Hi G,

    as the replicat is stopped in the scenario described here and no modifications happened, there should not be any collisions. I’d prefer to have this enabled for production setups when the source database is active during the initial load.


  • Suresh says:


    Is it sufficient if we give the aftercsn only while starting the replicat? Is it not required to mention aftercsn while starting the extract?


  • Suresh says:

    Hi Daniel,

    We are using an Active-Passive Data Guard environment, and at the time of go-live the standby would be converted to Read/Write; then we would use “replicat aftercsn”.

    In our scenario, should we create the replicat while the target is still a standby and then use “replicat aftercsn” once the database is in Read/Write, or create the replicat once the standby is converted to Read/Write and then use “replicat aftercsn”?



  • Daniel Westermann says:

    Hi Suresh,

    you should use the “standby became primary scn”


  • Suresh says:

    Thanks a lot Daniel

  • CT says:

    What if you have a hot system, active and you can’t afford to stop the app?
    1. Before query for scn, maybe make sure you have no long running transactions?
    2. Then capture scn.
    3. No handlecollisions needed if aftercsn is used. No overlap should exist.
    Key here is that you don’t capture an SCN during a long running transaction?

    • Daniel Westermann says:


      yes, exactly: you have to make sure that you have no transactions running when you catch the SCN. This way you don’t need handlecollisions (which is not a good way anyway). Or you take an SCN which is earlier than the oldest open transaction. You can check MOS note 1347191.1 for more details.


  • Gyan says:

    Hi Daniel,

    We are following Doc ID 1276058.1 BP for GG and using expdp for the instantiation of the target. Our source DB size is 6TB. We started the extract and pump processes. Now the question is when we can start the expdp (SCN based) of the source DB, because the extract will take time to capture logs for all tables and will add supplemental log columns. So is there any way to check that my source DB is ready, with the extract process capturing logs for all tables, so that no transactions will be missed/lost when we start impdp followed by the start of the replicat (AFTERCSN clause)? Please advise.

    Source is running on Solaris 11 on sparc 64 bit
    Target is running on Solaris 11 on sparc 64 bit
    GG version on both source and target is

    Gyan Prakash Bajpai

    • Daniel Westermann says:

      Hi Gyan Prakash Bajpai,

      when you have the extract running, everything is captured from there on. After that, you only need to take care of one thing: at the time you start expdp with a specific SCN there should be no long-running transactions (check v$transaction). Typically you do that at a time of the day when there is less load on the system. Does that answer your question?


  • ravi says:

    Excellently documented steps. The steps you have made here are for an environment where the source and target tables are the same.

    My doubt is: we have a requirement where the table name and schema name between source and target will not be the same, in a format like the one below:

    map hr.table, target emp.table_1;

    I can do the expdp; the impdp and the post activities are what I have doubts about. I can give remap tablespace, schema and table as well. I can even exclude indexes and constraints such that post import I create the index SQL with the new set of names and run it manually, but enabling the constraints is what I am kind of doubtful about. Is there a way where this is automatically taken care of easily? Is my sequence of operations correct?

    1. Create, register and start the extract/pump
    2. Create the replicat
    3. expdp the tables
    4. impdp with remap_table, remap_schema, remap_tablespace, exclude=index,constraints
    5. Generate the SQL DDL, modify the owner and table names and then rerun the SQL on the destination
    6. How to specifically enable the constraints?
    7. start replicat repname, aftercsn

    Do you have any document, blog or implementation example for the above scenario?


Daniel Westermann

Principal Consultant & Technology Leader Open Infrastructure