This post takes a look at how to migrate the Oracle 12cR1 sample schemas to PPAS 9.4 (Postgres Plus Advanced Server 9.4). I’ll not dig into how to install PPAS, as this was described in detail some time ago. Just follow this post if you need a setup guide.
If you wonder why I am doing this there are two reasons:
- to see if it works, to have fun and to learn
- PostgreSQL and PPAS are real alternatives to Oracle, so migrating applications from one to the other can make a lot of sense
To download the Oracle database examples, point your browser to the OTN download page and download the “Oracle Database 12c Release 1 Examples (12.1.0.2.0) for Linux x86-64” file.
Installing the sample is quite easy: unzip, start OUI, next, next, next:
oracle@oel12102:/u01/app/oracle/software/ [PROD] unzip linuxamd64_12102_examples.zip
oracle@oel12102:/u01/app/oracle/software/ [PROD] cd examples/
oracle@oel12102:/u01/app/oracle/software/examples/ [PROD] ./runInstaller
Once the scripts are available we can install the schemas. The only important point is that the schemas need to be installed in the right order. So I’ll begin with the HR schema:
@?/demo/schema/human_resources/hr_main.sql
specify password for HR as parameter 1:
Enter value for 1: manager
specify default tablespeace for HR as parameter 2:
Enter value for 2: users
specify temporary tablespace for HR as parameter 3:
Enter value for 3: temp
specify password for SYS as parameter 4:
Enter value for 4: manager
specify log path as parameter 5:
Enter value for 5: /var/tmp/
After that the OE schema can be installed (You’ll need the Multimedia Option installed for this to succeed):
SQL> @?/demo/schema/order_entry/oe_main.sql
specify password for OE as parameter 1:
Enter value for 1: manager
specify default tablespeace for OE as parameter 2:
Enter value for 2: users
specify temporary tablespace for OE as parameter 3:
Enter value for 3: temp
specify password for HR as parameter 4:
Enter value for 4: manager
specify password for SYS as parameter 5:
Enter value for 5: manager
specify directory path for the data files as parameter 6:
Enter value for 6: /var/tmp/
writeable directory path for the log files as parameter 7:
Enter value for 7: /var/tmp/
specify version as parameter 8:
Enter value for 8: v3
Then we can continue with the PM schema:
SQL> @?/demo/schema/product_media/pm_main.sql
specify password for PM as parameter 1:
Enter value for 1: manager
specify default tablespeace for PM as parameter 2:
Enter value for 2: users
specify temporary tablespace for PM as parameter 3:
Enter value for 3: temp
specify password for OE as parameter 4:
Enter value for 4: manager
specify password for SYS as parameter 5:
Enter value for 5: manager
specify directory path for the PM data files as parameter 6:
Enter value for 6: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/product_media/
specify directory path for the PM load log files as parameter 7:
Enter value for 7: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/product_media/
specify work directory path as parameter 8:
Enter value for 8: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/product_media/
Then continue with the IX schema:
SQL> @?/demo/schema/info_exchange/ix_main.sql
specify password for IX as parameter 1:
Enter value for 1: manager
specify default tablespeace for IX as parameter 2:
Enter value for 2: users
specify temporary tablespace for IX as parameter 3:
Enter value for 3: temp
specify password for SYS as parameter 4:
Enter value for 4: manager
specify path for log files as parameter 5:
Enter value for 5: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/info_exchange/
specify version as parameter 6:
Enter value for 6: v3
And finally the SH schema:
SQL> @?/demo/schema/sales_history/sh_main.sql
specify password for SH as parameter 1:
Enter value for 1: manager
specify default tablespace for SH as parameter 2:
Enter value for 2: users
specify temporary tablespace for SH as parameter 3:
Enter value for 3: temp
specify password for SYS as parameter 4:
Enter value for 4: manager
specify directory path for the data files as parameter 5:
Enter value for 5: /u00/app/oracle/product/12.1.0/db_2_0/demo/schema/sales_history/
writeable directory path for the log files as parameter 6:
Enter value for 6: /u00/app/oracle/product/12.1.0/db_2_0/demo/schema/sales_history/
specify version as parameter 7:
Enter value for 7: v3
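For the record, the whole sequence above can be scripted. This is a hypothetical wrapper, not part of the Oracle distribution; it only echoes the install order here, and the actual sqlplus call is commented out because each script prompts for its parameters interactively:

```shell
# Hypothetical wrapper: run the sample schema scripts in the order used above
# (HR first, since OE asks for the HR password, PM for the OE password, etc.).
# Uncomment the sqlplus line on a real system and answer each script's prompts.
: > install_order.txt
for s in human_resources/hr_main.sql \
         order_entry/oe_main.sql \
         product_media/pm_main.sql \
         info_exchange/ix_main.sql \
         sales_history/sh_main.sql; do
  echo "Installing @?/demo/schema/$s" | tee -a install_order.txt
  # sqlplus / as sysdba "@?/demo/schema/$s"
done
```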
Once everything is installed we have the following objects available:
SQL> select owner,object_type,count(*) num_obj from dba_objects where owner in ('SH','PM','OE','IX','HR','BI') group by owner,object_type order by 1,2;

OWNER                          OBJECT_TYPE             NUM_OBJ
------------------------------ ----------------------- ----------
HR                             INDEX                           19
HR                             PROCEDURE                        2
HR                             SEQUENCE                         3
HR                             TABLE                            7
HR                             TRIGGER                          2
HR                             VIEW                             1
IX                             EVALUATION CONTEXT               2
IX                             INDEX                           17
IX                             LOB                              3
IX                             QUEUE                            4
IX                             RULE SET                         4
IX                             SEQUENCE                         2
IX                             TABLE                           17
IX                             TYPE                             1
IX                             VIEW                             8
OE                             FUNCTION                         1
OE                             INDEX                           48
OE                             LOB                             15
OE                             SEQUENCE                         1
OE                             SYNONYM                          6
OE                             TABLE                           14
OE                             TRIGGER                          4
OE                             TYPE                            37
OE                             TYPE BODY                        3
OE                             VIEW                            13
PM                             INDEX                           21
PM                             LOB                             17
PM                             TABLE                            3
PM                             TYPE                             3
SH                             DIMENSION                        5
SH                             INDEX                           23
SH                             INDEX PARTITION                196
SH                             MATERIALIZED VIEW                2
SH                             TABLE                           13
SH                             TABLE PARTITION                 56
SH                             VIEW                             1
Having the sample schemas available, we are almost ready to start the migration to PPAS 9.4. We’ll use the EDB Migration Toolkit for this, as it automates many tasks. The toolkit itself is documented here. If you do not want to read the documentation, here is the short version 🙂
As the migration toolkit uses JDBC to connect to the Oracle database, we’ll need to download the Oracle JDBC driver. I used the latest one, which is 12.1.0.2 (ojdbc7.jar) at the time of writing. This jar file needs to be copied to the following location:
enterprisedb@centos7:/home/enterprisedb/ [dummy] ls -la /etc/alternatives/jre/lib/ext/
total 11424
drwxr-xr-x. 2 root root    4096 Nov 25 14:46 .
drwxr-xr-x. 9 root root    4096 Nov 25 13:01 ..
-rw-r--r--. 1 root root 4003647 Oct 21 22:19 cldrdata.jar
-rw-r--r--. 1 root root    9444 Oct 21 22:19 dnsns.jar
-rw-r--r--. 1 root root   48732 Oct 21 22:19 jaccess.jar
-rw-r--r--. 1 root root 1204407 Oct 21 22:19 localedata.jar
-rw-r--r--. 1 root root     617 Oct 21 22:19 meta-index
-rw-r--r--. 1 root root 2023751 Oct 21 22:19 nashorn.jar
-rw-r--r--. 1 root root 3698857 Nov 25 14:46 ojdbc7.jar   <=================
-rw-r--r--. 1 root root   30448 Oct 21 22:19 sunec.jar
-rw-r--r--. 1 root root  294143 Oct 21 22:19 sunjce_provider.jar
-rw-r--r--. 1 root root  266680 Oct 21 22:19 sunpkcs11.jar
-rw-r--r--. 1 root root   77887 Oct 21 22:19 zipfs.jar
enterprisedb@centos7:/home/enterprisedb/ [dummy]
The connection parameters for the source and the target have to be specified in the toolkit.properties file, which is located in the edbmtk/etc directory of the PPAS installation:
cat /u01/app/postgres/product/9.4/ppas_1_3/edbmtk/etc/toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.242:1521:PROD
SRC_DB_USER=system
SRC_DB_PASSWORD=manager
TARGET_DB_URL=jdbc:edb://localhost:5444/orasample
TARGET_DB_USER=enterprisedb
TARGET_DB_PASSWORD=manager
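A missing key in this file only shows up when runMTK.sh fails to connect, so a quick sanity check up front can save a round trip. This is a sketch I use, with a stand-in copy of the file; point the loop at the real edbmtk/etc/toolkit.properties on your system:

```shell
# Sanity check: verify that the six connection keys used above are present.
# The file written here is a stand-in copy of toolkit.properties.
cat > toolkit.properties <<'EOF'
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.242:1521:PROD
SRC_DB_USER=system
SRC_DB_PASSWORD=manager
TARGET_DB_URL=jdbc:edb://localhost:5444/orasample
TARGET_DB_USER=enterprisedb
TARGET_DB_PASSWORD=manager
EOF

missing=0
for key in SRC_DB_URL SRC_DB_USER SRC_DB_PASSWORD \
           TARGET_DB_URL TARGET_DB_USER TARGET_DB_PASSWORD; do
  grep -q "^${key}=" toolkit.properties || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "toolkit.properties looks complete"
```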
I want the Oracle sample schemas in their own database in PPAS, so I created the ORASAMPLE database:
(enterprisedb@[local]:5444) [postgres] > create database orasample;
CREATE DATABASE
Time: 624.415 ms
Ready for migrating the first schema?
enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] pwd
/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin
enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] ./runMTK.sh -fastCopy -logBadSQL -fetchSize 10000 -loaderCount 1 -dropSchema true HR
The result:
Running EnterpriseDB Migration Toolkit (Build 48.0.1) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.22.242:1521:PROD
user =system
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5444/orasample
user =enterprisedb
password=******
Connecting with source Oracle database server...
Connected to Oracle, version 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options'
Connecting with target EnterpriseDB database server...
Connected to EnterpriseDB, version '9.4.1.3'
Importing redwood schema HR...
Creating Schema...hr
Creating Sequence: DEPARTMENTS_SEQ
Creating Sequence: EMPLOYEES_SEQ
Creating Sequence: LOCATIONS_SEQ
Loading Table Data in 8 MB batches...
Creating Table: COUNTRIES
Loading Table: COUNTRIES ...
[COUNTRIES] Migrated 25 rows.
[COUNTRIES] Table Data Load Summary: Total Time(s): 0.075 Total Rows: 25
Creating Table: DEPARTMENTS
Loading Table: DEPARTMENTS ...
[DEPARTMENTS] Migrated 27 rows.
[DEPARTMENTS] Table Data Load Summary: Total Time(s): 0.038 Total Rows: 27
Creating Table: EMPLOYEES
Loading Table: EMPLOYEES ...
[EMPLOYEES] Migrated 107 rows.
[EMPLOYEES] Table Data Load Summary: Total Time(s): 0.09 Total Rows: 107 Total Size(MB): 0.0087890625
Creating Table: JOBS
Loading Table: JOBS ...
[JOBS] Migrated 19 rows.
[JOBS] Table Data Load Summary: Total Time(s): 0.011 Total Rows: 19
Creating Table: JOB_HISTORY
Loading Table: JOB_HISTORY ...
[JOB_HISTORY] Migrated 10 rows.
[JOB_HISTORY] Table Data Load Summary: Total Time(s): 0.026 Total Rows: 10
Creating Table: LOCATIONS
Loading Table: LOCATIONS ...
[LOCATIONS] Migrated 23 rows.
[LOCATIONS] Table Data Load Summary: Total Time(s): 0.03 Total Rows: 23 Total Size(MB): 9.765625E-4
Creating Table: REGIONS
Loading Table: REGIONS ...
[REGIONS] Migrated 4 rows.
[REGIONS] Table Data Load Summary: Total Time(s): 0.025 Total Rows: 4
Data Load Summary: Total Time (sec): 0.489 Total Rows: 215 Total Size(MB): 0.01
Creating Constraint: JHIST_EMP_ID_ST_DATE_PK
Creating Constraint: EMP_EMP_ID_PK
Creating Constraint: EMP_EMAIL_UK
Creating Constraint: JOB_ID_PK
Creating Constraint: DEPT_ID_PK
Creating Constraint: LOC_ID_PK
Creating Constraint: COUNTRY_C_ID_PK
Creating Constraint: REG_ID_PK
Creating Constraint: JHIST_DEPT_FK
Creating Constraint: JHIST_EMP_FK
Creating Constraint: JHIST_JOB_FK
Creating Constraint: DEPT_MGR_FK
Creating Constraint: EMP_MANAGER_FK
Creating Constraint: EMP_JOB_FK
Creating Constraint: EMP_DEPT_FK
Creating Constraint: DEPT_LOC_FK
Creating Constraint: LOC_C_ID_FK
Creating Constraint: COUNTR_REG_FK
Creating Constraint: JHIST_DATE_INTERVAL
Creating Constraint: EMP_SALARY_MIN
Creating Index: LOC_COUNTRY_IX
Creating Index: LOC_STATE_PROVINCE_IX
Creating Index: LOC_CITY_IX
Creating Index: JHIST_DEPARTMENT_IX
Creating Index: JHIST_EMPLOYEE_IX
Creating Index: JHIST_JOB_IX
Creating Index: DEPT_LOCATION_IX
Creating Index: EMP_NAME_IX
Creating Index: EMP_MANAGER_IX
Creating Index: EMP_JOB_IX
Creating Index: EMP_DEPARTMENT_IX
Creating Trigger: SECURE_EMPLOYEES
Creating Trigger: UPDATE_JOB_HISTORY
Creating View: EMP_DETAILS_VIEW
Creating Procedure: ADD_JOB_HISTORY
Creating Procedure: SECURE_DML
Schema HR imported successfully.
Creating User: HR
Migration process completed successfully.
Migration logs have been saved to /home/enterprisedb/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Sequences: 3 out of 3
Tables: 7 out of 7
Constraints: 20 out of 20
Indexes: 11 out of 11
Triggers: 2 out of 2
Views: 1 out of 1
Procedures: 2 out of 2
Users: 1 out of 1

Total objects: 47
Successful count: 47
Failed count: 0
Invalid count: 0
*************************************************************
That was quite easy, wasn’t it? None of the objects failed to migrate. Let’s validate this inside PPAS. I installed PPAS in Oracle compatibility mode and therefore have the dba_* views available:
(enterprisedb@[local]:5444) [postgres] > \c orasample
You are now connected to database "orasample" as user "enterprisedb".
(enterprisedb@[local]:5444) [orasample] > select object_type,count(*) from dba_objects where schema_name = 'HR' and status = 'VALID' group by object_type;
 object_type | count
-------------+-------
 TRIGGER     |     2
 SEQUENCE    |     3
 VIEW        |     1
 PROCEDURE   |     2
 TABLE       |     7
 INDEX       |    19
(6 rows)
Exactly the same number of objects as in Oracle, and even the PL/SQL procedures are there. You don’t believe it?
(enterprisedb@[local]:5444) [orasample] > select text from dba_source where schema_name = 'HR';
                                   text
---------------------------------------------------------------------------
 CREATE OR REPLACE PROCEDURE hr.add_job_history(p_emp_id numeric, p_start_date timestamp without time zone, p_end_date timestamp without time zone, p_job_id character varying, p_department_id numeric)
 AUTHID DEFINER
 IS
 BEGIN
   INSERT INTO job_history (employee_id, start_date, end_date, job_id, department_id)
   VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
 END
...
Ok, the HR schema is a simple one. Let’s continue with the next one, SH:
enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] pwd
/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin
enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] ./runMTK.sh -fastCopy -logBadSQL -fetchSize 10000 -loaderCount 1 -dropSchema true SH
The result:
******************** Migration Summary ********************
Tables: 11 out of 11
Constraints: 14 out of 14
Indexes: 13 out of 13
Views: 1 out of 3

Total objects: 41
Successful count: 39
Failed count: 2
Invalid count: 0

List of failed objects
======================
Views
--------------------
1. SH.FWEEK_PSCAT_SALES_MV
2. SH.CAL_MONTH_SALES_MV
Not bad, but two views failed to create. Why? As we specified the “-logBadSQL” switch, there is a separate logfile containing all the SQL statements that failed:
enterprisedb@centos7:/home/enterprisedb/.enterprisedb/migration-toolkit/logs/ [CRM] pwd
/home/enterprisedb/.enterprisedb/migration-toolkit/logs
enterprisedb@centos7:/home/enterprisedb/.enterprisedb/migration-toolkit/logs/ [CRM] ls -latr | grep SH
-rw-r--r--. 1 enterprisedb enterprisedb 1286 Nov 28 15:30 mtk_bad_sql_SH_20151128032916.sql
-rw-r--r--. 1 enterprisedb enterprisedb 8097 Nov 28 15:30 mtk_SH_20151128032916.log
This file contains exactly the statements for the two views that failed to create:
-- MTK-15009: Error Creating Materialized View: FWEEK_PSCAT_SALES_MV
-- DB-42601: ERROR: syntax error at or near "PREBUILT" at position 53
-- Line 1: CREATE MATERIALIZED VIEW FWEEK_PSCAT_SALES_MV BUILD PREBUILT
--                                                             ^
CREATE MATERIALIZED VIEW FWEEK_PSCAT_SALES_MV BUILD PREBUILT
REFRESH FORCE ON DEMAND
AS
SELECT t.week_ending_day
     , p.prod_subcategory
     , sum(s.amount_sold) AS dollars
     , s.channel_id
     , s.promo_id
FROM sales s
   , times t
   , products p
WHERE s.time_id = t.time_id
  AND s.prod_id = p.prod_id
GROUP BY t.week_ending_day
       , p.prod_subcategory
       , s.channel_id
       , s.promo_id;

-- MTK-15009: Error Creating Materialized View: CAL_MONTH_SALES_MV
-- DB-42601: ERROR: syntax error at or near "PREBUILT" at position 51
-- Line 1: CREATE MATERIALIZED VIEW CAL_MONTH_SALES_MV BUILD PREBUILT
--                                                           ^
CREATE MATERIALIZED VIEW CAL_MONTH_SALES_MV BUILD PREBUILT
REFRESH FORCE ON DEMAND
AS
SELECT t.calendar_month_desc
     , sum(s.amount_sold) AS dollars
FROM sales s
   , times t
WHERE s.time_id = t.time_id
GROUP BY t.calendar_month_desc;
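When there are more than two failed statements, stripping the Oracle-only clauses by hand gets tedious. A minimal sed sketch, assuming the clauses appear exactly as in the bad-SQL file above (the input written here is a shortened stand-in; run the same sed over the real mtk_bad_sql file and review the result before executing it):

```shell
# Stand-in for a statement from the bad-SQL file; "BUILD PREBUILT" ends the
# first line and "REFRESH FORCE ON DEMAND" sits on its own line, as above.
printf '%s\n' \
  'CREATE MATERIALIZED VIEW CAL_MONTH_SALES_MV BUILD PREBUILT' \
  'REFRESH FORCE ON DEMAND' \
  'AS SELECT t.calendar_month_desc, sum(s.amount_sold) AS dollars' \
  'FROM sales s, times t WHERE s.time_id = t.time_id' \
  'GROUP BY t.calendar_month_desc;' > mv_orig.sql

# Drop the Oracle-only clauses so the DDL matches the PPAS syntax.
sed -e 's/ *BUILD PREBUILT *$//' \
    -e '/^ *REFRESH FORCE ON DEMAND *$/d' \
    mv_orig.sql > mv_fixed.sql

cat mv_fixed.sql
```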
If we take a look at the syntax for create materialized view it becomes clear why this happened:
(enterprisedb@[local]:5444) [postgres] > \h CREATE MATERIALIZED VIEW
Command:     CREATE MATERIALIZED VIEW
Description: define a new materialized view
Syntax:
CREATE MATERIALIZED VIEW table_name
    [ (column_name [, ...] ) ]
    [ WITH ( storage_parameter [= value] [, ... ] ) ]
    [ TABLESPACE tablespace_name ]
    AS query
    [ WITH [ NO ] DATA ]
The Oracle syntax is just not valid here. Maybe this is a bug in the migration toolkit, as it seems the statements are not mapped from Oracle to PPAS syntax. It is easy to fix:
(enterprisedb@[local]:5444) [orasample] > CREATE MATERIALIZED VIEW sh.FWEEK_PSCAT_SALES_MV
AS
SELECT t.week_ending_day
     , p.prod_subcategory
     , sum(s.amount_sold) AS dollars
     , s.channel_id
     , s.promo_id
FROM sh.sales s
   , sh.times t
   , sh.products p
WHERE s.time_id = t.time_id
  AND s.prod_id = p.prod_id
GROUP BY t.week_ending_day
       , p.prod_subcategory
       , s.channel_id
       , s.promo_id;
SELECT 11266
Time: 8193.370 ms
(enterprisedb@[local]:5444) [orasample] > CREATE MATERIALIZED VIEW sh.CAL_MONTH_SALES_MV
AS
SELECT t.calendar_month_desc
     , sum(s.amount_sold) AS dollars
FROM sh.sales s
   , sh.times t
WHERE s.time_id = t.time_id
GROUP BY t.calendar_month_desc;
SELECT 48
Time: 396.849 ms
Comparing the number of objects again, we should be fine:
(enterprisedb@[local]:5444) [postgres] > \c orasample
You are now connected to database "orasample" as user "enterprisedb".
(enterprisedb@[local]:5444) [orasample] > select object_type,count(*) from dba_objects where schema_name = 'SH' and status = 'VALID' group by object_type;
 object_type | count
-------------+-------
 TRIGGER     |    60
 VIEW        |     1
 TABLE       |    67
 INDEX       |    19
(4 rows)
Uh, totally different numbers. Table partitions are counted as tables here, and each partition gets a trigger created (that is how partitioning is implemented in PostgreSQL). There is no concept of partitioned indexes in PostgreSQL, but we can create indexes on the partitions. I am not sure what happened to the dimensions, as I am not familiar with this feature on the Oracle side (I’ll try to check this in more detail soon); at least nothing about them is reported in the log file. As you can see, comparing the number of objects is no longer sufficient to tell whether everything was migrated. Special Oracle features need special consideration and cannot be migrated automatically. Not everything can be migrated easily or without adjusting the application, but the migration toolkit automates a lot of work and can give a picture of what is possible and what is not.
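A rough way to spot these mismatches systematically is to spool "object_type count" pairs from both sides and diff them. This is only a sketch: the two files below are made-up stand-ins for the spooled output of the dba_objects queries above, and it assumes single-word object types (types like TABLE PARTITION would need a different field separator):

```shell
# Stand-ins for counts spooled from Oracle's dba_objects and from the
# PPAS dba_objects view for one schema.
cat > oracle_counts.txt <<'EOF'
INDEX 23
TABLE 13
VIEW 1
EOF
cat > ppas_counts.txt <<'EOF'
INDEX 19
TABLE 67
VIEW 1
EOF

# Print only the object types whose counts differ between the two sides.
awk 'NR==FNR { ora[$1]=$2; next }
     ora[$1] != $2 { print $1, "oracle="ora[$1], "ppas="$2 }' \
    oracle_counts.txt ppas_counts.txt > count_diff.txt
cat count_diff.txt
```

Anything that shows up in the diff (here INDEX and TABLE, for the partition reasons discussed above) is a candidate for a closer look in the MTK log files.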
The next schemas will be a topic for another post. Hope this helped.