I’m currently working on a project to migrate an 11g data warehouse database from Solaris to Linux on Intel (an endian change). We use the cross-platform incremental backup method described in My Oracle Support Note 1389592.1 for that.
I.e. incremental backups are applied to the database files and during the migration the recovered datafiles are attached to the target database via the transportable tablespace method. When testing the transportable tablespace metadata import I got an error for one table:


Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"DWHT"."TDWHTAB" failed to create with error:
ORA-01843: not a valid month
Failing sql is:
CREATE TABLE "DWHT"."TDWHTAB" (...,"PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE,...

I.e. the metadata of all tables in the tablespace was imported successfully except for one table. The reason for the error was a default definition for column “PROCESSING_TM” without a TO_DATE cast, i.e. it relied on the NLS_DATE_FORMAT setting. E.g.


SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE)
  *
ERROR at line 1:
ORA-01843: not a valid month
 
SQL> alter session set nls_date_format='dd-mm-yyyy';
 
Session altered.
 
SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
 
Table created.

There are various reasons why a metadata import may fail for specific tables (see e.g. My Oracle Support Note 1082116.1 for another one). The question was: how to restart the import without going through the whole process of copying the datafiles from the source database again? As we imported the metadata through a network link, the tablespace on the source database and the tablespace on the target database were still set to “READ ONLY”.
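
A quick way to double-check that is to query the tablespace status on both sides (assuming the tablespace name DWH_DATA that is used later on):


-- run on source and target: the status should still be READ ONLY
select tablespace_name, status
  from dba_tablespaces
 where tablespace_name = 'DWH_DATA';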

So first of all I fixed the error in the table definition on the source database:


alter table "DWHT"."TDWHTAB" modify ("PROCESSING_TM" DATE DEFAULT TO_DATE('01-01-0001', 'DD-MM-YYYY'));

That was possible even with the table’s associated tablespace in READ ONLY mode, as it is just a metadata update in the data dictionary and does not touch the tablespace itself.
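
To verify the fix, the new column default can be read back from the data dictionary on the source database (DATA_DEFAULT is a LONG column, so a "set long" may be needed in SQL*Plus):


-- verify the corrected default expression on the source database
select data_default
  from dba_tab_columns
 where owner = 'DWHT'
   and table_name = 'TDWHTAB'
   and column_name = 'PROCESSING_TM';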

On the target database I cleaned up the tablespace of the failed table metadata import:


drop tablespace DWH_DATA including contents;

It's of course important NOT to specify "AND DATAFILES" when dropping the tablespace; that way the datafiles remain on disk and do not have to be copied from the source again.
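
If you are unsure about the exact datafile paths (they have to be listed again in transport_datafiles for the restarted import), you can look them up on the target before the drop, assuming the tablespace name DWH_DATA:


-- run on the target BEFORE dropping the tablespace: the listed paths
-- are the ones to keep on disk and to reuse in transport_datafiles
select file_name
  from dba_data_files
 where tablespace_name = 'DWH_DATA';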

In my case I actually transported 146 tablespaces and modified the impdp parameter file xttplugin.par to contain only the single tablespace that contained the failing table. Of course you have to drop and re-import all tablespaces that contain tables with dependencies on each other; in my case that was not an issue, as my tables had no dependencies on tables in other tablespaces.
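
One way to check that the reduced tablespace set is still self-contained is the standard DBMS_TTS check on the source database:


-- check on the source database whether the tablespace set is self-contained;
-- TRUE also includes referential integrity constraints in the check
exec dbms_tts.transport_set_check('DWH_DATA', TRUE);

-- any dependencies on objects outside the set show up here
select * from transport_set_violations;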

Afterwards I could just restart my metadata import with the single tablespace:


$ cat xttplugin2.par
directory=DATA_PUMP_DIR
logfile=tts_imp2.log
network_link=ttslink
transport_full_check=no
transport_tablespaces=DWH_DATA
transport_datafiles='/u01/DWH/oradata/DWH_DATA_001.dbf','/u01/DWH/oradata/DWH_DATA_002.dbf'
$ impdp parfile=./xttplugin2.par
Password: / as sysdba

So to restart a failed transportable tablespace metadata import, just fix the root cause, drop the associated tablespace on the target database without dropping the datafiles, and restart the import. That comes in handy especially in cross-platform incremental backup migration scenarios.
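
And once the import has finished successfully, do not forget the usual final step of the procedure: setting the plugged-in tablespace back to READ WRITE on the target (and on the source, if it is still needed there):


-- after a successful metadata import, make the tablespace writable again
alter tablespace DWH_DATA read write;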