Tuesday, January 29, 2013

Oracle Data Pump

Today I'm going to show how I use the Oracle Data Pump utilities. Starting with Oracle 10gR1, Oracle replaced both the export and import utilities with their Data Pump counterparts: expdp and impdp respectively. The official documentation on these utilities is found in the Oracle Database Utilities 10g Release 2 (10.2) guide.

The steps outlined in this blog post are part of a database consolidation effort in which several databases from two different machines will be merged into a new Linux x86_64 server running RedHat Enterprise Linux 5.9 and Oracle RDBMS 10gR2.

The first step in this consolidation effort is of course to install and patch the new Linux x86_64 machine's OS and Oracle RDBMS 10gR2. The latest patch level of this (rather old) Oracle release is 10.2.0.5.10. I will assume that this has already been done and that an empty database has been created.

Our old Linux server has a database called VPX which is the data repository for VMware VirtualCenter. The only schema of interest in this database is the VPXADMIN schema, so this is the one we will export the data from using Oracle's expdp utility. But before we can do so, this schema needs to be able to read and write to the DATA_PUMP_DIR. But where is that? Let's connect to the VPX database and find out:

rlwrap sqlplus sys@vpx as sysdba
SQL> col directory_path for a40;
SQL> select directory_name, directory_path from dba_directories where directory_name='DATA_PUMP_DIR';

DIRECTORY_NAME   DIRECTORY_PATH
---------------- ----------------------------------------------------
DATA_PUMP_DIR    /opt/oracle/product/10.2.0_64/admin/vpx/dpdump/

We now know where the dump file will be created. If we don't have enough disk space there, or if we prefer to send our dump files elsewhere, we can create another directory. This is very simple:

sudo mkdir -p /export/oracle/datafiles
sudo su - oracle
rlwrap sqlplus sys@vpx as sysdba
SQL> create directory dpump_dir as '/export/oracle/datafiles';
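If we want to be sure the new directory was registered, we can reuse the same dictionary query as before (a quick check, nothing more):

SQL> select directory_name, directory_path from dba_directories where directory_name='DPUMP_DIR';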

Can we use this new directory with both expdp and impdp? Yes, but only from a privileged user. Here, however, we want to use the VPXADMIN schema, so we must give this user the right to read and write to this directory. Again, that is quite simple:

SQL> grant read, write on directory dpump_dir to vpxadmin;
SQL> exit

We are now ready to export the data (of course, change the password). Notice how we use the same directory name we just created above as the value of the directory= argument.

expdp vpxadmin directory=dpump_dir dumpfile=vpx.dmp logfile=vpx.log

Export: Release 10.2.0.4.0 - 64bit Production on Monday, 28 January, 2013 13:24:33
Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Password :
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user VPXADMIN
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 600
ORA-39080: failed to create queues "KUPC$C_1_20130128132434" and "KUPC$S_1_20130128132434" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1606
ORA-04031: unable to allocate 4194344 bytes of shared memory ("streams pool","unknown object","streams pool","fixed allocation callback")


The last ORA-04031 error points at the streams pool: it is too small for the queues Data Pump needs to create. A note on My Oracle Support explains that to fix this particular problem, we must do this:

rlwrap sqlplus sys@vpx as sysdba
SQL> alter system set streams_pool_size=10M scope=spfile;
SQL> shutdown immediate;
SQL> startup;
SQL> exit
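Once the instance is back up, a quick sanity check confirms the new setting took effect (I just eyeball the parameter; adapt as you see fit):

rlwrap sqlplus sys@vpx as sysdba
SQL> show parameter streams_pool_size
SQL> exit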

Now if we try again, what do we get?

Export: Release 10.2.0.4.0 - 64bit Production on Monday, 28 January, 2013 13:24:33
Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Password :
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation

Damned! What does that mean? A quick search on MOS turns up Doc ID 1305166.1 - Errors ORA-39002 ORA-39070 ORA-29283 ORA-6512 When Using DataPump Export (EXPDP) or Import (IMPDP).


In our case, we are not using RAC, so it's either a) our schema doesn't have read/write access to the directory specified, or b) the oracle user does not have read/write access to the directory at the operating system level. That's an easy fix:

sudo chown -R oracle:oinstall /export/oracle
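A quick look at the directory confirms the new ownership (the exact output will vary with your umask):

ls -ld /export/oracle/datafiles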

If we are on a Windows machine, then Doc ID 858401.1 - DataPump Export (EXPDP) To A Windows Mapped Network Drive Returns Errors ORA-39002 ORA-39070 ORA-29283 ORA-6512 says we need to make sure that both the listener and the database have been started with the exact same username. In this example, it turned out that the listener was started with the DOMAIN\oraclewh user while the database was started with the Local System account. So I changed both of these services to run with the local DBHOST1\oracle user and restarted both services. Once that was done, the export worked.

Now when we try again, we do get the expected result (with a lot of lines removed for this blog post):

expdp vpxadmin directory=dpump_dir dumpfile=vpx.dmp logfile=vpx.log

Export: Release 10.2.0.4.0 - 64bit Production on Monday, 28 January, 2013 13:31:20
Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Password :
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "VPXADMIN"."SYS_EXPORT_SCHEMA_01":  vpxadmin/******** directory=dpump_dir dumpfile=vpx.dmp logfile=vpx.log 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 91.43 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/JOB
. . exported "VPXADMIN"."VPX_EVENT_ARG"                  25.64 MB  387348 rows
. . exported "VPXADMIN"."VPX_EVENT"                      25.92 MB  272741 rows
. . exported "VPXADMIN"."VPX_HOST"                       1.523 MB       2 rows

[... 8< lines deleted 8< ...]


Master table "VPXADMIN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for VPXADMIN.SYS_EXPORT_SCHEMA_01 is:
  /export/oracle/datafiles/vpx.dmp
Job "VPXADMIN"."SYS_EXPORT_SCHEMA_01" successfully completed at 13:32:04

Good, that was better. Now we must transfer this file to the new Linux machine named otto.company.com. That's quite easy.

scp /export/oracle/datafiles/vpx.dmp otto.company.com:/u01/app/oracle/admin/meta/dpump
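If you want to make sure the file arrived intact, comparing checksums on both machines is cheap insurance (optional; I'm assuming md5sum is available on both hosts):

md5sum /export/oracle/datafiles/vpx.dmp
ssh otto.company.com md5sum /u01/app/oracle/admin/meta/dpump/vpx.dmp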

Before we can import the data, we must gather some information about the VPXADMIN user. Why? Because we will need to recreate that user on the target database.

rlwrap sqlplus sys@vpx as sysdba
SQL> select username, default_tablespace, temporary_tablespace, profile from dba_users where username='VPXADMIN';

USERNAME     DEFAULT_TABLESPACE   TEMPORARY_TABLESPACE   PROFILE
------------ -------------------- ---------------------- --------
VPXADMIN     VPX                  TEMP                   DEFAULT

Ah ha! It's using the VPX tablespace as the default. Let's take a look at what it is.

SQL> select file_name, bytes/1024/1024 MB, autoextensible from dba_data_files where tablespace_name='VPX';

FILE_NAME                                 MB  AUT
----------------------------------------- --- ---
/u02/oradata/simone/vpx01.dbf             512 YES

Let's also grab the user's password hash so that we can recreate the account with the same password on the new database.

SQL> select password from dba_users where username='VPXADMIN';

PASSWORD
--------------------------
D3645DAA57EA91AB

With all this information in hand, we can now shut down this old VPX database instance to make sure clients don't try to connect to it.

rlwrap sqlplus sys@vpx as sysdba
SQL> shutdown immediate;
SQL> exit

We must also change our tnsnames.ora file to point to our new server.
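Here is roughly what the entry could look like on the old server; the host, port and service name match what we will register on the new machine later in this post:

<tnsnames.ora>
VPX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = otto.company.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = vpx.company.com)
    )
  )
</tnsnames.ora>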


We can't test this new tnsnames entry yet because the service name has not been configured on the new machine. So let's do this. Connect to the new server and switch to the oracle user.

ssh otto.company.com
sudo su - oracle

Again, we need to find out what the DATA_PUMP_DIR is on this machine.

rlwrap sqlplus sys@meta as sysdba
SQL> select directory_name, directory_path from dba_directories where directory_name='DATA_PUMP_DIR';

DIRECTORY_NAME   DIRECTORY_PATH
---------------- ----------------------------------------------------
DATA_PUMP_DIR    /u01/app/oracle/product/10.2/db_1/rdbms/log/

SQL> exit

In this case, let's use this directory as is. We thus need to move the dump file into this directory.

mv /u01/app/oracle/admin/meta/dpump/vpx.dmp /u01/app/oracle/product/10.2/db_1/rdbms/log/

As always, in order to use the dump file, we first need to create the user. But to create the user, we need to create the tablespace first. Once the tablespace is created, we create the user and then grant it some rights. The exact rights you need to grant depend on the schema requirements, of course. This is just an example.

SQL> create tablespace vpx datafile '/u02/oradata/meta/vpx01.dbf' size 512M reuse extent management local segment space management auto;
SQL> alter database datafile '/u02/oradata/meta/vpx01.dbf' autoextend on next 512K maxsize 2048M;
SQL> create user vpxadmin identified by values 'D3645DAA57EA91AB' default tablespace VPX temporary tablespace temp quota 2048M on vpx ;
SQL> grant create session, create table to vpxadmin;

Ok, don't forget to grant VPXADMIN read, write access to the DATA_PUMP_DIR.

SQL> grant read, write on directory DATA_PUMP_DIR to VPXADMIN;
SQL> exit

We can now import the data into the new database with impdp. We will run the import as the SYSTEM user; this way we have all the rights required to create the objects and load the data into the new database.

impdp system@meta directory=data_pump_dir dumpfile=vpx.dmp schemas=vpxadmin logfile=vpx.log
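While the job is running, it can be monitored from a second session; a minimal example (any account with the DBA role will do):

rlwrap sqlplus system@meta
SQL> select job_name, operation, state from dba_datapump_jobs;
SQL> exit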

Once the import is over, we need to assign the service name VPX to the database.

rlwrap sqlplus sys@meta as sysdba
SQL> show parameter service;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------------
service_names                        string      meta.company.com, vpx.company.com

SQL> alter system set service_names='meta.company.com, vpx.company.com' scope=both sid='*';
SQL> alter system register;
SQL> exit;

Let's see if the new service name is registered with the listener.

lsnrctl status | grep vpx
Service "vpx.company.com" has 1 instance(s).

Good. Let's check whether the original machine can tnsping the new service.

tnsping vpx

TNS Ping Utility for Linux: Version 10.2.0.5.0 - Production on 29-JAN-2013 10:48:06
Copyright (c) 1997,  2010, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = otto.company.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = vpx.company.com)))
OK (10 msec)

Excellent! :)

Using the Network


But what if the source database server does not have enough disk space to create a local dump file? That happens more often than one might think. Let's say you have a 500 GB database stored on local disks and those disks are full. What do you do now?

You leverage the network with expdp's NETWORK_LINK keyword.

In order to use this keyword, our DBA must first create a database link in the target database which enables us to connect to the source database. Once the database link is created, we can start expdp from the target machine and use the network_link keyword to connect to the source database; the dump file will then be generated on the target machine, which has enough disk space to receive the data. What's more, we can even do this between platforms!

In our example, keep in mind this information:
  • Source database machine: dbhost1.company.com.
  • Source database OS: Microsoft Windows Server 2003 Standard x64 Edition.
  • Source database ORACLE_SID: prefdb.
  • Target database machine: otto.company.com.
  • Target database OS: RedHat Enterprise Linux 5.9 (Tikanga) x86_64.
  • Target database ORACLE_SID: orcl.
Let's do this. First we connect to the target machine and set up a new tnsnames.ora entry. Here I only show the new entry, not the entire file.

ssh otto.company.com
sudo su - oracle
vi $ORACLE_HOME/network/admin/tnsnames.ora

<tnsnames.ora>
PREFDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1.company.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prefdb.company.com)
    )
  )
</tnsnames.ora>

Let's test to see if we can ping this new entry.

tnsping prefdb

TNS Ping Utility for Linux: Version 10.2.0.5.0 - Production on 07-FEB-2013 10:23:18

Copyright (c) 1997,  2010, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1.company.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = prefdb.company.com)))
OK (0 msec)

Good! Let's now check where we can store the dump file on this machine. We need about 600 GB of disk space.

df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/os-root   9.7G  2.6G  6.6G  29% /
/dev/mapper/ora-u02   1.6T  9.0G  1.5T   1% /u02
/dev/mapper/os-var    2.0G  192M  1.7G  11% /var
/dev/mapper/os-tmp    3.9G  137M  3.6G   4% /tmp
/dev/mapper/os-u01    307G  3.4G  288G   2% /u01
/dev/sda1             251M   26M  213M  11% /boot
tmpfs                  32G     0   32G   0% /dev/shm
/dev/mapper/fra-u03   3.3T  2.1G  3.1T   1% /u03
angel.company.com:/export/home
                      770G  186G  545G  26% /nfs/home

Let's create our directory in /u03 since we have 3.1 TB of free space.

mkdir -p /u03/export

Next we connect to the target database and create the required directory and database link.

rlwrap sqlplus sys@orcl as sysdba
SQL> create directory dpdir as '/u03/export';
SQL> select directory_name, directory_path from dba_directories where directory_name='DPDIR';

DIRECTORY_NAME DIRECTORY_PATH
-------------- ------------------------------
DPDIR          /u03/export

SQL> create public database link prefdb connect to system identified by changeme using 'prefdb';

Database link created.

Alright, but is this new database link working? To test it, we need to know which schemas we want to export from the source database. In this example, we want to extract the PREF schema. So we query the source database's ALL_OBJECTS view over the link to see how many objects are owned by the PREF schema.

SQL> select count(*) from all_objects@prefdb where owner='PREF';

  COUNT(*)
----------
       229

SQL> exit

Our new database link is now working. We can also double-check that by using the same query from the source database directly instead of using the database link.
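In other words, connected directly to the source database, the same query without the link suffix should return the same count:

SQL> select count(*) from all_objects where owner='PREF';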

We now have everything in place to run expdp from our target machine. In the example below, we also export several other schemas.

expdp system schemas=pref,pref_new,pref_new_ro,pref_read directory=dpdir dumpfile=prefdb.dmp logfile=prefdb.log network_link=prefdb

Export: Release 10.2.0.5.0 - 64bit Production on Thursday, 07 February, 2013 10:12:19

Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Password: 

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** schemas=pref,pref_new,pref_new_ro,pref_read directory=dpdir dumpfile=prefdb.dmp logfile=prefdb.log network_link=prefdb 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 538.77 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

[... 8< lines deleted 8< ...]

. . exported "PREF_NEW"."TMP_PROT"            0 KB       0 rows
. . exported "PREF_NEW_RO"."TMP_PROT"         0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
****************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /u03/export/prefdb.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 12:16:23

Hmmm, we have two errors. The log file shows this:

grep ^ORA /u03/export/prefdb.log 

ORA-31679: Table data object "PREF_NEW"."PARAMETER_FILE" has long columns, and longs can not be loaded/unloaded using a network link
ORA-31679: Table data object "PREF_NEW"."PARAMETER_FILE_OLD" has long columns, and longs can not be loaded/unloaded using a network link

How do we fix this? Well, we can't if we use NETWORK_LINK. That's what sections 4.3 and 8.11 say in Doc ID 553337.1 - Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions [Video]. We also appear to have hit bug 6630677, which is explained in Doc ID 6630677.8 - IMPDP skips table exported just before table with LONG column using NETWORK_LINK.
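If you want to know up front which tables will hit this restriction, a quick look at the source data dictionary over the database link does the trick (a sketch; adjust the owner filter to your own schemas):

SQL> select owner, table_name, column_name from dba_tab_columns@prefdb where owner like 'PREF%' and data_type='LONG';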

So now what do we do? Well, we can either a) upgrade the database to 11.2, which is not affected by this bug, or b) dump the file to the local file system.

Since we just want to get rid of this old Windows machine, I'm not going to bother upgrading it. I just installed a USB drive and let it slowly dump to it.

Increasing Speed


In our example above, we exported about 600 GB of data. It took quite some time. Of course, the USB drive was the bottleneck. But let's say we had a normal SATA, SCSI or FC LUN attached to the machine with enough disk space. How can we speed up the export?

Simple : use the PARALLEL keyword.

This keyword enables us to use multiple worker processes to extract or load data. The idea is to have the same number of dump files as worker processes, and not to have more workers than the number of available CPUs on the machine.

So, with our current example, our export could be sped up with this command:

expdp system@prefdb directory=dpdir dumpfile=prefdp%u.dmp logfile=prefdb.exp.log schemas=pref,pref_new,pref_new_ro,pref_read parallel=8

Notice how the DUMPFILE keyword has also been modified to use the %u substitution variable, which expands to an incrementing two-digit number. This lets us change the PARALLEL setting and automatically get a matching number of dump files. Quite handy!

But how much of a difference does it make? Well, using a NETWORK_LINK and PARALLEL=1, the previous expdp took 2 h 15 to create a 29 GB dump file. Using local storage with PARALLEL=8, those 29 GB took about half an hour. Much faster! :)

But I had this error on the Windows machine:

UDE-00008: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2745
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3712
ORA-06512: at line 1

Another trip to My Oracle Support revealed Doc ID 549781.1 - Data Pump Client Gets UDE-8 ORA-31626 ORA-39086. According to this doc, if the log file shows that the export completed successfully, then we can safely ignore this error. So let's take a look at the log file...

Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  \\DBHOST1\PREFDB\PREFDP01.DMP
  \\DBHOST1\PREFDB\PREFDP02.DMP
  \\DBHOST1\PREFDB\PREFDP03.DMP
  \\DBHOST1\PREFDB\PREFDP04.DMP
  \\DBHOST1\PREFDB\PREFDP05.DMP
  \\DBHOST1\PREFDB\PREFDP06.DMP
  \\DBHOST1\PREFDB\PREFDP07.DMP
  \\DBHOST1\PREFDB\PREFDP08.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 15:21:37

...and sure enough, the export was successful.

We just need to send those dump files to the target machine and import them. We'll use WinSCP to copy the files from the Windows machine to the RedHat Linux server and drop them in /u03/export.

Once the transfer is done, reconnect to the source database and list the tablespaces we just exported. The idea here is to list the default tablespaces of the schemas we exported, which in this example are PREF, PREF_NEW, PREF_NEW_RO and PREF_READ. The reason is quite simple: impdp will not create the tablespaces for us.

sqlplus sys@prefdb as sysdba
SQL> select username, default_tablespace, temporary_tablespace from dba_users where username like 'PREF%' order by default_tablespace;

USERNAME             DEFAULT_TABLESPACE             TEMPO
-------------------- ------------------------------ -----
PREF                 PREF_DATA                      TEMP
PREF_NEW             PREF_DATA                      TEMP
PREF_NEW_RO          PREF_NEW_RO_DATA               TEMP
PREF_READ            USERS                          TEMP

SQL> select tablespace_name, file_name from dba_data_files where tablespace_name like 'PREF%' order by tablespace_name, file_name;


TABLESPACE_NAME    FILE_NAME
------------------ ---------------------------------------------
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA01.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA02.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA03.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA04.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA05.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA06.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA07.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA08
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA08.DBF
PREF_DATA          D:\ORADATA\PREFDB\PREF_DATA09.DBF
PREF_IDX           D:\ORADATA\PREFDB\PREF_IDX01.DBF
PREF_IDX           D:\ORADATA\PREFDB\PREF_IDX02.DBF
PREF_IDX           D:\ORADATA\PREFDB\PREF_IDX03.DBF
PREF_IDX           D:\ORADATA\PREFDB\PREF_IDX04
PREF_IDX           D:\ORADATA\PREFDB\PREF_IDX05.DBF
PREF_NEW_RO_DATA   D:\ORADATA\PREFDB\PREF_NEW_RO_DATA01.DBF

So now we know which tablespaces we need to create in the target database before we perform the import. Let's connect to the target server and switch to the Oracle user.

ssh otto.company.com
sudo su - oracle

We then need to connect to the target database and recreate the tablespaces that we wish to import.

-- PREF_DATA creation.

SQL> create tablespace pref_data datafile '/u02/oradata/simone/pref_data01.dbf' size 1024M autoextend on next 1024k maxsize 5120M extent management local;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data02.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data03.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data04.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data05.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data06.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data07.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data08.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data09.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_data add datafile '/u02/oradata/simone/pref_data10.dbf' size 1024M autoextend on next 1024k maxsize 5120M;

-- PREF_IDX creation.

SQL> create tablespace pref_idx datafile '/u02/oradata/simone/pref_idx01.dbf' size 1024M autoextend on next 1024k maxsize 5120M extent management local;
SQL> alter tablespace pref_idx add datafile '/u02/oradata/simone/pref_idx02.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_idx add datafile '/u02/oradata/simone/pref_idx03.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_idx add datafile '/u02/oradata/simone/pref_idx04.dbf' size 1024M autoextend on next 1024k maxsize 5120M;
SQL> alter tablespace pref_idx add datafile '/u02/oradata/simone/pref_idx05.dbf' size 1024M autoextend on next 1024k maxsize 5120M;

-- PREF_NEW_RO_DATA creation.

SQL> create tablespace pref_new_ro_data datafile '/u02/oradata/simone/pref_new_ro01.dbf' size 128M autoextend on next 1024k maxsize 5120M extent management local;

And let's check what we just built.

SQL> select tablespace_name, file_name, bytes/1024/1024 MB from dba_data_files where tablespace_name like 'PREF%' order by tablespace_name, file_name;

TABLESPACE_NAME    FILE_NAME                               MB
------------------ --------------------------------------- ----------
PREF_DATA          /u02/oradata/simone/pref_data01.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data02.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data03.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data04.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data05.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data06.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data07.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data08.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data09.dbf     1024
PREF_DATA          /u02/oradata/simone/pref_data10.dbf     1024
PREF_IDX           /u02/oradata/simone/pref_idx01.dbf      1024
PREF_IDX           /u02/oradata/simone/pref_idx02.dbf      1024
PREF_IDX           /u02/oradata/simone/pref_idx03.dbf      1024
PREF_IDX           /u02/oradata/simone/pref_idx04.dbf      1024
PREF_IDX           /u02/oradata/simone/pref_idx05.dbf      1024
PREF_NEW_RO_DATA   /u02/oradata/simone/pref_new_ro01.dbf   128
SQL> exit

Ok, we're good to go for the import now.

impdp system schemas=pref,pref_new,pref_new_ro,pref_read parallel=8 dumpfile=PREFDP%u.DMP directory=dpdir logfile=prefimp.log

Import: Release 10.2.0.5.0 - 64bit Production on Friday, 08 February, 2013 11:45:19

Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Password: 

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** schemas=pref,pref_new,pref_new_ro,pref_read parallel=8 dumpfile=PREFDP%u.DMP directory=dpdir logfile=prefimp.log 
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
ORA-39083: Object type ROLE_GRANT failed to create with error:
ORA-01919: role 'PREF_NEW_READ' does not exist
Failing sql is:
 GRANT "PREF_NEW_READ" TO "PREF_NEW_RO"
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

[... 8< lines deleted 8< ...]

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW01" prematurely terminated
ORA-31671: Worker process DW01 had an unhandled exception.
ORA-12801: error signaled in parallel query server P001
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
ORA-06512: at "SYS.KUPW$WORKER", line 1423
ORA-06512: at line 2

Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" stopped due to fatal error at 12:05:22

Ah nice, another error! Another trip to MOS reveals Doc ID 1325408.1 - Error ORA-28604 Obtained During DataPump Import. According to this article, we should have had some additional debug messages. But we just didn't, and even the alert log does not show us anything useful. So we'll drop everything and start again, this time without the PARALLEL keyword.

But before we redo the import, we must first clean up a little. This will prevent a whole bunch of error messages.

rlwrap sqlplus sys@orcl as sysdba;
SQL> drop tablespace pref_data including contents and datafiles;
SQL> drop tablespace pref_idx including contents and datafiles;
SQL> drop tablespace pref_new_ro_data including contents and datafiles;
SQL> drop user pref cascade;
SQL> drop user pref_new cascade;
SQL> drop user pref_new_ro cascade;
SQL> drop user pref_read cascade;

Then rerun the job without the parallel keyword.

cd /u03/export
impdp system schemas=pref,pref_new,pref_new_ro,pref_read dumpfile=PREFDP%u.DMP directory=dpdir logfile=prefimp.log

That will take care of it. But we will have some database link errors such as these:

ORA-04052: error occurred when looking up remote object STAGE.TABLE@PODS.COMPANY.COM

How do we fix this? Simple: a) create the tnsnames entry and then b) log in to the database as the pref_new user and create the database link.

vi $ORACLE_HOME/network/admin/tnsnames.ora

<tnsnames.ora>
PODS =
 (DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1.company.com)(PORT = 1521))
   (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = pods.company.com)
   )
 )
</tnsnames.ora>

Then create the database link.

rlwrap sqlplus pref_new@orcl
SQL> create database link pods.company.com connect to system identified by ***** using 'pods';
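-- Optional smoke test (my habit, not from the original steps): a round trip through DUAL confirms the new link works.
SQL> select * from dual@pods.company.com;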
SQL> exit;

And finally, recompile the schema.

rlwrap sqlplus sys@orcl as sysdba
SQL> exec dbms_utility.compile_schema(schema => 'PREF_NEW');
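It's also worth checking that nothing was left invalid after the recompilation; a minimal query:

SQL> select object_name, object_type from dba_objects where owner='PREF_NEW' and status='INVALID';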

And make sure this database now answers to the PREFDB service name.

SQL> show parameter service;
SQL> alter system set service_names='orcl.company.com, prefdb.company.com' scope=both sid='*';
SQL> alter system register;
SQL> exit
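As we did for the VPX service earlier, we can confirm the listener has picked up the new service name:

lsnrctl status | grep prefdb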

Excellent! Job's done :)

HTH,

DA+

2 comments:

  1. Hi,

    in version 11.2, the ORA-31679 still remains.

    Connected to: Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
    With the Real Application Clusters and Automatic Storage Management options
    .
    .
    .
    ORA-31679: Table data object "STIEXP"."PLAN_TABLE" has long columns, and longs can not be loaded/unloaded using a network link

    Cheers

    Replies
    1. Hi Anonymous,

      I don't have access to a RAC/ASM setup on 11.2 at the moment, so I'm unable to provide assistance. If you do find the answer, I'd very much like to learn it.

      Many thanks,

      David

