Tuesday 26 April 2016

Oracle Data-Pump : Optimising LOB Export Performance



LOBs, as we all know, are used to store large, unstructured data such as video, audio, and photo images. With a LOB you can store up to 4 gigabytes of data. They are similar to LONG or LONG RAW columns but differ from them in quite a few ways.

The kinds of data a LOB can hold fall into these categories:
Simple structured data
Complex structured data
Semi-structured data
Unstructured data

LOBs are effective, but they come with a performance impact in a few scenarios.

Lately there was a task to take a backup of a non-partitioned LOB table of around 41 GB, of which the LOB itself accounted for around 33 GB. There are many options for backing up a table; the team started with the EXPDP approach, but it took a very long time, around 3 hours, and this was not acceptable.

The LOB export won't run in parallel at all, so there was no point in specifying a parallel degree in the expdp command.

Since the activity was critical and the clock was ticking, I worked out a way to do it concurrently: run Data Pump in chunks, with the chunks executing at the same time, so that each Data Pump job works on a small chunk rather than the full volume. Concurrent execution reduces the elapsed time in any case.

Here is the very simple and effective script that I designed; to my surprise, it reduced the elapsed time from 3 hours to 10-15 minutes.

What I did was logically divide the table into 10 chunks based on the ROWID block number, using the DBMS_ROWID package. These chunks are internally balanced, i.e. the data is divided almost equally among them.
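Why the chunks come out balanced can be sketched in plain bash: taking mod 10 of a run of consecutive block numbers spreads them evenly across the buckets (the block numbers here are synthetic, for illustration only):

```shell
#!/bin/bash
# Illustration only: simulate 1000 consecutive block numbers and count
# how many land in each of the 10 mod-based chunks.
chunk=10
declare -a counts
for ((blk=0; blk<1000; blk++)); do
  b=$((blk % chunk))
  counts[$b]=$(( ${counts[$b]:-0} + 1 ))
done
# Every bucket ends up with the same count (100 each here).
echo "${counts[@]}"
```

Real block numbers are not perfectly consecutive, of course, but over a large segment the residues still spread out, which is why the 10 dumps end up roughly the same size.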

The snippet below launches 10 Data Pump threads, each executing concurrently in the background. This resulted in 10 different dumps and took just 10-20 minutes for the complete export.


#!/bin/bash
chunk=10
for ((i=0;i<=9;i++));
do
   # Each background expdp exports only the rows whose block number mod 10 equals ${i}
   expdp USERNAME/Password@DB_NAME TABLES=LOB_TEST QUERY=LOB_TEST:\"where mod\(dbms_rowid.rowid_block_number\(rowid\)\, ${chunk}\) = ${i}\" directory=DMP dumpfile=lob_test_${i}.dmp logfile=log_test_${i}.log &
   echo $i
done
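One refinement worth considering (a sketch of mine, not from the original script): because each expdp runs in the background, the wrapper exits immediately; a `wait` loop makes it block until every export finishes and surfaces any failures. The `sleep 0` line is a placeholder standing in for the real expdp command:

```shell
#!/bin/bash
chunk=10
pids=()
for ((i=0; i<chunk; i++)); do
  # Placeholder for: expdp ... dumpfile=lob_test_${i}.dmp ... &
  sleep 0 &
  pids+=($!)
done
# Block until every background export finishes; remember any failure.
fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || fail=1
done
echo "exports finished, fail=$fail"
```

This way the script's exit status can be checked before starting the import, instead of assuming all 10 dumps were written successfully.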

Since there are 10 different dumps, the import will also happen in chunks, which reduces the time taken during the import activity as well.


Of course, there are other ways to do it with expdp:

a) DBMS_PARALLEL_EXECUTE, where we can define a task, generate the ROWID ranges, and use those ranges during the export.
b) If the table is partitioned, take the backup partition by partition in concurrent mode.
c) Use dbms_rowid.rowid_create.
d) Use SecureFile for storing the LOB rather than BasicFile.
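For option (a), a minimal sketch of what the DBMS_PARALLEL_EXECUTE route might look like; the task name, chunk size, and the idea of feeding each chunk's rowid range into an expdp QUERY clause are my assumptions, not from the post. The script only writes the SQL file; it would be fed to sqlplus in a real environment:

```shell
#!/bin/bash
# Sketch: generate (not execute) a SQL*Plus script that chunks LOB_TEST
# by rowid using DBMS_PARALLEL_EXECUTE. Task name and chunk_size are
# illustrative assumptions.
cat > chunk_task.sql <<'EOF'
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK('LOB_TEST_CHUNKS');
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'LOB_TEST_CHUNKS',
    table_owner => USER,
    table_name  => 'LOB_TEST',
    by_row      => TRUE,
    chunk_size  => 10000);
END;
/
-- Each chunk's rowid range could then drive one expdp QUERY clause,
-- e.g. "where rowid between :start_rowid and :end_rowid":
SELECT chunk_id, start_rowid, end_rowid
  FROM user_parallel_execute_chunks
 WHERE task_name = 'LOB_TEST_CHUNKS';
EOF
echo "wrote chunk_task.sql"
```

Unlike the mod-on-block-number approach, this asks Oracle itself to compute balanced rowid ranges, at the cost of a little extra setup.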

17 comments:

Unknown said...

Nice. Innovative ways using Oracle's internal features. Will try.

Vineet Arya said...

Thanks Bimal.

Unknown said...

How would the import command look in this case?

Vineet Arya said...

Hello @Dmitry, sorry for the late response. Below is the way to handle it. Note that IMPDP has a tendency to lock the segment, so to avoid that I have used DATA_OPTIONS=DISABLE_APPEND_HINT.

#!/bin/bash
chunk=10
for ((i=0;i<=9;i++));
do
impdp USERNAME/Password@DB_NAME directory=DMP remap_schema=SOURCE_SCHEMA:TARGET_SCHEMA dumpfile=LOB_TEST ${i}.dmp logfile=LOB_TEST _imp_log_test_${i}.log DATA_OPTIONS=DISABLE_APPEND_HINT TABLE_EXISTS_ACTION=TRUNCATE &
done

Unknown said...

Hi Vineet,

Thanks. Had to guess it anyway. Successfully applied it multiple times to export/import a 450 GB LOB table (the export took ~10 hours). Worked like a charm.

Unknown said...

Also, I think TABLE_EXISTS_ACTION has to be APPEND, as the impdp processes will append to the same table.

Vineet Arya said...

Yes, sorry for the typo. TRUNCATE will truncate the table; better would be to use CONTENT=DATA_ONLY.

rkorclappsdbastuff said...

Hi Vineet,

Very informative. Recently we had the same situation where the table size was 750 GB, with most of it consumed by LOB data, so we used the same script. However, we observed that the row count in the table and the exported rows were different; there was a huge difference, yet there were no errors in the export job. We also imported the table into another database, where it was imported successfully without any issues, but we are not able to understand the reason for the difference in rows.

There are no partitions on the table

Does this script freeze the number of rows on the table?

I would appreciate if you could guide us to troubleshoot the situation.

Regards,

Vineet Arya said...

Hello @rkorclappsdbastuff, thanks for commenting and sorry for the delayed response. All my runs were successful with no loss of rows. Can you share what you executed, both the export and import statements? Also, could you let me know whether the run was during application downtime? Consistency is very important here: each concurrent expdp job reads as of its own point in time, so rows changing while the jobs run can make the counts diverge.

syed said...

Hi Vineet,

Hope you are doing good with your health.

I have gone through your article above on optimizing the Data Pump export job and tried to create the file to test it. But while executing the file I am getting the syntax error below:
test.sh[3]: Syntax error at line 3 : `((' is not expected.
Your prompt response will be highly appreciated.

Unknown said...

Thanks for sharing the post... We are trying to enhance our process and were able to knock down the export of huge LOB segments using a query over dbms_rowid.rowid_block_number. But we are still having a hard time getting impdp to work in parallel.

The parallel import sessions hit the contention "enq: TX - row lock contention" even after using DATA_OPTIONS=DISABLE_APPEND_HINT. Has anyone actually been able to overcome the row lock contention while importing LOBs into the same object?

Unknown said...

I have used the following impdp options, but still have row lock contention between these 4 impdp sessions.
It seems that even if we work around Oracle to perform a parallel import into a LOB segment, it will still load the data serially due to row-level contention.
impdp dumpfile=lob_test_${i}.dmp logfile=lob_test_imp_log_test_${i}.log content=data_only DATA_OPTIONS=DISABLE_APPEND_HINT TABLE_EXISTS_ACTION=APPEND

SID  STATUS  EVENT                          FINAL_BLOCKING_SESSION  LAST_CALL_ET  STEXT
---  ------  -----------------------------  ----------------------  ------------  -----------------------------------------------------
 17  ACTIVE  Datapump dump file I/O                                         2966  INSERT /*+ PARALLEL("LOB_TEST",1)+*/ INTO RELATIONAL(
 26  ACTIVE  enq: TX - row lock contention                      17          2976  INSERT /*+ PARALLEL("LOB_TEST",1)+*/ INTO RELATIONAL(
 36  ACTIVE  enq: TX - row lock contention                      17          2936  INSERT /*+ PARALLEL("LOB_TEST",1)+*/ INTO RELATIONAL(
791  ACTIVE  enq: TX - row lock contention                      17          2954  INSERT /*+ PARALLEL("LOB_TEST",1)+*/ INTO RELATIONAL(


Any suggestions?

Jonny said...

@Unknown - try CONTENT=DATA_ONLY and remove TABLE_EXISTS_ACTION=APPEND

Vineet Arya said...

Hi.

While doing the import, do not use APPEND: append takes a segment-level lock, and that is why you are facing row lock contention.

You can try below

#!/bin/bash
chunk=10
for ((i=0;i<=9;i++));
do
impdp USERNAME/Password@DB_NAME directory=DMP REMAP_TABLE=LOB_TEST:LOB_TEST remap_schema=source:target dumpfile=lob_test_${i}.dmp logfile=TABLE_imp_log_test_${i}.log DATA_OPTIONS=DISABLE_APPEND_HINT CONTENT=DATA_ONLY &
done


Note that with 12c you might hit a row cache lock bug while doing a concurrent load, especially with 12.2. The bug is "Concurrently IMPDP Hang On 'row cache lock' (DC_PROPS)" (Doc ID 2417963.1), so apply the patch after confirming with Oracle.

Girish Kulkarni said...

Hi Vineet,

I ran an expdp export using the split-into-10-chunks logic for a 180 GB table; it completed within 10 minutes, whereas otherwise it was hanging.

Thanks a lot for your help.

Girish Kulkarni said...

Adding the COMPRESSION=ALL and EXCLUDE=STATISTICS parameters to expdp will speed up the export.

Girish Kulkarni said...

Hi Vineet, I am getting the below error; can you please help:

ORA-31626: job does not exist
ORA-31633: unable to create master table "SYS.SYS_EXPORT_TABLE_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1161
ORA-00955: name is already used by an existing object
ORA-06512: at "SYS.KUPV$FT", line 1054
ORA-06512: at "SYS.KUPV$FT", line 1042
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYS.SYS_EXPORT_TABLE_06"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1161
ORA-00955: name is already used by an existing object
ORA-06512: at "SYS.KUPV$FT", line 1054
ORA-06512: at "SYS.KUPV$FT", line 1042
