Saturday 19 November 2016

Oracle 12C: LOB Enhancements & Parallelism Support.


With Oracle 11g, when you create a table containing LOB data, the LOBs are stored as BasicFiles by default, and the default value for the init parameter DB_SECUREFILE is PERMITTED.

But in 12c the default storage changes to SecureFiles once the compatible parameter is set to 12.0.0.0 or higher, and SecureFiles now support parallelism <parallel insert into non-partitioned tables is the add-on feature>.

SQL> sho parameter comp

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      12.1.0

SQL> show parameter db_secure

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_securefile                        string      PREFERRED



SQL> create table CLOB_TEST ( x clob) tablespace POOL_DATA;


SQL>  select table_name, securefile from user_lobs where table_name='CLOB_TEST';

TABLE_NAME      SEC
--------------- ---
CLOB_TEST       YES


If you set the compatible parameter back to 11g (or anything below 12c), the db_securefile default transitions back to PERMITTED; in that mode a LOB is stored as a SecureFile only if you explicitly say so while creating the segment.
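For reference, forcing a SecureFile explicitly at creation time looks like this (a minimal sketch; the table name is illustrative and the POOL_DATA tablespace is reused from above):

CREATE TABLE clob_test2 ( x CLOB )
  LOB (x) STORE AS SECUREFILE (TABLESPACE pool_data)
  TABLESPACE pool_data;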

There is one more condition we must not forget: the tablespace where you create the SecureFile needs to use Automatic Segment Space Management (ASSM). In Oracle Database 11g the default mode of tablespace creation is ASSM, so your tablespace may already qualify; if not, you have to create the SecureFile in a new ASSM tablespace.

The same applies to 12c, i.e. even though the db_securefile value is PREFERRED, if your tablespace is not ASSM the LOB will not be stored as a SecureFile.
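For completeness, an ASSM tablespace is simply one created with SEGMENT SPACE MANAGEMENT AUTO — a minimal sketch (the datafile path and name are illustrative):

CREATE TABLESPACE pool_data_assm
  DATAFILE '/u01/oradata/pool_data_assm01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;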

SQL> select SEGMENT_SPACE_MANAGEMENT,TABLESPACE_NAME from dba_tablespaces;

SEGMEN TABLESPACE_NAME
------ ------------------------------
MANUAL SYSTEM
AUTO   SYSAUX
MANUAL UNDOTBS
MANUAL TEMP
AUTO   POOL_DATA
AUTO   POOL_IX
MANUAL TOOLS

SQL>  create table CLOB_TEST ( x clob) tablespace TOOLS;

Table created.

SQL>  select table_name, securefile from user_lobs where table_name='CLOB_TEST';


TABLE_NAME      SEC
--------------- ---
CLOB_TEST       NO

In Oracle 11g, parallel insert was not supported with either BasicFile or SecureFile LOBs; note the LOAD TABLE CONVENTIONAL step in the plan below.

insert /*+ parallel(testclob,4) enable_parallel_dml */  into testclob select /*+ parallel(t,4) full(t) */ * from testclob_pump t;

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name          | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |               |    82 |   321K|     2   (0)| 00:00:01 |        |      |            |
|   1 |  LOAD TABLE CONVENTIONAL | TESTCLOB      |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR         |               |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)   | :TQ10000      |    82 |   321K|     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR    |               |    82 |   321K|     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   5 |      TABLE ACCESS FULL   | TESTCLOB_PUMP |    82 |   321K|     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------

But starting with 12c, parallel INSERT <parallel DML> is allowed into non-partitioned tables with LOB columns, provided those columns are declared as SecureFile LOBs.
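Parallel DML still has to be explicitly enabled, either per statement via the 12c ENABLE_PARALLEL_DML hint (as in the insert further below) or at session level. A minimal sketch of the session-level variant, reusing the table names from this post:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ parallel(testclob_bk,4) */ INTO testclob_bk
SELECT /*+ parallel(t,4) full(t) */ * FROM testclob t;

COMMIT;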

Let's start with the 12c test case: I created two tables and then inserted data equivalent to a big LOB. Note that the tables below are created with the SecureFiles feature <db_securefile=PREFERRED>.

CREATE TABLE testclob 
  ( 
     x NUMBER, 
     y  CLOB, 
     z  VARCHAR2(4000) 
  ) tablespace POOL_DATA; 
  
  
  CREATE TABLE testclob_bk
  ( 
     x NUMBER, 
     y  CLOB, 
     z  VARCHAR2(4000) 
  ) tablespace POOL_DATA;


DECLARE
    textstring CLOB := '123';
BEGIN
    -- build a CLOB a little over 60,000 characters long
    WHILE LENGTH(textstring) <= 60000 LOOP
        textstring := textstring || '000000000000000000000000000000000';
    END LOOP;

    -- load 100,000 rows carrying that CLOB
    FOR i IN 1 .. 100000 LOOP
        INSERT INTO testclob (x, y, z)
        VALUES (0, textstring, 'done');
    END LOOP;
    COMMIT;
END;
/

Here is the insert, and you can clearly see that parallel DML is now supported, unlike in 11g.

insert /*+ parallel(testclob_bk,4) enable_parallel_dml */  into testclob_bk select /*+ parallel(t,4) full(t) */ * from testclob t;

SQL> @plan

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |          |       |       |     2 (100)|          |        |      |            |
|   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |-----PARALLEL DML supported
|   2 |   PX SEND QC (RANDOM)              | :TQ10000 |    82 |   321K|     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT                  |          |       |       |            |          |  Q1,00 | PCWP |            |
|   4 |     OPTIMIZER STATISTICS GATHERING |          |    82 |   321K|     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR             |          |    82 |   321K|     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL            | TESTCLOB |    82 |   321K|     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access(:Z>=:Z AND :Z<=:Z)


Even from the wait events below we can see that the famous 'db file sequential read' is replaced with 'direct path read', which says it all.

EVENT Reported on SQL
----------------------------------------------------------------
direct path read
direct path write




SQL_PLAN_LINE_ID BLOCKING_SE SQL_ID          SAMPLE_TIME                    EVT                       CURRENT_OBJ#
---------------- ----------- --------------- ------------------------------ ------------------------- ------------
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.05.195 PM      direct path read                367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.06.195 PM      ON CPU                          367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.07.195 PM      log file switch (checkpoi       367192
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.08.195 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.09.205 PM      direct path read                367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.10.215 PM      log file switch (checkpoi       367192
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.11.215 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.12.245 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.13.245 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.14.435 PM      direct path read                367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.15.435 PM      log file switch completio       367192
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.16.445 PM      direct path read                367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.17.445 PM      ON CPU                          367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.18.455 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.19.465 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.20.465 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.21.465 PM      direct path read                367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.22.465 PM      log file switch (checkpoi       367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.23.465 PM      log file switch (checkpoi       367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.24.475 PM      log file switch (checkpoi       367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.25.475 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.26.475 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.27.485 PM      direct path read                367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.28.485 PM      ON CPU                          367192
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.29.495 PM      ON CPU                          367192
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.30.505 PM      direct path read                367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.31.505 PM      ON CPU                          367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.32.535 PM      direct path read                367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.33.535 PM      ON CPU                          367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.34.545 PM      log file switch (checkpoi       367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.35.555 PM      ON CPU                          367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.36.555 PM      direct path read                367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.37.555 PM      log file switch (checkpoi       367192
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.38.565 PM      ON CPU                          367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.39.575 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.40.575 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.41.575 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.42.575 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.43.585 PM      direct path read                367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.44.595 PM      log file switch completio       367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.45.595 PM      log file switch (checkpoi       367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.46.650 PM      ON CPU                          367345
               3 NOT IN WAIT 2kzx34bhh7t17   18-NOV-16 10.49.47.660 PM      ON CPU                          367192
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.48.660 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.49.660 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.50.680 PM      direct path read                367345
               3 VALID       2kzx34bhh7t17   18-NOV-16 10.49.51.680 PM      log file switch (checkpoi       367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.52.680 PM      direct path read                367345
               3 UNKNOWN     2kzx34bhh7t17   18-NOV-16 10.49.53.690 PM      direct path read                367345


SQL> @objnm
Enter value for enter_obj_id: 367345

OWNER            OBJECT_ID OBJECT_NAME                   OBJECT_TYPE
--------------- ---------- ----------------------------- -----------
AIM_DBA             367345 SYS_LOB0000367344C00002$$     LOB


SQL> select TABLE_NAME,SEGMENT_NAME from user_lobs where SEGMENT_NAME='SYS_LOB0000367344C00002$$';

TABLE_NAME                SEGMENT_NAME
------------------------------------------------
TESTCLOB                 SYS_LOB0000367344C00002$$
                                                      




I also tried an export via Data Pump, to see whether the SecureFile data could be exported with parallelism, but it seems a worthless effort: two workers started and only one actually worked; the other just sat waiting. It might work with a partitioned table, but I leave that to you.

expdp username/password dumpfile=clobdump%u.dmp logfile=dumpfile.log tables=testclob exclude=statistics directory=DUMP parallel=4

Job: SYS_EXPORT_TABLE_01
  Operation: EXPORT
  Mode: TABLE
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 4
  Job Error Count: 0
  Dump File: /DUMP/clobdump01.dmp
    bytes written: 4,096
  Dump File: /DUMP/clobdump%u.dmp

Worker 1 Status:
  Process Name: DW00
  State: WORK WAITING

Worker 2 Status:
  Process Name: DW01
  State: EXECUTING
  Object Schema: AIM_DBA
  Object Name: TESTCLOB
  Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
  Completed Objects: 1
  Total Objects: 1
  Completed Rows: 13,102
  Worker Parallelism: 1

In my view the SecureFiles enhancement is very helpful!!

Hope you enjoyed!!!

Tuesday 26 April 2016

Oracle Data-Pump : Optimising LOB Export Performance



LOBs, as we all know, are used to store large, unstructured data such as video, audio and photo images. A LOB can hold gigabytes of data (the classic limit being 4 GB). They are similar to LONG or LONG RAW columns but differ from them in quite a few ways.

The kinds of data a LOB typically holds are:
Simple structured data
Complex structured data
Semi-structured data
Unstructured data

LOBs are effective, but they come with a performance impact in a few scenarios.

Lately there was a task involving a backup of a LOB table <non-partitioned> of around 41 GB, of which the LOB segment itself took around 33 GB. There are many options for backing up a table; the team started with the EXPDP approach, and it took a very long time, around 3 hours, which was not acceptable.

The LOB export won't use parallelism at all, so there was no point in specifying a parallel degree in the expdp command.

Since the activity was critical and the clock was ticking, I worked out a way to do it concurrently: run Data Pump in chunks and execute the chunks in parallel, so that each Data Pump session works on a small slice rather than the full volume. Running them concurrently reduces the elapsed time in any case.

Here is the very simple and effective script I designed, and to my surprise it reduced the elapsed time from 3 hours to 10-15 minutes.

What I did was logically divide the table into 10 chunks based on the rowid block number, using the DBMS_ROWID function; the chunks are internally balanced, i.e. the data is divided almost equally among them.

The snippet below launches 10 Data Pump threads, each exporting one chunk concurrently in the background. The result is 10 separate dumps, and the complete export took just 10-20 minutes.


#!/bin/bash
chunk=10
for ((i=0;i<=9;i++));
do
   # each thread exports only the rows whose block number mod 10 equals $i
   expdp USERNAME/Password@DB_NAME TABLES=LOB_TEST QUERY=LOB_TEST:\"where mod\(dbms_rowid.rowid_block_number\(rowid\)\, ${chunk}\) = ${i}\" directory=DMP dumpfile=lob_test_${i}.dmp logfile=log_test_${i}.log &
   echo $i
done

Since there are 10 separate dumps, the import can also run chunk by chunk, which reduces the time of the import activity as well; a sketch follows below.
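A matching import can run the chunks concurrently too — a rough sketch, assuming the same DMP directory object exists on the target and that LOB_TEST has already been created there (hence TABLE_EXISTS_ACTION=APPEND):

#!/bin/bash
for ((i=0;i<=9;i++));
do
   # each thread imports one of the chunk dumps into the existing table
   impdp USERNAME/Password@DB_NAME directory=DMP dumpfile=lob_test_${i}.dmp logfile=imp_lob_test_${i}.log tables=LOB_TEST table_exists_action=append &
done
wait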


Of course there are other ways to do it <in relation to expdp>:

a) DBMS_PARALLEL_EXECUTE, where we can define a task that generates rowid chunks and use those rowids during the export (see the sketch after this list).
b) If the table is partitioned, take the backup partition-wise in concurrent mode.
c) Use dbms_rowid.rowid_create.
d) Use SecureFiles for storing the LOB rather than BasicFiles.
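For option a), a minimal sketch of generating balanced rowid chunks with DBMS_PARALLEL_EXECUTE (the task name and chunk size are illustrative); each START_ROWID/END_ROWID pair can then drive one expdp thread via a "where rowid between ..." QUERY clause:

BEGIN
  dbms_parallel_execute.create_task(task_name => 'LOB_TEST_EXPORT');
  dbms_parallel_execute.create_chunks_by_rowid(
      task_name   => 'LOB_TEST_EXPORT',
      table_owner => user,
      table_name  => 'LOB_TEST',
      by_row      => FALSE,      -- chunk by blocks, not rows
      chunk_size  => 10000);
END;
/

-- each row below is one chunk; feed START_ROWID/END_ROWID to one export thread
SELECT chunk_id, start_rowid, end_rowid
FROM   user_parallel_execute_chunks
WHERE  task_name = 'LOB_TEST_EXPORT'
ORDER  BY chunk_id;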

Tuesday 19 April 2016

CLUSTERING FACTOR AND TABLE_CACHED_BLOCKS: ORACLE 12C FEATURE


Week off; somehow I managed to find time to write about 12c behaviour. The core idea was to gather performance artefacts for concurrent-insert scenarios, but to my surprise I ended up exploring the clustering factor and its internals instead. I tested this on 12.1.0.1.0.

I created one table and, to start with, simply ran regular inserts from multiple sessions to see how it goes.

create table TEMP_DATA (id number, name varchar2(30)) tablespace POOL_DATA;
  
create sequence TEMP_DATA_seq order;

CREATE OR REPLACE PROCEDURE temp_data_proc AS
BEGIN
FOR i IN 1..100000 LOOP
INSERT INTO TEMP_DATA VALUES (TEMP_DATA_seq.NEXTVAL, 'VINEET');
COMMIT;
END LOOP;
END;
/

Now, from 2 different sessions, I triggered the procedure to load the table.

Sess1: exec temp_data_proc;

Sess2: exec temp_data_proc;

Then created index

create index IND_IXX on TEMP_DATA(id);

Gathered the stats (along the lines of the sketch below) and was really surprised to see the clustering factor of the index.
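(For reference, the stats gathering was along these lines — a minimal sketch, the exact parameters may have differed:)

exec dbms_stats.gather_table_stats(ownname => user, tabname => 'TEMP_DATA', cascade => TRUE);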

SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
FROM user_tables t, user_indexes i
WHERE t.table_name = i.table_name AND i.index_name='IND_IXX';

TABLE_NAME                     INDEX_NAME                         BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------------------------ ------------------------------ ---------- ---------- -----------------
TEMP_DATA                      IND_IXX                               496     200000            116653

Second approach: dropped and recreated all the objects as above and ran the insert procedure from a single session, one run after the other, i.e. executed temp_data_proc twice in the same session. The clustering factor here is far lower.


SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
FROM user_tables t, user_indexes i
WHERE t.table_name = i.table_name AND i.index_name='IND_IXX';

TABLE_NAME                     INDEX_NAME                         BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------------------------ ------------------------------ ---------- ---------- -----------------
TEMP_DATA                      IND_IXX                               496     200000               441  <-- drastically reduced

Drilling down further: running the inserts from different sessions can be a problem because each session works on a slightly different freelist, so they may pick different blocks. Within a single session one freelist is active at a time, and once it is exhausted the session moves on to the next, which effectively means the blocks are filled, and later visited, in order.

So when the index is created after multiple sessions have been inserting, Oracle bounces back and forth between table blocks while computing the clustering factor, and hence it comes out high. With the same volume inserted from a single session the blocks are visited in order, and the clustering factor comes out low.

An index entry stores the key value and the rowid; so how does the clustering factor get incremented?

A rowid encodes the file and block (among other things). While computing the clustering factor, Oracle walks the index in key order and compares each entry's table block with the previous entry's block; if it is different, the counter is incremented by 1, and this goes on entry after entry.
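To illustrate the walk, here is a rough approximation of that calculation in plain SQL (a sketch only: it ignores relative file numbers and assumes the index order follows ID):

SELECT COUNT(*) AS approx_clustering_factor
FROM  (SELECT dbms_rowid.rowid_block_number(rowid) AS blk,
              LAG(dbms_rowid.rowid_block_number(rowid)) OVER (ORDER BY id) AS prev_blk
       FROM   temp_data)
WHERE  prev_blk IS NULL OR blk <> prev_blk;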

So this large clustering factor looks absurd as far as the concurrent-session scenario is concerned. To address it, 12c introduced TABLE_CACHED_BLOCKS. This preference is set via dbms_stats.set_table_prefs and controls how the clustering factor is computed when dbms_stats runs. It is not present in 11.1.0 through 11.2.0.3 (I am not sure about 11.2.0.4); in 11g it generally only shows up after applying Oracle patches.

TABLE_CACHED_BLOCKS: the default is 1 and the maximum is 255. For my test I used 255, so that there is very little scope for the clustering factor to be overstated.

TEST case : with TABLE_CACHED_BLOCKS

Drop table  TEMP_DATA;

Drop  sequence TEMP_DATA_seq;

create table TEMP_DATA (id number, name varchar2(30)) tablespace POOL_DATA;
 
  
create sequence TEMP_DATA_seq order;

CREATE OR REPLACE PROCEDURE temp_data_proc AS
BEGIN
FOR i IN 1..100000 LOOP
INSERT INTO TEMP_DATA VALUES (TEMP_DATA_seq.NEXTVAL, 'VINEET');
COMMIT;
END LOOP;
END;
/

exec dbms_stats.set_table_prefs(ownname=>user, tabname=>'TEMP_DATA',pname=>'TABLE_CACHED_BLOCKS', pvalue=>255);
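A quick check that the preference actually took effect (just a verification query):

SQL> select dbms_stats.get_prefs('TABLE_CACHED_BLOCKS', user, 'TEMP_DATA') from dual;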

Next, from 2 different sessions, run the procedure concurrently.

Sess1: exec temp_data_proc;

Sess2: exec temp_data_proc;

Then created index again

create index IND_IXX on TEMP_DATA(id);

Gathered the stats again, and the clustering factor is now quite similar to the single-session case, i.e. close to optimal.

SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
FROM user_tables t, user_indexes i
WHERE t.table_name = i.table_name AND i.index_name='IND_IXX';


TABLE_NAME                     INDEX_NAME                         BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------------------------ ------------------------------ ---------- ---------- -----------------
TEMP_DATA                      IND_IXX                               496     200000               468


I hope you liked it..

Enjoy Learning....



Thursday 14 January 2016

ASM DISKGROUP RENAMING: OCR AND VOTING DISK

It's been a while since I shared something related to core competency, so here is a snippet: fighting jet lag, I somehow managed to brush up some core skills on an ASM 11gR2 RAC machine.

The whole idea was to rename the datafile disk group <+DATA>, and I started working on it; most of you have probably faced this already, so pardon me and let me know if you see it some other way. Along the way I came across a flaw in my existing virtual machine design, and this flaw did not allow me to rename the disk group.

Version : 11.2.0.2
Linux: 5.8
Database: 2 Nodes RAC SYSTEM.
Storage: ASM with UDEV configuration.

Here are the steps that I performed;

1)      Switched to the clusterware environment.
2)      Gathered all the information about the instances on the nodes and the ASM configuration, using srvctl:

[oracle@rac1 bin]$ ./srvctl config database -d RAC
Database unique name: RAC
Database name: RAC
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/RAC/spfileRAC.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RAC
Database instances: RAC1,RAC2
Disk Groups: DATA
Services:
Database is administrator managed

[oracle@rac1 bin]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

Just to add: even if you are managing a non-RAC database, in 11g it is recommended to use srvctl commands to start and stop the database. But if the database was not created using DBCA or upgraded using DBUA, it will not be registered in the Oracle Restart configuration automatically.

Oracle Restart monitors the registered services and restarts them after an abnormal termination, and can also stop and start the databases during a server reboot. Please read the Oracle documentation to learn more about Oracle Restart; a registration sketch follows below.
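If the database was created manually, registering it with srvctl is along these lines — a sketch only, reusing the names from the configuration output above:

srvctl add database -d RAC -o /u01/app/oracle/product/11.2.0/db_1 -p +DATA/RAC/spfileRAC.ora
srvctl start database -d RAC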

3)      Prepared the database configuration: at this stage I only collected the statements, I did not execute them.

Since I am changing the disk group, on the DB side I have to rename the files too, pointing them at the new disk group <here DGDATA1>:

-----Log file Renaming--------
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_2.264.892205005' TO '+DGDATA1/rac/onlinelog/group_2.264.892205005';
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_2.265.892205007' TO '+DGDATA1/rac/onlinelog/group_2.265.892205007';
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_1.262.892205003' TO '+DGDATA1/rac/onlinelog/group_1.262.892205003';
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_1.263.892205005' TO '+DGDATA1/rac/onlinelog/group_1.263.892205005';
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_3.269.892205175' TO '+DGDATA1/rac/onlinelog/group_3.269.892205175';
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_3.270.892205175' TO '+DGDATA1/rac/onlinelog/group_3.270.892205175';
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_4.271.892205175' TO '+DGDATA1/rac/onlinelog/group_4.271.892205175';
ALTER DATABASE RENAME FILE '+DATA/rac/onlinelog/group_4.272.892205177' TO '+DGDATA1/rac/onlinelog/group_4.272.892205177';

-----Data file Renaming-------
ALTER DATABASE RENAME FILE '+DATA/rac/datafile/system.256.892204895' TO '+DGDATA1/rac/datafile/system.256.892204895';
ALTER DATABASE RENAME FILE '+DATA/rac/datafile/sysaux.257.892204897' TO '+DGDATA1/rac/datafile/sysaux.257.892204897';
ALTER DATABASE RENAME FILE '+DATA/rac/datafile/undotbs1.258.892204897' TO '+DGDATA1/rac/datafile/undotbs1.258.892204897';
ALTER DATABASE RENAME FILE '+DATA/rac/datafile/users.259.892204899' TO '+DGDATA1/rac/datafile/users.259.892204899';
ALTER DATABASE RENAME FILE '+DATA/rac/datafile/example.267.892205013' TO '+DGDATA1/rac/datafile/example.267.892205013';
ALTER DATABASE RENAME FILE '+DATA/rac/datafile/undotbs2.268.892205125' TO '+DGDATA1/rac/datafile/undotbs2.268.892205125';

-----Temp file Renaming-------
ALTER DATABASE RENAME FILE '+DATA/rac/tempfile/temp.266.892205009' TO '+DGDATA1/rac/tempfile/temp.266.892205009';

4)      Prepare a new initialization parameter file (from a backup pfile) and replace the following parameters, plus any other parameter containing the old disk group name:
.control_files
.db_create_file_dest
.db_create_online_log_dest_1
.db_create_online_log_dest_2
.db_recovery_file_dest
5)      Stop the database (switch to the clusterware environment first):

srvctl stop database -d RAC

Disable the automatic restart of the stack so that it does not restart any component ('crsctl status resource' can be used to check the status of resources). As you can see, I have stopped the database, but the ASM disk group DATA is still mounted.

./crsctl disable has

Now came the pain point for me, and here I realized the flaw in my RAC ASM setup.

A prerequisite for renaming is that the disk group must be dismounted, so I used ASMCMD to dismount the +DATA disk group.

ASMCMD> umount DATA
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount (DBD ERROR: OCIStmtExecute)

This led me to a new corner of ASM; I checked which clients are connected to the ASM instance.


SQL> select instance_name,db_name,status  from v$asm_client;

INSTANCE_NAME                                                    DB_NAME  STATUS
---------------------------------------------------------------- -------- ------------
+ASM1                                                            +ASM     CONNECTED

This did not give me much direction, just that the ASM instance itself is connected, probably because it is using an spfile stored there, so that needs to be moved in any case; but moving it also means bouncing ASM, which the situation was not letting me do.

So what exactly was blocking the dismount?

I went back to asmcmd and used the lsof command, which tells you which files are in use.

[oracle@rac1 bin]$ asmcmd lsof
DB_Name  Instance_Name  Path                 
+ASM     +ASM1          +data.255.4294967295

So file number 255 is the problem, but what is this file, and why is it preventing the disk group from being dismounted? This is where the v$asm_file dictionary view comes into play.


SQL> select group_number, file_number, bytes, type from v$asm_file;

GROUP_NUMBER FILE_NUMBER      BYTES TYPE
------------ ----------- ---------- ----------------------------------------------------------------
           1         255  272756736 OCRFILE


So the blunder was: I had created only one disk group, +DATA, and it was storing all the clusterware and DB files <spfile, OCR, voting disk, datafiles, redo logs etc.>.

Now it was clear what the issue was, but solving it was another challenge. I checked the documentation and found the command for replacing the voting disk/OCR; in 11gR2 you can run it while the cluster is up and running, but for replacing or adding you need a new disk group.

My virtual box was self-contained, with only 3 shared disks <asm-disk1,2,3>, all of them part of the normal-redundancy +DATA disk group. And as I mentioned, that disk group was storing the OCR and voting disk.

So I decided to add a new shared disk <block device> using UDEV and probed a new partition as /dev/sde1 (there is a procedure to turn a block device into a discoverable ASM disk; you can find it on the Internet, so I am not covering it here). Once done, I created an external-redundancy disk group so that I could move the OCR/voting disk/spfile into it.

So I logged on to the ASM instance <node 1> and executed the following.

CREATE DISKGROUP DGDATA EXTERNAL REDUNDANCY
DISK '/dev/asm-disk4'
  ATTRIBUTE 'au_size'='1M';

Don't forget to mount the new disk group on NODE 2 as well, as Oracle won't mount it there automatically. If you forget, you may not be able to move the OCR to the other disk group and it will fail with <PROT-28: Cannot delete or replace the only configured Oracle Cluster Registry location>.

Anyway, first let's work on the voting disk :)


Once the disk group was created I used the command below to replace the voting disk, and failed again :(


./crsctl replace votedisk +DGDATA 


I checked the alert log and saw that ASM requires a disk group compatibility of 11.2 or higher.

NOTE: diskgroup resource ora.DGDATA.dg is online
Fri Jan 15 01:14:46 2016
NOTE: updated gpnp profile ASM diskstring: /dev/asm*
Fri Jan 15 01:14:46 2016
NOTE: Creating voting files in diskgroup DGDATA
Fri Jan 15 01:14:46 2016
NOTE: Voting File refresh pending for group 2/0x15f8a1da (DGDATA)
NOTE: Attempting voting file creation in diskgroup DGDATA
ERROR: Voting file allocation failed for group DGDATA
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_5428.trc:
ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher
"alert_+ASM1.log" 3090L, 125974C  


And indeed the new disk group had a compatibility of 10.1. Why? This happens when you create a disk group from the ASM instance <sqlplus / as sysasm> without providing the compatibility attributes; the ASM instance then allocates the 10.1 defaults even for 11g databases. If you create the disk group via the ASMCMD mkdg command instead, it sets a sensible compatibility automatically, unlike the sqlplus route. So it's up to you, whichever suits you :)
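For reference, the ASMCMD route would look roughly like this (a sketch only; the inline XML can also be placed in a config file):

ASMCMD> mkdg '<dg name="DGDATA" redundancy="external"><dsk string="/dev/asm-disk4"/><a name="compatible.asm" value="11.2"/><a name="compatible.rdbms" value="11.2"/></dg>'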

SQL> select name,COMPATIBILITY,DATABASE_COMPATIBILITY from V$ASM_DISKGROUP where name='DGDATA';


NAME                           COMPATIBILITY                                                DATABASE_COMPATIBILITY
------------------------------ ------------------------------------------------------------ ------------------------------------------------------------
DGDATA                         10.1.0.0.0                                                   10.1.0.0.0

Hence I altered the disk group attributes manually (COMPATIBLE.ASM has to be raised first, since COMPATIBLE.RDBMS cannot exceed it).

SQL> alter diskgroup dgdata set ATTRIBUTE 'COMPATIBLE.ASM'='11.2';

Diskgroup altered.

SQL> alter diskgroup dgdata set ATTRIBUTE 'COMPATIBLE.RDBMS'='11.2';

Diskgroup altered.

Tried the replace votedisk command again and, whoa, successful :)


./crsctl replace votedisk +DGDATA

Successful addition of voting disk 0ebedd9fff6e4f38bf8fd7a759cbbf0c.
Successful deletion of voting disk 718d42b5bb834f62bf97ca62131fe296.
Successful deletion of voting disk 89c65fbddc6a4f3cbf82c445bf67e116.
Successful deletion of voting disk 318cf7d9d79a4fd7bf62ebe2c930402d.
Successfully replaced voting disk group with +DGDATA.

Crosschecked at the clusterware level <-- halfway there:

./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   0ebedd9fff6e4f38bf8fd7a759cbbf0c (/dev/asm-disk4) [DGDATA]


For the OCR, first check where it currently points:

./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2688
         Available space (kbytes) :     259432
         ID                       : 1171687482
         Device/File Name         :      +DATA   <-- see, the OCR is pointing to +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded


Run the command below to add the OCR to DGDATA:


./ocrconfig -add +DGDATA      

Check the status again; you will see the OCR is now attached to both disk groups, so we have to delete it from +DATA, which is exactly what I wanted.

./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2688
         Available space (kbytes) :     259432
         ID                       : 1171687482
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         :    +DGDATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

                                  
Now to delete the OCR from +DATA


[root@rac1 bin]# ./ocrconfig -delete +DATA

[root@rac1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2688
         Available space (kbytes) :     259432
         ID                       : 1171687482
         Device/File Name         :    +DGDATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded.

So I finally managed to put the OCR on +DGDATA; now I could think about dismounting the DATA disk group, but it failed yet again. The reason is clear: the ASM instance was started from an spfile that lives in +DATA, so that has to be moved too.

ASMCMD> umount DATA
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount (DBD ERROR: OCIStmtExecute)
                                                             
This is done by first creating a pfile from the spfile, and then creating a new spfile in the new disk group from that pfile.


SQL> create pfile from spfile;

SQL> create spfile='+DGDATA' from pfile;


To be able to dismount the disk group, you will need to restart the cluster on both nodes, since the ASM instance is still holding the old spfile open. Run the following commands on both nodes:

./crsctl stop cluster

./crsctl start cluster

Verify that the OCR location points to the correct disk group on all nodes before starting the cluster.


[oracle@rac1 bin]$ cat /etc/oracle/ocr.loc

#Device/file +DATA getting replaced by device +DGDATA
ocrconfig_loc=+DGDATA

And finally, trying to umount DATA once more:

ASMCMD> umount DATA        <-- +DATA is now dismounted. Managed to do it; quite a learning experience :)

ASMCMD>

The rest, the datafile renaming and the actual disk group rename, is now straightforward, so I will leave it to you; a rough sketch of the rename step follows below.
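For the disk group rename itself, the renamedg utility does the job once +DATA is dismounted on all nodes — roughly along these lines (a sketch only; the disk string matches the UDEV setup above, and afterwards you apply the file renames prepared in step 3 plus the updated pfile/spfile):

renamedg phase=both dgname=DATA newdgname=DGDATA1 asm_diskstring='/dev/asm*' verbose=true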

Hope you enjoyed it!!!


Enjoy Learning!!!