Wednesday, November 29, 2006

Size of ASM files

Whilst testing compression of archivelogs on ASM, I was looking for ways to find the size of an archivelog stored in ASM.

You can:

• Use the bytes column in v$asm_file (see the query sketch just below).
• Use the asmcmd command-line utility and its du command (more on this after the listing below).
• Use an API called dbms_diskgroup, as shown at the end of this post.
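
For the first option, remember that v$asm_file (as far as I recall) is only populated when queried on the ASM instance, and it identifies files by number rather than name, so a join to v$asm_alias is needed. A minimal sketch:

select a.name, f.blocks, f.bytes, f.space
  from v$asm_file f, v$asm_alias a
 where f.group_number = a.group_number
   and f.file_number  = a.file_number
   and f.type = 'ARCHIVELOG';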

The archivelog of interest here is thread 2, sequence 19:

SQL> select thread#,sequence#,blocks,compressed from v$archived_log where thread#=2 and sequence#=19;


   THREAD#  SEQUENCE#     BLOCKS COM
---------- ---------- ---------- ---
         2         19       3794 YES

[oracle@dbrac2 ~]$ asmcmd
ASMCMD> cd ARCHIVELOG
ASMCMD> ls
2006_11_28/
ASMCMD> cd 2006_11_28
ASMCMD> ls
thread_1_seq_10.289.607731765
thread_1_seq_11.293.607732151
thread_1_seq_12.296.607732301
thread_1_seq_8.286.607731393
thread_1_seq_9.287.607731413
thread_2_seq_13.288.607731413
thread_2_seq_14.290.607731771
thread_2_seq_15.291.607732073
thread_2_seq_16.292.607732151
thread_2_seq_17.294.607732277
thread_2_seq_18.295.607732301
thread_2_seq_19.297.607732423
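
For the second option, du reports the space used beneath a directory while ls -s adds per-file space columns (assuming the 10.2 asmcmd options; output omitted here):

ASMCMD> du
ASMCMD> ls -s thread_2_seq_19.297.607732423

And here is the third option, using the dbms_diskgroup API: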

set serveroutput on
declare
  vfilename varchar2(4000);
  vfilesize number;
  vlbks     number;
  vfiletype number;
begin
  dbms_output.enable(5000);
  -- getfileattr returns the file type, size (in logical blocks) and logical block size
  vfilename := '&file_name';
  dbms_diskgroup.getfileattr(vfilename, vfiletype, vfilesize, vlbks);
  dbms_output.put_line('File: '||vfilename);
  dbms_output.put_line('Size (in logical blocks): '||vfilesize);
  dbms_output.put_line('Logical block size: '||vlbks);
  dbms_output.put_line('File type: '||vfiletype);
end;
/


Enter value for file_name: +ORADATA1/TEST/ARCHIVELOG/2006_11_28/thread_2_seq_19.297.607732423
old 9: vfilename := '&file_name';
new 9: vfilename := '+ORADATA1/TEST/ARCHIVELOG/2006_11_28/thread_2_seq_19.297.607732423';
File: +ORADATA1/TEST/ARCHIVELOG/2006_11_28/thread_2_seq_19.297.607732423
Size (in logical blocks): 3794
Logical block size: 512
File type: 4

PL/SQL procedure successfully completed.
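
Note that getfileattr reports the size in logical blocks, so the size in bytes works out to 3794 × 512 = 1,942,528, which ties back to the 3794 blocks reported by v$archived_log above.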

Tuesday, November 28, 2006

Recovery using compressed archivelogs

A couple of days ago I mentioned an undocumented feature whereby it is possible to generate compressed archivelogs.

I tested a couple of scenarios and managed to recover my database using a mixture of compressed and uncompressed archivelogs.

I guess this raises the question of why this feature is not documented.

Sunday, November 26, 2006

Archivelog compression?

Apparently there is a command that can generate compressed archivelogs, which goes something like this.

SQL> ALTER DATABASE ARCHIVELOG COMPRESS enable;

Database altered.

I have skimmed through the documentation and can't seem to find any reference to it, but the view v$archived_log does have a column called compressed which, according to the docs, is:

COMPRESSED VARCHAR2(3) This column is reserved for internal use only

I am assuming this is related to the underscore parameter _log_archive_compress_enable, the description of which is below. Changing the value of this parameter, however, has not prevented the command from working.

SQL> REM Formatting columns
SQL> set lines 9999
SQL> col indx format 9999
SQL> col inst_id heading "INST" format 9999
SQL> col ksppinm heading "NAME" format a30
SQL> col ksppdesc heading "DESC" format a60
SQL> col ksppstvl heading "CURR VAL" format a15
SQL> col ksppstdf heading "DEFAULT VAL" format a15

SQL> select v.indx,v.inst_id,ksppinm,ksppstvl,ksppstdf,ksppdesc from x$ksppi i ,
x$ksppcv v where i.indx=v.indx and ksppinm like '_log_archive_compress_enable';

 INDX  INST NAME                           CURR VAL        DEFAULT VAL     DESC
----- ----- ------------------------------ --------------- --------------- ------------------------------------------
  473     1 _log_archive_compress_enable   FALSE           TRUE            Bypass database rules for enabling archiv

1 row selected.

So by default archivelogs are not compressed.
SQL> select distinct compressed from v$archived_log;

COM
---
NO

So I used this command, generated a few archivelogs and got the following message in one of the trace files.

Archivelog compression complete.
Input: 3139072 bytes Output: 810830 bytes
Compression Performance: 74.17 percent or 2.07 bits per byte
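
The arithmetic checks out: 810830/3139072 ≈ 0.258, i.e. a 74.2 percent saving, and 0.258 × 8 ≈ 2.07 bits per byte.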

Let's verify this.
SQL> alter database archivelog compress disable;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> create table scott.obj$ as select * from dba_objects;

Table created.

SQL> alter system switch logfile;

System altered.


SQL> alter database archivelog compress enable;

Database altered.


SQL> create table frego.obj$ as select * from dba_objects;

Table created.

SQL> alter system switch logfile;

System altered.

SQL> select sequence#,blocks,compressed from v$archived_log where sequence#>225;

SEQUENCE# BLOCKS COM
---------- ---------- ---
226 3820 NO
227 1 NO
228 11579 NO
229 83 YES
230 8 YES
231 1911 YES

6 rows selected.

If you compare the two logs of interest (228 and 231; there were other commands in the interim), you find a compression performance similar to the one indicated above: the uncompressed log 228 used 11,579 blocks, while the compressed log 231, produced by an equivalent CTAS, used only 1,911, a reduction of roughly 1 - 1911/11579 ≈ 83%.

It would be wise to check if I can recover using compressed archivelogs though.

P.S. This is an undocumented feature (AFAIK) and hence I do not advise anyone to use it in a production environment.

Block Change Tracking

If anybody is using the new 10g backup feature of incremental backups with a block change tracking file, or is thinking of doing so, I sincerely recommend reading Alex Gorbachev's presentation.

Tuesday, November 21, 2006

Converting database at target

There was a valid response to yesterday's post from The Human Fly, who suggested that this method was not feasible for large databases because of the need for a staging area to hold the converted files.
Well, according to Oracle, disk space is cheap :-) and I know of a lot of bugs that have been closed on this premise (where the workaround has meant additional disk space).

But seriously, you can get around this:

• You can take a backup of the source database to tape and then overwrite the datafile location (assuming you don't need the source database anymore):

CONVERT DATABASE NEW DATABASE 'USYD'
transport script 'C:\temp\newtransportscript'
to platform 'Linux 64-bit for AMD'
db_file_name_convert 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD\' 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD\'
;

It is also possible to convert the database at the target to free up CPU on the source. You need to run the following on the source database:

CONVERT DATABASE ON TARGET PLATFORM
CONVERT SCRIPT 'C:\temp\convertscript.rman'
TRANSPORT SCRIPT 'C:\temp\transport_at_target.sql'
new database 'USYD'
FORMAT 'C:\temp\target\%U';

This will create two scripts:

• convertscript.rman, which contains a convert command for each datafile of the database to be converted.
• transport_at_target.sql, which contains the syntax to create the controlfile and start the database at the target.

LINUX64>>startup nomount pfile='/u01/oradata/USYD/initUSYD.ora'
ORACLE instance started.

Total System Global Area 171966464 bytes
Fixed Size 2069712 bytes
Variable Size 113249072 bytes
Database Buffers 50331648 bytes
Redo Buffers 6316032 bytes

LINUX64>>CREATE CONTROLFILE REUSE SET DATABASE "USYD" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
  LOGFILE
    GROUP 1 '/u01/oradata/USYD/ARCH_D-USYD_ID-768471617_S-212_T-1_A-600609988_00I2I4PR' SIZE 50M,
    GROUP 2 '/u01/oradata/USYD/ARCH_D-USYD_ID-768471617_S-213_T-1_A-600609988_00I2I4PR' SIZE 50M,
    GROUP 3 '/u01/oradata/USYD/ARCH_D-USYD_ID-768471617_S-211_T-1_A-600609988_00I2I4PR' SIZE 50M
  DATAFILE
    '/u01/oradata/USYD/SYSTEM01.DBF',
    '/u01/oradata/USYD/UNDOTBS01.DBF',
    '/u01/oradata/USYD/SYSAUX01.DBF',
    '/u01/oradata/USYD/USERS01.DBF'
  CHARACTER SET WE8MSWIN1252;

Control file created.

Now convert the files using the script convertscript.rman:

eiger-> rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Wed Nov 22 14:32:33 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: USYD (DBID=768471617, not open)

RUN {

CONVERT DATAFILE '/u01/oradata/USYD/SYSTEM01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/oradata/USYD/NEW/SYSTEM01.DBF';


CONVERT DATAFILE '/u01/oradata/USYD/SYSAUX01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/oradata/USYD/NEW/SYSAUX01.DBF';


CONVERT DATAFILE '/u01/oradata/USYD/UNDOTBS01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/oradata/USYD/NEW/UNDOTBS01.DBF';


CONVERT DATAFILE '/u01/oradata/USYD/USERS01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/oradata/USYD/NEW/USERS01.DBF';

}


Starting backup at 22-NOV-06
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input filename=/u01/oradata/USYD/SYSTEM01.DBF
converted datafile=/u01/oradata/USYD/NEW/SYSTEM01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:46
Finished backup at 22-NOV-06


Starting backup at 22-NOV-06
channel ORA_DISK_1: sid=39 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input filename=/u01/oradata/USYD/SYSAUX01.DBF
converted datafile=/u01/oradata/USYD/NEW/SYSAUX01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:02:17
Finished backup at 22-NOV-06

Starting backup at 22-NOV-06
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input filename=/u01/oradata/USYD/UNDOTBS01.DBF
converted datafile=/u01/oradata/USYD/NEW/UNDOTBS01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:02:48
Finished backup at 22-NOV-06


Starting backup at 22-NOV-06
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input filename=/u01/oradata/USYD/USERS01.DBF
converted datafile=/u01/oradata/USYD/NEW/USERS01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:07
Finished backup at 22-NOV-06


Unfortunately it appears that the file cannot be overwritten in place, and hence a staging area is required after all; this is what happened when I tried:

RMAN> CONVERT DATAFILE '/u01/oradata/USYD/SYSTEM01.DBF'
FROM PLATFORM 'Microsoft Windows IA (32-bit)'
FORMAT '/u01/oradata/USYD/SYSTEM01.DBF';

Starting backup at 22-NOV-06
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input filename=/u01/oradata/USYD/SYSTEM01.DBF
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 11/22/2006 14:35:00
ORA-19635: input and output filenames are identical: /u01/oradata/USYD/SYSTEM01.DBF


You then need to move the files from the staging area into place and perform an OPEN RESETLOGS to start the database:

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

eiger-> cd USYD/N*
eiger-> ls
SYSAUX01.DBF SYSTEM01.DBF UNDOTBS01.DBF USERS01.DBF
eiger-> pwd
/u01/oradata/USYD/NEW
eiger-> mv * ../

SQL> startup mount pfile='/u01/oradata/USYD/initUSYD.ora'
ORACLE instance started.

Total System Global Area 171966464 bytes
Fixed Size 2069712 bytes
Variable Size 113249072 bytes
Database Buffers 50331648 bytes
Redo Buffers 6316032 bytes
Database mounted.

SQL> recover database using BACKUP CONTROLFILE until cancel;
ORA-00279: change 6936865 generated at 11/22/2006 10:13:25 needed for thread 1
ORA-00289: suggestion :
/u00/app/oracle/product/10.2.0/dbs/arch1_219_600609988.dbf
ORA-00280: change 6936865 for thread 1 is in sequence #219


Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.

SQL> alter database open resetlogs;

Database altered.

Monday, November 20, 2006

Performing Cross-Platform Database Transport

Starting with 10g R2, it is possible to transport an entire database (not just a few tablespaces) using the RMAN convert database command.

In the scenario below we are converting from a 32-bit Windows system to a 64-bit (x86-64) Linux system.

First we check whether both platforms (source and destination) have the same endianness, using the dbms_tdb package.
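
You can also list every platform that shares the endian format of the source database, i.e. the valid targets for CONVERT DATABASE, with a quick query (the view exists as of 10g R2):

SQL> select platform_name, endian_format from v$db_transportable_platform;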

The database needs to be open in read-only mode for the following procedure to work.

WINDOWS>>set serveroutput on
WINDOWS>>declare
  db_ready boolean;
begin
  db_ready := dbms_tdb.check_db('Linux 64-bit for AMD', dbms_tdb.skip_readonly);
end;
/

PL/SQL procedure successfully completed.

If they did not have the same endianness we would have got a message like the one shown below

WINDOWS>>declare
  db_ready boolean;
begin
  db_ready := dbms_tdb.check_db('Solaris[tm] OE (64-bit)', dbms_tdb.skip_readonly);
end;
/

The specified target platform name 'Solaris[tm] OE (64-bit)' is invalid
or the target platform is not transportable.

PL/SQL procedure successfully completed.

Next we need to identify any external objects which will not be transported.
These include external tables, directories, and BFILEs.

WINDOWS>>declare
  external boolean;
begin
  /* the value of external is ignored, but with SERVEROUTPUT set to ON
     dbms_tdb.check_external displays a report of external objects
     on the console */
  external := dbms_tdb.check_external;
end;
/

The following directories exist in the database:
SYS.WORK_DIR, SYS.ADMIN_DIR, SYS.DATA_PUMP_DIR

RMAN> CONVERT DATABASE NEW DATABASE 'USYD'
2> transport script 'C:\temp\transportscript'
3> to platform 'Linux 64-bit for AMD'
4> db_file_name_convert 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD\' 'C:\temp\USYD\'
5> ;

In the above command, C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD is the source location of the datafiles and C:\temp\USYD\ is the folder in which the files will be placed after they are converted. You can open the database in read-write mode again once it has been converted.

Starting convert at 16/NOV/06
using channel ORA_DISK_1

Directory SYS.WORK_DIR found in the database
Directory SYS.ADMIN_DIR found in the database
Directory SYS.DATA_PUMP_DIR found in the database

User SYS with SYSDBA and SYSOPER privilege found in password file
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00001 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD\SYSTEM01.DBF
converted datafile=C:\TEMP\USYD\SYSTEM01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00003 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD\SYSAUX01.DBF
converted datafile=C:\TEMP\USYD\SYSAUX01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:27
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00002 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD\UNDOTBS01.DBF
converted datafile=C:\TEMP\USYD\UNDOTBS01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\USYD\USERS01.DBF
converted datafile=C:\TEMP\USYD\USERS01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:08
Run SQL script C:\TEMP\TRANSPORTSCRIPT on the target platform to create database
Edit init.ora file C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\INIT_00I2I4PR_1_0.ORA. This pfile will be used to create the database on the target platform
To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished backup at 16/NOV/06
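
As an aside, the DBNEWID step mentioned in that output is run from the OS prompt against a mounted (not open) database, and it will prompt for the SYS password. A minimal sketch, which changes only the DBID, not the database name:

nid TARGET=SYS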


Since the source is a Windows machine and the destination is 64-bit Linux, you have to change the locations of the files in the transport script that was created.

Create an init.ora file called initUSYD.ora with the following parameters:

control_files = "/u01/oradata/USYD/cf_D-USYD_id-1778429277_00gb9u2s"
instance_name = "USYD"
plsql_native_library_dir = "/tmp/convertdb/plsqlnld1"
db_name = "USYD"
sga_max_size = 5000000
compatible = '10.2.0'

eiger-> sqlplus

SQL*Plus: Release 10.2.0.2.0 - Production on Mon Nov 20 17:10:28 2006

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup nomount pfile='/u01/oradata/USYD/initUSYD.ora'
ORACLE instance started.

Total System Global Area 171966464 bytes
Fixed Size 2069712 bytes
Variable Size 113249072 bytes
Database Buffers 50331648 bytes
Redo Buffers 6316032 bytes

Cut and paste the commands from the transportscript created.

SQL> CREATE CONTROLFILE REUSE SET DATABASE "USYD" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
  LOGFILE
    GROUP 1 '/u01/oradata/USYD/ARCH_D-USYD_ID-768471617_S-212_T-1_A-600609988_00I2I4PR' SIZE 50M,
    GROUP 2 '/u01/oradata/USYD/ARCH_D-USYD_ID-768471617_S-213_T-1_A-600609988_00I2I4PR' SIZE 50M,
    GROUP 3 '/u01/oradata/USYD/ARCH_D-USYD_ID-768471617_S-211_T-1_A-600609988_00I2I4PR' SIZE 50M
  DATAFILE
    '/u01/oradata/USYD/SYSTEM01.DBF',
    '/u01/oradata/USYD/UNDOTBS01.DBF',
    '/u01/oradata/USYD/SYSAUX01.DBF',
    '/u01/oradata/USYD/USERS01.DBF'
  CHARACTER SET WE8MSWIN1252;

Control file created.

SQL> set numwidth 20
SQL> select distinct checkpoint_change# from v$datafile;

CHECKPOINT_CHANGE#
--------------------
6631021

SQL> select distinct checkpoint_change# from v$datafile_header;

CHECKPOINT_CHANGE#
--------------------
6631021

SQL> ALTER DATABASE OPEN RESETLOGS;

Database altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/USYD/DATA_D-USYD_I-768471617_TS-TEMP_FNO-1_00I2I4PR' SIZE 27262976 AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

Tuesday, November 14, 2006

Searching for X$ tables

You can query the names of X$ tables from X$KQFTA.

I was looking for the X$ tables related to Dynamic Remastering and found them as follows:

SQL> select KQFTANAM from x$kqfta where KQFTANAM like 'X$KJDR%';

KQFTANAM
------------------------------
X$KJDRHV
X$KJDRPCMHV
X$KJDRPCMPF
X$KJDRMAFNSTATS
X$KJDRMHVSTATS
X$KJDRMREQ

6 rows selected.

Similarly, to look for dictionary tables, I query the public synonym DICT, which is based on SYS.DICTIONARY.

For example:
SQL> select table_name from dict where table_name like '%FILEST%';

TABLE_NAME
------------------------------
DBA_HIST_FILESTATXS
GV$FILESTAT
V$FILESTAT

Monday, November 13, 2006

Diagnosing unshared SQL in 10g

One of the ways to increase the scalability and throughput of an application is to ensure that SQL statements are shared, reducing the number of child cursors per statement. In versions prior to 10g the view V$SQL_SHARED_CURSOR would aid in diagnosing why SQL statements were not being shared. In 10g there is a new event called CURSORTRACE which helps with the same task.
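
Before drilling into one statement, you can get a feel for the worst offenders by counting child cursors per parent cursor. A quick sketch (the threshold of 5 is arbitrary):

select sql_id, count(*) child_count
  from v$sql
 group by sql_id
having count(*) > 5
 order by child_count desc;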

Let us take the following simple sql statement.

SESSION1>> select count(*) from dept;

COUNT(*)
----------
4

SESSION1>> select sql_text, hash_value,address from v$sqlarea where sql_text like 'select count(*) from dept%';

SQL_TEXT HASH_VALUE ADDRESS
----------------------------------- ---------- --------
select count(*) from dept 761178024 19156098

This is the first child of the above SQL statement, with all flags showing 'N':

SESSION1>> select * from v$sql_shared_cursor where address = '19156098';

SQL_ID ADDRESS CHILD_AD CHILD_NUMBER U S O O S L S E B P I S T A B D L T
------------- -------- -------- ------------ - - - - - - - - - - - - - - - - - -
R I I R L I O S M U T N F A I T D L D B P C S R P T M B M R O P M F L
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1jrz3ucqpx9x8 19156098 1CEFB584 0 N N N N N N N N N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N

Now let us open a new session and issue the above SQL statement, but ensure that the cursor is not shared:

SESSION2>>alter session set optimizer_mode='RULE';

Session altered.

SESSION2>>select count(*) from dept;

COUNT(*)
----------
4

Check from session 1 how many children have been generated:
SESSION1>>select * from v$sql_shared_cursor where address = '19156098';

SQL_ID ADDRESS CHILD_AD CHILD_NUMBER U S O O S L S E B P I S T A B D L T
------------- -------- -------- ------------ - - - - - - - - - - - - - - - - - -
R I I R L I O S M U T N F A I T D L D B P C S R P T M B M R O P M F L
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1jrz3ucqpx9x8 19156098 1CEFB584 0 N N N N N N N N N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N

1jrz3ucqpx9x8 19156098 1BA90164 1 N N N N N N N N N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N N N

As expected, we have one more child, caused by the OPTIMIZER_MODE_MISMATCH which we forced.

SESSION2>>alter session set events 'immediate trace name cursortrace level 612, address 761178024';

Session altered.

==> 761178024 is the hash value of the sql statement.

SESSION2>>select count(*) from dept;

COUNT(*)
----------
4

SESSION2>>alter session set optimizer_mode='FIRST_ROWS_1';

Session altered.

SESSION2>>select count(*) from dept;

COUNT(*)
----------
4

The resulting trace file, generated in user_dump_dest, will shed more light on why the SQL statement is not shared.

Here is a snippet from the trace file:

*************************************************************************
PARSING SQLTEXT=select count(*) from dept
SQLHASH=2d5ea7a8
Checking for already pinned child.
No valid child pinned
Parent 1EA853E8(19156098) ready for search
kksSearchChildList outside while loop
kksCheckCursor: next child is #1
kksCheckCursor: pinning child #1 in shared mode 1eb62650 1ba90164
Compilation environment difference Failed sharing : 0
optimizer_mode = first_rows_1 rule
sqlstat_enabled = false true
SQL pgadep:0 pgapls:0 user
Failed sharing : Compilation environment mismatch
kksUnlockChild: releasing child
*************************************************************************
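
To switch the trace off again, the commonly quoted (and equally undocumented) incantation is the following; I am quoting the magic level value from field notes, so treat it as an assumption rather than gospel:

SESSION2>>alter session set events 'immediate trace name cursortrace level 2147483648, address 1';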

Sunday, November 05, 2006

To use DRM or not to use DRM

10g Real Application Clusters introduced the concept of resource remastering via Dynamic Resource Mastering (DRM). With DRM, a resource can be re-mastered on another node in the cluster if that cache resource is found to be accessed more frequently from that node.

In 10g R1 this was file based, whereas in 10g R2 it is object based.
In 10g R1, due to a few bugs (many related to high CPU usage during the DRM freeze window), most customers disabled DRM by setting the following parameters:
_gc_affinity_time=0
_gc_undo_affinity=FALSE

• _gc_affinity_time defines the frequency in minutes to check whether remastering is needed.
• _gc_affinity_limit defines the number of times a node must access an object for it to become a DRM candidate.
• _gc_affinity_minimum defines the minimum number of times an object is accessed per minute before affinity kicks in.
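
You can check where these hidden parameters currently stand with the same x$ksppi/x$ksppcv query used earlier for _log_archive_compress_enable; note the escape clause, since a leading underscore would otherwise act as a LIKE wildcard:

select ksppinm name, ksppstvl value, ksppdesc description
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and ksppinm like '\_gc\_%' escape '\';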

The performance problems may manifest themselves as DRM-related wait events such as 'gcs drm freeze in enter server mode'.

In 10G R2 this feature appears to be more stable.

You can also manually remaster an object onto a node different from the one on which it is currently mastered, as shown below.

SQL> select object_id,current_master, previous_master ,remaster_cnt from V$GCSPFMASTER_INFO where object_id = 144615

OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- -------------- --------------- ------------
144615 0 2 0


The object 144615 is currently mastered on node 0.
To remaster the object onto node 2, connect to instance 2 as sysdba:

NODE2> oradebug setmypid
Statement processed.
NODE2> oradebug lkdebug -m pkey 144615
Statement processed.

NODE2> select object_id,current_master, previous_master ,remaster_cnt from V$GCSPFMASTER_INFO where object_id = 144615

OBJECT_ID CURRENT_MASTER PREVIOUS_MASTER REMASTER_CNT
---------- -------------- --------------- ------------
144615 2 0 0

Note: in V$GCSPFMASTER_INFO you will also see resources with object ids in the 4Gb range (e.g. 4294950913); these are for undo segments.

To dissolve the remastering of this object on this instance:
SQL> oradebug lkdebug -m dpkey 144615
Statement processed.

SQL> select object_id,current_master, previous_master ,remaster_cnt from V$GCSPFMASTER_INFO where object_id = 144615;
no rows selected


The remaster_cnt appears to be 0 for all objects. I have got Oracle
to log bug 5649377 on this issue.

SQL> select distinct remaster_cnt from V$GCSPFMASTER_INFO ;

REMASTER_CNT
------------
0


DRM statistics are available in X$KJDRMAFNSTATS

SQL> select * from X$KJDRMAFNSTATS
2 /

ADDR INDX INST_ID DRMS AVG_DRM_TIME OBJECTS_PER_DRM QUISCE_T FRZ_T CLEANUP_T REPLAY_T FIXWRITE_T SYNC_T
-------- ---------- ---------- ---------- ------------ --------------- ---------- ---------- ---------- ---------- ---------- ----------
RES_CLEANED REPLAY_S REPLAY_R MY_OBJECTS
----------- ---------- ---------- ----------
200089CC 0 1 32 214 1 3 14 0 0 0 99
0 2441 6952 30

The column MY_OBJECTS denotes the number of objects mastered on that node.
This should match the following (current_master is 0-based, so master 0 corresponds to instance 1):

SQL> select count(*) from V$GCSPFMASTER_INFO where current_master=0
2 /

COUNT(*)
----------
30