ASM Fast Rebalance and ADVM volume.

Good afternoon, everyone. Today I want to share how you can increase rebalance speed for a diskgroup with an ADVM volume, in case you didn't know it yet. Since 11g, Oracle has had a feature called “ASM Fast Rebalance”. All you need to do to use it is remount the diskgroup that contains the ADVM volume in restricted mode.

SQL> alter diskgroup data dismount force;

Diskgroup altered.

SQL> alter diskgroup data mount restricted;

Diskgroup altered.

Restricted mode prevents any connections from RDBMS instances and from the cluster agents for ADVM, which eliminates extent map locking during rebalance operations. In my environment, with a diskgroup containing a 15 TB ADVM volume, a rebalance in normal mount state took 15 hours for the ADVM volume plus 5 hours for database files, 20 hours in total. After I remounted the diskgroup in restricted mode, the rebalance took 4 hours for the ADVM volume plus 3 hours for database files, 7 hours in total. So, as you can see, ASM Fast Rebalance sped up my rebalance operations by almost 3 times.
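If you want to reproduce a comparison like this, you can kick off the rebalance explicitly with a power limit and watch its progress in V$ASM_OPERATION. A minimal sketch; the diskgroup name and power value here are just examples for my environment:

SQL> alter diskgroup data rebalance power 8;

SQL> select operation, state, power, sofar, est_work, est_minutes
       from v$asm_operation;

SQL> alter diskgroup data dismount;
SQL> alter diskgroup data mount;

EST_MINUTES gives a rough remaining-time estimate, and the final dismount/mount returns the diskgroup from restricted mode to normal mount once the rebalance has finished.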

ORA-15196: invalid ASM block header. Continued investigation.

Hey fellas. As you know from the previous article, I ran into an error around an ASM block header, which I also tried to solve with the scrubbing mechanism that appeared in Oracle 12c. This mechanism works well, but there is one thing you need to know: at least one copy of the block (primary or mirror) must be in a correct state, and only then can the scrubbing mechanism save your data. That was not the case in the situation I described in the previous article.


Useful environment variable for asmcmd debugging


$ export DBI_TRACE=1

ASMCMD> ls -l data
<- prepare('/* ASMCMD */ select to_char(current_date, 'J') "JULIAN_DATE" from dual')= ( DBI::st=HASH(0x1ed2c48) ) [1 items] at asmcmdshare.pm line 3256
<- execute= ( '0E0' ) [1 items] at asmcmdshare.pm line 3461
<- fetchrow_hashref= ( HASH(0x1f3fa68)1keys ) [1 items] row1 at asmcmdshare.pm line 3282
<- finish= ( 1 ) [1 items] at asmcmdshare.pm line 3303
<- DESTROY(DBI::st=HASH(0xa24e00))= ( undef ) [1 items] at asmcmdbase.pm line 1130
<- prepare('/* ASMCMD */ select group_number, state from v$asm_diskgroup_stat where name='DATA'')= ( DBI::st=HASH(0x1ecc8b0) ) [1 items] at asmcmdshare.pm line 3256
<- execute= ( '0E0' ) [1 items] at asmcmdshare.pm line 3461
<- fetchrow_hashref= ( HASH(0x1ecc6b8)2keys ) [1 items] row1 at asmcmdshare.pm line 3282
<- finish= ( 1 ) [1 items] at asmcmdshare.pm line 3303
<- DESTROY(DBI::st=HASH(0x1f3f5d0))= ( undef ) [1 items] at asmcmdshare.pm line 1744
<- prepare('/* ASMCMD */ select name,
group_number,
file_number,
reference_index,
parent_index,
alias_directory,
system_created
from v$asm_alias where group_number=1 and parent_index=16777216')= ( DBI::st=HASH(0x1ecc598) ) [1 items] at asmcmdshare.pm line 3256
<- execute= ( '0E0' ) [1 items] at asmcmdshare.pm line 3461
<- fetchrow_hashref= ( HASH(0x1ec81f8)7keys ) [1 items] row1 at asmcmdshare.pm line 3282
<- fetchrow_hashref= ( undef ) [1 items] row3 at asmcmdshare.pm line 3282
<- finish= ( 1 ) [1 items] at asmcmdshare.pm line 3303
<- DESTROY(DBI::st=HASH(0x1ecc8c8))= ( undef ) [1 items] at asmcmdbase.pm line 2311
Type  Redund  Striped  Time  Sys  Name
                              Y    CLONE1H/
                              N    CLONE1H.backup/
                              N    test/
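When you are done debugging, simply unset the variable. DBI_TRACE also accepts higher trace levels if you need more verbose output:

$ export DBI_TRACE=2   # more verbose tracing
$ unset DBI_TRACE      # turn tracing off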

How to create an ASM diskgroup with disks bigger than 2 TB

Another terse post. Since 12c you can use individual disks bigger than 2 TB in ASM. When creating the diskgroup, just set the compatible attributes.


create diskgroup DATA normal redundancy
  failgroup FG1 disk 'ORCL:DISK[1-7]'
  failgroup FG2 disk 'ORCL:DISK[8-9]','ORCL:DISK1[0-4]'
  attribute 'compatible.asm' = '12.1.0.2', 'compatible.rdbms' = '12.1.0.2';

The compatible.rdbms and compatible.asm attributes are mandatory here.
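You can verify the attributes after creation with a quick query against V$ASM_ATTRIBUTE; the group number 1 here is just an example, check V$ASM_DISKGROUP for yours:

SQL> select name, value from v$asm_attribute
       where group_number = 1 and name like 'compatible.%';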

How the log writer and foreground processes work together on commit.

Frits Hoogland Weblog

(warning: this is a rather detailed technical post on the internal working of the Oracle database’s commit interactions between the committing foreground processes and the log writer)

After the Trivadis Performance Days I was chatting with Jonathan Lewis. I presented my Profiling the log writer and database writer presentation, in which I state that the foreground (user/server) process looks at the commit SCN in order to determine if its logbuffer contents are written to disk by the logwriter(s). Jonathan suggested looking deeper into this matter, because looking at the commit SCN might not be the way it truly works.

The reasoning is the foreground process flushes its log information from its private strand into the public logbuffer, and as such only needs to keep track of the log blocks it allocated in the public logbuffer to see if these are written. Different processes can allocate different blocks in the public log…
