I bet you know how to duplicate a database from a primary, or from a read-only standby. But do you know how to duplicate from an active RAC database to a single-instance node without ASM or any shared file system? Searching for this simple piece of information feels really awkward, because nobody mentions it. When you duplicate a database from RAC to a single instance, you need to configure the snapshot controlfile on ASM or a shared filesystem. If you don't have one, you need to create it. Fortunately, we've got a very small ASM diskgroup used only for the OCR and voting files, so we can involve this diskgroup in our process.
rman target sys/pass@RO_standby auxiliary sys/pass@AIM_standby
RMAN> configure snapshot controlfile name to '+ocrvt/hostname';
RMAN> duplicate target database for standby from active database nofilenamecheck;
The duplicate process only uses this snapshot controlfile to create the normal controlfile in the location specified in the spfile.
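Since the auxiliary here is a single instance, it can also help to convert the RAC-specific parameters as part of the duplicate itself. A hedged sketch (the diskgroup name and filesystem paths below are assumptions, adjust them to your environment):

```sql
RMAN> duplicate target database for standby from active database
  spfile
    set cluster_database='false'
    set db_file_name_convert='+DATA','/u01/oradata/stby'
    set log_file_name_convert='+DATA','/u01/oradata/stby'
  nofilenamecheck;
```

Setting cluster_database to false on the auxiliary avoids the instance trying to start with RAC semantics, and the convert parameters map the ASM paths of the source to local filesystem paths on the single-instance node.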
Today I came across a strange error related to srvctl add service.
$ srvctl add service -d db -s SERVICE_NAME -preferred INST1,INST2
PRCR-1006 : Failed to add resource ora.db.db_service.svc for SERVICE_NAME
PRCT-1011 : Failed to run "osdbagrp". Detailed error:
Good afternoon everyone. Today I want to share how you can increase the rebalance speed for an ADVM volume, in case you didn't know it yet. Since 11g, Oracle has a feature called "ASM Fast Rebalance". All you need to do to use it is remount the diskgroup that contains the ADVM volume in restricted mode.
SQL> alter diskgroup data dismount force;
SQL> alter diskgroup data mount restricted;
Restricted mode prevents any connections from RDBMS instances and from the cluster agents for ADVM, and it eliminates extent-map locking during rebalance operations. In my environment, for a diskgroup containing a 15 TB ADVM volume, a rebalance in the normal mount state took 15 hours for the ADVM volume plus 5 hours for the database files, 20 hours in total. When I remounted the diskgroup in restricted mode, the rebalance took 4 hours for the ADVM volume plus 3 hours for the database files, 7 hours in total. So, as you can see, ASM Fast Rebalance made my rebalance operations roughly 3 times faster.
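Putting it together, the whole cycle looks roughly like this (the diskgroup name and power value are just examples); don't forget to remount the diskgroup normally at the end, otherwise databases and ACFS clients won't be able to reconnect:

```sql
-- remount in restricted mode (no RDBMS or ADVM clients allowed)
alter diskgroup data dismount force;
alter diskgroup data mount restricted;
-- run the rebalance and wait for it to finish (watch v$asm_operation)
alter diskgroup data rebalance power 10;
-- return the diskgroup to normal service
alter diskgroup data dismount;
alter diskgroup data mount;
```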
The new paradigm of manipulating cluster resources in 12c annoys me a little bit.
crsctl start resource ora.DATA.ACFS.advm
CRS-4995: The command 'start resource' is invalid in crsctl. Use srvctl for this command.
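The srvctl counterpart looks something like the following; I'm hedging here, because the exact verb depends on your 12c release, and the volume and diskgroup names are taken from the resource name above, so verify them first with srvctl config volume:

```shell
# start the ADVM volume through srvctl instead of crsctl
srvctl start volume -volume ACFS -diskgroup DATA
```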
You might wonder how CSSD, which is required to start the clustered ASM instance, can be started if the voting disks are stored in ASM. This sounds like a chicken-and-egg problem: without access to the voting disks there is no CSS, hence the node can't join the cluster; but without being part of the cluster, CSSD can't start the ASM instance. To solve this problem, the ASM disk headers have carried this metadata since 11.2. You can use kfed to read the headers of the ASM disks containing a voting disk. The kfdhdb.vfstart and kfdhdb.vfend fields tell CSS where to find the voting file, and reading them does not require the ASM instance to be up. Once the voting disks are located, CSS can access them and join the cluster.
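You can check this yourself with kfed; the device path below is an assumption, substitute the block device backing your ASM disk:

```shell
# dump the ASM disk header and show the voting-file location fields;
# nonzero vfstart/vfend values mean this disk holds a voting file
kfed read /dev/sdb1 | grep -E 'kfdhdb.vfstart|kfdhdb.vfend'
```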
SQL> alter diskgroup data add failgroup FG01 disk 'ORCL:T01L209' rebalance power 10;
alter diskgroup data add failgroup FG01 disk 'ORCL:T01L209' rebalance power 10
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15137: cluster in rolling patch
[root@node1 ~]# /opt/oracle/product/grid/184.108.40.206/bin/crsctl query crs softwarepatch
Oracle Clusterware patch level on node node1 is .
[root@node2 ~]# /opt/oracle/product/grid/220.127.116.11/bin/crsctl query crs softwarepatch
Oracle Clusterware patch level on node node2 is .
[root@node1 ~]# /opt/oracle/product/grid/18.104.22.168/bin/clscfg -patch
clscfg: -patch mode specified
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
[root@node1 ~]# /opt/oracle/product/grid/22.214.171.124/bin/crsctl stop rollingpatch
CRS-1161: The cluster was successfully patched to patch level .
ONE MORE THING
If you've got more than one node and for some reason the patch levels differ between nodes (for example, after an unsuccessful patching attempt), you need to run clscfg -localpatch to patch the OLR, and after that clscfg -patch to patch the OCR.
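The sequence from the session above, generalized (run as root; $GRID_HOME stands for your Grid Infrastructure home, which is an assumption on my part):

```shell
# 1. on every node whose local patch level is behind: patch the OLR
$GRID_HOME/bin/clscfg -localpatch
# 2. then, on one node: patch the OCR and complete the rolling patch
$GRID_HOME/bin/clscfg -patch
$GRID_HOME/bin/crsctl stop rollingpatch
```

After that, crsctl query crs softwarepatch should report the same patch level on every node.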
Some days ago I stumbled upon an interesting problem with my voting files. After I restarted the cluster software on one of my nodes, I couldn't bring it back online, because the voting files had been lost.
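For reference, the usual way out of this situation (not necessarily exactly what happened in my case) is to start the stack in exclusive mode and recreate the voting files; the diskgroup name below is an example:

```shell
# start clusterware on one node in exclusive mode, without CRSD
crsctl start crs -excl -nocrs
# recreate the voting files in a surviving diskgroup
crsctl replace votedisk +OCRVT
# restart the stack normally
crsctl stop crs -f
crsctl start crs
```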