Convert a single instance standby (dataguard) into a RAC Standby

Posted: March 1, 2017 in Uncategorized, Dataguard

Good morning, RAC and Dataguard Enthusiasts!!

I have already posted how to convert a single instance into a RAC database. This post will focus on the subtle differences of doing this on a Dataguard standby. When I used Dataguard to migrate a RAC database to a new server, I created my standby as a single instance. To make a viable switchover target, that standby must now be converted to a RAC standby. I must warn you up front: these are the steps I took on my 2-node RAC Exadata server. The steps unique to a standby are in bold font. Test this in your own sandbox before you proclaim it as gospel truth.

The setup pieces can all be done either at the beginning or at the end. They are ordered as I performed them, so feel free to test and experiment. Some of these steps are probably a given, but I listed them anyway.

  • Copy/create your password file, wallet, and adump directory from the source database to both nodes. The steps work the same regardless of the number of nodes (a minimal sketch of these pieces appears after this list).
  • Create oratab entry on both nodes. vi /etc/oratab
  • Add tnsnames entries on both nodes and a static listener entry on the standby node (a sample static entry also follows the list).
    • DB1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = xdt**client01-vip)(PORT = ****))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SID = db1)
        )
      )
    • DB2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = xdt**client02-vip)(PORT = ****))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SID = db2)
        )
      )
  • Shutdown Managed Apply on Dataguard Instance
    • Ensure all standby logs are applied and caught up
    • SQL> select * from v$recovery_progress; (There is a better way to check this, but I don't want to publish someone else's work as my own. One such check is sketched after the startup steps near the end of this post.)
    • SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL; 
  • Shutdown the single instance database.
    • SQL> shutdown immediate;
    • SQL> exit
  • Set environment to node 1.
    • $ . oraenv
    • ORACLE_SID = [db] ? db1
  • Edit original pfile to make it a cluster database. Remove [*.interconnects] lines to prevent an attempt to connect to the source database.
    • *.cluster_database=true
  • Startup nomount with your original pfile.
    • SQL> startup nomount pfile='$ORACLE_HOME/dbs/initdb.ora'
  • Create spfile from pfile. This creates a one-line pointer file in the /dbs directory.
    • SQL> create spfile='+DATA_XDT**/DB/PARAMETERFILE/spfiledb.ora' from pfile='$ORACLE_HOME/dbs/initdb.ora';
  • Shutdown database.
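
For reference, here is a minimal sketch of the setup pieces from the list above: copying the password file, creating the adump directory, and adding the oratab entry. The node names, instance names, and paths are placeholders of mine rather than values from the post, so substitute your own.

# Assumed example only -- run as oracle; node and instance names are placeholders.
# Copy the password file from the primary and rename it for each standby instance
# (wallet files copy the same way).
scp primary-host:$ORACLE_HOME/dbs/orapwdb $ORACLE_HOME/dbs/orapwdb1
scp $ORACLE_HOME/dbs/orapwdb1 node2:$ORACLE_HOME/dbs/orapwdb2
# Create the audit dump directory on both nodes (it must match audit_file_dest).
mkdir -p /u01/app/oracle/admin/db/adump
ssh node2 mkdir -p /u01/app/oracle/admin/db/adump
# Add the oratab entry on each node (db1 on node 1, db2 on node 2).
echo "db1:/u01/app/oracle/product/11.2.0.4/dbhome_1:N" >> /etc/oratab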

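The static listener entry on the standby node is what lets you connect while the instance is down or in NOMOUNT, which matters during switchover and rebuild work. A sample SID_DESC stanza for the standby node's listener.ora is below; the SID name, Oracle home, and GLOBAL_DBNAME are assumptions on my part, so adjust them to your environment and reload the listener afterward.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/dbhome_1)
      (SID_NAME = db1)
    )
  )
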
Create srvctl setup
ETLPC is used as a concrete example:
——————————-
# ETLPC
##
srvctl remove database -d ETLPCDG
srvctl add database -d ETLPCDG -c RAC \
-o /u01/app/oracle/product/11.2.0.4/dbhome_1 \
-m int.carlsonwagonlit.com \
-p +DATA_XDT*/ETLPC/PARAMETERFILE/spfileETLPCDG.ora \
-s OPEN -t IMMEDIATE -n ETLPCDG

srvctl add instance -d ETLPCdg -i ETLPCdg1 -n xdt*dbadm01
srvctl add instance -d ETLPCdg -i ETLPCdg2 -n xdt*dbadm02
# srvctl add instance -d etlpc -i etlpc3 -n xdt*dbadm03
# srvctl add instance -d etlpc -i etlpc4 -n xdt*dbadm04

srvctl enable instance -d ETLPCdg -i "ETLPCdg1,ETLPCdg2"
# srvctl disable instance -d etlpc -i "etlpc1,etlpc2,etlpc3,etlpc4"

srvctl config database -d ETLPCdg
srvctl status database -d ETLPCdg
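
One standby-specific tweak worth considering here, not part of the commands above and only a suggestion: srvctl can also record the database role and a MOUNT start option, so the clusterware mounts the standby rather than trying to open it. Something along these lines, using the same example names:

srvctl modify database -d ETLPCDG -r PHYSICAL_STANDBY -s MOUNT
srvctl config database -d ETLPCDG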

  • Startup instance 1 in mount. This starts the instance mounted, using the spfile on ASM.
    • $ srvctl start instance -d etlpcdg -i etlpcdg1 -o mount
  • Startup Managed Apply on Node 1
    • SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
  • Once the MRP0 process is running and standby logs are being applied, start node 2 in mount mode (a quick check is sketched after these steps):
    • $ srvctl start instance -d etlpcdg -i etlpcdg2 -o mount
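
To confirm that MRP0 is running and the standby is keeping up, a couple of quick queries will do; this is only a sketch, so trim or extend the column lists as you see fit:

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';
SQL> select name, value from v$dataguard_stats where name in ('apply lag', 'transport lag');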

I hope this clears up the differences between converting a normal database to RAC versus converting a standby database to RAC. Please let me know if you have any questions or comments.

Thanks for reading!

Jason

Comments
  1. aitorito says:

    These steps assume that the primary database is a RAC… if not, you need to execute these steps too:

    *************
    PRIMARY
    *************

    SQL> alter database add logfile thread 2
    group 4 size 1G,
    group 5 size 1G,
    group 6 size 1G;

    SQL> alter database enable public thread 2;

    SQL> create undo tablespace undotbs2 datafile size xxxM;

    SQL> @?/rdbms/admin/catclust.sql

    *************
    STANDBY
    *************
    Prepare the init.ora with these entries (each parameter is prefixed with its instance name; instance1 and instance2 are placeholders):

    *.cluster_database=true
    instance1.instance_name='instance1'
    instance2.instance_name='instance2'
    instance1.instance_number=1
    instance2.instance_number=2
    instance1.thread=1
    instance2.thread=2
    instance1.undo_tablespace='UNDOTBS1'
    instance2.undo_tablespace='UNDOTBS2'

  2. aitorito says:

    And before these steps, you need to install the CRS binaries on the standby 🙂

  3. Thanks, #Aitorito, for adding this. In the post, I am assuming you are already working with 2-node RAC servers for the primary/standby, and the init.ora parameters should reflect that. I am not yet skilled in the sys admin, a.k.a. Exadata admin, piece of installing RAC, so I appreciate any feedback in this arena. In our organization, we ask UNIX to set up our servers for RAC.

    Thanks!
    Jason
