
Good morning, RAC and Dataguard Enthusiasts!!

I have already posted how to convert a single instance into a RAC database. This post focuses on the subtle differences of doing the same on a Dataguard standby. When I used Dataguard to migrate a RAC database to a new server, I created my standby as a single instance. To make a viable switchover possible, that standby must now be converted to a RAC standby. I must warn you up front: these are the steps I took on my 2-node RAC Exadata server. The steps unique to a standby are in bold font. Test this in your own sandbox before you proclaim it gospel truth.

The setup pieces can all be done either at the beginning or at the end. They are ordered as I performed them, so feel free to test and experiment. Some of these steps are assumed, but I listed them anyway.

  • Copy/create your password file, wallet, and adump directory from the source database to both nodes. The steps work the same, regardless of the number of nodes.
  • Create an oratab entry on both nodes: vi /etc/oratab
  • Add tnsnames entries on both nodes and a static listener entry on the standby node.
    • DB1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = xdt**client01-vip)(PORT = ****))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SID = db1)
        )
      )
    • DB2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = xdt**client02-vip)(PORT = ****))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SID = db2)
        )
      )
  • Shut down Managed Apply on the Dataguard instance.
    • Ensure all standby logs are applied and the standby is caught up.
    • SQL> select * from v$recovery_progress; (There is a better way to check this, but I don't want to publish someone else's work as my own; a standard alternative is sketched after this list.)
    • SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
  • Shutdown the single instance database.
    • SQL> shutdown immediate;
    • SQL> exit
  • Set environment to node 1.
    • $ . oraenv
    • ORACLE_SID = [db] ? db1
  • Edit the original pfile to make it a cluster database; the cluster-related parameters are sketched after this list. Remove the *.interconnects lines to prevent an attempt to connect to the source database.
    • *.cluster_database=true
  • Startup nomount with your original pfile.
    • SQL> startup nomount pfile='$ORACLE_HOME/dbs/initdb.ora'
  • Create the spfile from the pfile. This creates a one-line pointer file in the $ORACLE_HOME/dbs directory.
    • SQL> create spfile='+DATA_XDT**/DB/PARAMETERFILE/spfiledb.ora' from pfile='$ORACLE_HOME/dbs/initdb.ora';
  • Shutdown database.
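
For the apply-status check above, a standard alternative that uses Oracle's own Dataguard views (a minimal sketch, not anyone else's script) is:

SQL> select name, value from v$dataguard_stats where name in ('apply lag','transport lag');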
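
For the pfile edit above, here is a minimal sketch of the cluster-related parameters, assuming the instance names db1/db2 from the tnsnames entries and typical undo tablespace names (adjust both to your environment):

*.cluster_database=true
db1.instance_number=1
db2.instance_number=2
db1.thread=1
db2.thread=2
db1.undo_tablespace='UNDOTBS1'
db2.undo_tablespace='UNDOTBS2'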

Create the srvctl setup
etlpc used as a concrete example:
-------------------------------
# ETLPC
##
srvctl remove database -d ETLPCDG
srvctl add database -d ETLPCDG -c RAC \
-o /u01/app/oracle/product/11.2.0.4/dbhome_1 \
-m int.carlsonwagonlit.com \
-p +DATA_XDT*/ETLPC/PARAMETERFILE/spfileETLPCDG.ora \
-s OPEN -t IMMEDIATE -n ETLPCDG

srvctl add instance -d ETLPCdg -i ETLPCdg1 -n xdt*dbadm01
srvctl add instance -d ETLPCdg -i ETLPCdg2 -n xdt*dbadm02
# srvctl add instance -d etlpc -i etlpc3 -n xdt*dbadm03
# srvctl add instance -d etlpc -i etlpc4 -n xdt*dbadm04

srvctl enable instance -d ETLPCdg -i "ETLPCdg1,ETLPCdg2"
# srvctl disable instance -d etlpc -i "etlpc1,etlpc2,etlpc3,etlpc4"

srvctl config database -d ETLPCdg
srvctl status database -d ETLPCdg
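
Since this is a physical standby, it may also be worth registering the database role and a mount start option so the clusterware does not try to open it. I believe 11.2 srvctl supports these flags, but verify with srvctl modify database -h before relying on this sketch:

srvctl modify database -d ETLPCDG -r PHYSICAL_STANDBY -s MOUNT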

  • Start instance 1 in mount mode. This starts the instance with the spfile on ASM.
    • $ srvctl start instance -d etlpcdg -i etlpcdg1 -o mount
  • Startup Managed Apply on Node 1
    • SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
  • Once the MRP0 process is running and standby logs are being applied (a quick check is sketched below), start up node 2 in mount mode:
    • $ srvctl start instance -d etlpcdg -i etlpcdg2 -o mount
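
A quick check that the recovery process is alive (standard v$managed_standby columns):

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';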

I hope this clears up the differences between converting a normal database to RAC versus converting a standby database to RAC. Please let me know if you have any questions or comments.

Thanks for reading!

Jason


Good afternoon, Oracle Miracle Workers! It's database migration time, in preparation for upgrades to 12.1.0.2 or maybe 12.2 on-premises. No, I'm not ready to go to the cloud. Sorry, Oracle, hold your horses. So here's how we are doing it.

  1. Prepare the Primary Database:

  • Open a SQL*Plus session and enable FORCE LOGGING mode:
SQL> select force_logging from v$database;
NO
SQL> alter database force logging;
  • Create standby redo logs. Note: when you clone the database for standby, these will automatically be created on the standby database.
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' size 50M; --these should match the size of your online redo logs; I created one more than the number of my online redo logs (5).
  • Set the LOG_ARCHIVE_DEST_1 init parameter to use the FRA as the first archive log location:
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
  • Set the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 init parameters on the primary db:
SQL> alter system set log_archive_config = 'dg_config=(PRIMARY_db,STANDBY_db)';

SQL> alter system set log_archive_dest_2='SERVICE=STANDBY_DB NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY_DB';
  • Turn on ARCHIVELOG mode if needed; this requires a shutdown (a minimal sequence is sketched below the query).
 SQL> select log_mode from v$database;

ARCHIVELOG
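
If that query returns NOARCHIVELOG instead, a minimal sketch of the standard sequence to enable it (plan the outage first):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;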

  2. Set up the tnsnames and listener entries.

This, to me, is always the trickiest step. In my test case, I am creating a single-instance standby from a 2-node RAC primary database; I will make it a RAC at the end. A static listener entry is required on the standby server to accommodate the clone, and a matching tnsnames entry is sketched after the listener block below.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME=standby_db)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
      (SID_NAME=standby_db)
    )
  )
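
A matching tnsnames entry would look like this sketch; the host and port are placeholders for your standby server, and the entry must resolve from the primary and from wherever you run the duplicate:

STANDBY_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = standby_db)
    )
  )
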
  3. Make a pfile from the primary database and copy it to the standby server:

SQL> create pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora' from spfile;

Prepare it as if you were doing a clone, but reverse the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 settings. Ensure DB_UNIQUE_NAME is set to the standby database name and DB_NAME is the same as the primary's DB_NAME; the key entries are sketched below.
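
A minimal sketch of those standby-side entries, using this post's PRIMARY_db/STANDBY_db placeholders (FAL_SERVER and FAL_CLIENT are my addition; they are commonly set for archive gap resolution):

*.db_name='PRIMARY_db'        # must match the primary's DB_NAME
*.db_unique_name='STANDBY_db'
*.log_archive_config='dg_config=(PRIMARY_db,STANDBY_db)'
*.log_archive_dest_2='SERVICE=PRIMARY_db NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY_db'
*.fal_server='PRIMARY_db'
*.fal_client='STANDBY_db'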

  4. Prepare the Standby Database:

    • Copy the password file and wallet entries from the primary to the standby server.
    • Create an entry in /etc/oratab using the db_unique_name (standby_db).
  5. Set your environment to the standby database and start up nomount using your new pfile.

SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora';
SQL> exit
  6. RMAN Steps.

I utilized Commvault for the backup and duplicate steps. It takes the fun out of it, but I highly recommend it if it is at your disposal.

  • Take a full backup and use the backup to do a duplicate to standby:
run {
backup database plus archivelog;
backup current controlfile for standby;  
}
  • Optionally, run it again for archived logs only:
run {
sql "Alter System Archive Log Current";
Backup ArchiveLog all ;
Backup current controlfile for standby;
}
  • Copy those files to a shared file system or a file system with the same structure.
  • Run the Duplicate to Standby:
connect target sys/pw@<primarydb>
connect auxiliary sys/pw@<standby>
run {
set until sequence <seqno> thread <thrdno>;
duplicate target database for standby
dorecover;
}
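
To pick <seqno> and <thrdno>, one common approach (my assumption; verify the sequence falls within your backed-up archive logs) is to check the latest archived sequence per thread on the primary:

SQL> select thread#, max(sequence#) from v$archived_log group by thread#;
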
  7. Verify your standby is set up and applying logs.

    • Log in to the standby and query v$database:
  SQL> select db_unique_name, primary_db_unique_name, database_role, switchover_status, open_mode, log_mode, force_logging from v$database;
  8. Perform a log switch on the primary database:

SQL> alter system switch logfile;

  9. Query the standby to see if archived logs are being applied.

SQL> select sequence#, first_time, next_time, applied from v$archived_log order by 1;

My first two attempts failed due to incorrect tnsnames settings. Once I got that settled, it worked famously. In the near future, I will post about setting up a Snapshot Standby and converting the DB to RAC.