Archive for the ‘High Availability’ Category

Good afternoon, Oracle Pioneers!

A few months ago (maybe a year ago), I published an article on how to convert a single instance database into a RAC database: Converting a Single Instance To RAC DB. However, that article assumed the database had already been a RAC database at some point prior to its migration and subsequent RACifying. **Yes, RACifying is a word I either made up or stole from someone.**

If this database is, and always has been, a single instance, there are likely a few missing parameters that are needed to make it into a multiple instance database.

So here is my updated version of how to convert a single instance database into a RAC Database using command line and srvctl. These steps will work for 11g or 12c databases.

Once the single instance database is available, do the following.
If this is a single instance you are migrating and "RACifying", you must create the UNDOTBS2 tablespace and redo logfiles for thread 2 that match the number and size of the thread 1 groups (a quick check is sketched after the parameter list below).

  1. SQL> create undo tablespace undotbs2 datafile '+DATA' size 4G;
  2. SQL> alter database add logfile thread 2 group 5 size 4096M;   
  3. SQL> alter database enable public thread 2; 
  4. Verify these parameters are set in your pfile:

control_files   **Update the location info to your +DATA and +RECO locations**

db_name1.thread=1
db_name2.thread=2
db_name1.undo_tablespace='UNDOTBS1'
db_name2.undo_tablespace='UNDOTBS2'
db_name1.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=xdt*client01-vip)(PORT=9653))'
db_name2.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=xdt*client02-vip)(PORT=9653))'
db_name1.instance_number=1
db_name2.instance_number=2
*.remote_listener='xdt*-scan:9653'
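To size the thread 2 redo logs correctly, it helps to look at thread 1 first. Here is a minimal sketch; the group numbers (5 and 6) and the '+RECO' disk group are assumptions, so adjust them to your environment:

SQL> select thread#, group#, bytes/1024/1024 as mb from v$log order by thread#, group#;

-- add one thread 2 group per thread 1 group, matching the sizes reported above
SQL> alter database add logfile thread 2 group 5 ('+RECO') size 4096M;
SQL> alter database add logfile thread 2 group 6 ('+RECO') size 4096M;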

5. Edit $ORACLE_HOME/dbs/init<db_name>.ora so it becomes a one-line file pointing to the location of the spfile:

spfile='$ORACLE_HOME/dbs/spfile<db_name>.ora'

6. Copy the file to $ORACLE_HOME/dbs/init<db_name><inst#>.ora on all nodes.

7. SQL> shutdown immediate;

8. Create the srvctl setup (see my previous article on converting a single instance to a RAC DB; a minimal sketch follows below).
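In case you don't have that article handy, the srvctl registration looks roughly like this; the Oracle Home path and the node names (node01, node02) are placeholders for your environment:

$ srvctl add database -d db_name -o /u01/app/oracle/product/11.2.0.4/dbhome_1
$ srvctl add instance -d db_name -i db_name1 -n node01
$ srvctl add instance -d db_name -i db_name2 -n node02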

9. Start instance 1 with srvctl, log in, and set cluster_database to true:

srvctl start instance -d db_name -i db_name1

SQL> alter system set cluster_database=true scope=spfile sid=’*’;

10. Shut down and restart the database with srvctl:

srvctl stop database -d db_name -o immediate -f

srvctl start database -d db_name
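Once both instances are up, it is worth a quick sanity check that they have registered and joined the cluster. Nothing here is specific to this procedure:

$ srvctl status database -d db_name

SQL> select inst_id, instance_name, status from gv$instance;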

Good luck and let me know if you have any issues! Keep on RACifying out there!

Good afternoon, Oracle Miracle Workers! It’s database migration time, in preparation for upgrades to 12.1.0.2 or maybe 12.2 on premise. No, I’m not ready to go to the cloud. Sorry, Oracle, hold your horses. So here’s how we are doing it.

  1. Prepare the Primary Database:

  • Open a SQL*Plus session and enable FORCE LOGGING mode:
SQL> select force_logging from v$database;
NO
SQL> alter database force logging;
  • Create Standby Redo Logs: Note: When you clone the database for standby, these redo logs will automatically be created on the standby database.
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' size 50M; --these should match the size of your online redo logs. I created one more than the number of my online redo logs (5).
  • Set the LOG_ARCHIVE_DEST_1 init Parameter to use the FRA as the first archive log location:
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
  • Set the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 init parameters on the primary db:
SQL> alter system set log_archive_config = 'dg_config=(PRIMARY_db,STANDBY_db)';

SQL> alter system set log_archive_dest_2='SERVICE=STANDBY_DB NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY_DB';
  • Turn on ARCHIVELOG mode if needed; this requires a restart (see the sketch after this step).
SQL> select log_mode from v$database;

ARCHIVELOG
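If the query returns NOARCHIVELOG, the standard sequence to enable it is the one below (the same sequence appears again later in this post); plan for the brief outage:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;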

  2. Set up the tnsnames and listener entries.

This, to me, is always the trickiest step. In my test case, I am creating a single instance standby from a 2 node RAC primary database; I will make it a RAC at the end. A static listener entry is required on the standby server to accommodate the clone, and sample listener and tnsnames entries follow.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_NAME=standby_db)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
      (SID_NAME=standby_db)
    )
  )
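The tnsnames.ora entries on both servers must resolve both databases; the names below line up with the PRIMARY_db/STANDBY_db placeholders used in the parameters above, while the hostnames (primary01, standby01) and port 1521 are assumptions to adjust for your listeners:

PRIMARY_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = primary_db))
  )

STANDBY_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby01)(PORT = 1521))
    (CONNECT_DATA = (SID = standby_db))
  )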
  3. Make a pfile from the primary database and copy it to the standby server:

SQL> create pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora' from spfile;

Prepare it as if you were doing a clone, but reverse the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 settings. Ensure db_unique_name is set to the standby database name, and ensure db_name is the same as the primary's db_name.

  4. Prepare the Standby Database:

    • Copy the password file and wallet entries from the primary to the standby server
    • Create an entry in /etc/oratab using the db_unique_name (standby_db).
  5. Set your environment to the standby database and start up nomount using your new pfile.

SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora';
SQL> exit
  6. RMAN Steps.

I utilized CommVault for the backup and duplicate steps. It takes the fun out of it, but I highly recommend it if it is at your disposal.

  • Take a full backup and use the backup to do a duplicate to standby:
run {
backup database plus archivelog;
backup current controlfile for standby;  
}
  • Optionally, run it again for archive logs only.
run {
sql "Alter System Archive Log Current";
Backup ArchiveLog all ;
Backup current controlfile for standby;
}
  • Copy those files to a shared file system or a file system with the same structure.
  • Run the Duplicate to Standby:
connect target sys/pw@<primarydb>
connect auxiliary sys/pw@<standby>
run {
set until sequence <seqno> thread <thrdno>;
duplicate target database for standby
dorecover ;
     }
  7. Verify your standby is set up and applying logs (if the apply process is not yet running, see the sketch after the query below).

    • Log in to the standby and query v$database:
  SQL> select db_unique_name, primary_db_unique_name, database_role, switchover_status, open_mode, log_mode, force_logging from v$database;
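The duplicate leaves the standby mounted, but it does not necessarily start redo apply. If logs are not being applied yet, a minimal sketch for starting (and later stopping) managed recovery is:

SQL> alter database recover managed standby database disconnect from session;

-- to stop it again later:
SQL> alter database recover managed standby database cancel;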
  8. Perform a log switch on the primary database:

SQL> alter system switch logfile;

  9. Query the standby to see if archived logs are being applied.

SQL> select sequence#, first_time, next_time, applied from v$archived_log order by 1;

My first two attempts failed due to incorrect tnsnames settings. Once I got that settled, it worked famously. In the near future, I will post about setting up a Snapshot Standby and converting the DB to RAC.


Good afternoon, Oracle Olympians! Since I will be out of town for Memorial Day, I thought I would publish a little lunchtime special in lieu of a Friday post. If you are looking to hit a home run on your next Oracle RAC job interview, study these commands. This is the basic list of commands that I have used. I make it a practice not to blog about things I have never used. Here you go! Enjoy!

To run these commands, you need to be logged in as the oracle user and your environment needs to be set to a database instance.

$ srvctl stop database -d dbname                   # shutdown normal

$ srvctl start database -d dbname                  # startup normal

$ srvctl stop database -d dbname -o immediate -f   # shutdown immediate

$ srvctl start instance -d dbname -i db1           # start an individual instance

$ srvctl status instance -d dbname -i db1          # check status of an individual instance

$ srvctl remove database -d dbname                 # database must be shut down first

$ srvctl add database -d dbname -o /u01/app/oracle/product/11.2.0.4/dbhome_1   # add database to srvctl

$ srvctl add instance -d dbname -i db1 -n srvrname01   # add instance 1 (node 1) to the database in srvctl

$ srvctl add instance -d dbname -i db2 -n srvrname02   # add instance 2 (node 2) to the database in srvctl

$ srvctl remove instance -d dbname -i db1          # removes instance 1

$ srvctl remove instance -d dbname -i db2          # removes instance 2

$ srvctl disable instance -d dbname -i "db1,db2,db3,etc"   # disables automatic startup of the listed instances by Oracle Clusterware

$ srvctl enable instance -d dbname -i "db1,db2,db3,etc"    # re-enables automatic startup of the listed instances by Oracle Clusterware

$ srvctl config database -d dbname                 # lists configuration information for the specified database; run without -d to list all databases registered with srvctl
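To show how these fit together, here is a typical rolling-maintenance sequence for node 1, sketched with the same placeholder names (dbname, db1) used above:

$ srvctl stop instance -d dbname -i db1 -o immediate   # take instance 1 down for maintenance
$ srvctl status database -d dbname                     # confirm instance 2 is still running
$ srvctl start instance -d dbname -i db1               # bring instance 1 back
$ srvctl status database -d dbname                     # verify both instances are up again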

Thanks for reading! Please send me your questions or comments. Otherwise, have a great Memorial Day Weekend!

Jason

Please note: The following steps outline the process for cloning a RAC DB to a single instance version (for the purpose of a test restore) using CommVault 11. I will also present the steps to do an in-place restore (RAC to RAC) in my next update.

Good morning, Oracle Fans! Before we start I want to thank Tim Dexter, a fellow Oracle DBA who walked me through this process the first time I executed it. We are who we are because of the help of senior team members and mentors who took their time to train the next generation.

For those of you who read my last post about the CommVault upgrade to version 11, there were some complications. The one that was solved was how to handle SYSDBA passwords that have special characters when running RMAN Backup. Once that was solved, we needed to ensure a restore would work. How much is a backup worth if you can’t restore it? You guessed it. It is worthless.

We discovered that the same databases that were failing to back up because of special characters were also failing to restore. After some more work by the CommVault development team, they presented a solution for this as well; there is a separate set of code for the restore piece of CommVault.

On to today’s subject: Making a Test Restore of a RAC DB Using CommVault and Putty SSH Session

For those of you who think working with a GUI like CommVault makes RMAN backups and restores a snap and you can exit out of your SSH session, think again. The setup within your SSH Unix session will take quite a bit of prep work the first few times you try this.

We are going to make a clone of one of the smaller databases on the server.

  1. Prepare to copy the primary database to the clone I'll make.
    1. Save the oratab with my clone database added.
      1. As Oracle, $ vi /etc/oratab
      2. Add a line with your restore instance name.
      3. $ cat /etc/oratab   # to see if your new instance is there
    2. Change your environment to the new instance, then move to the dbs directory:
       $ . oraenv    (enter the new instance name when prompted)
       $ cd $ORACLE_HOME/dbs
    3. Make a pfile from the working instance you are going to clone.
      1. SQL> create pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initnewinstance.ora' from spfile;
    4. Edit initnewinstance.ora to make your cloning pfile.
      1. :set ignorecase   (makes searching easier)
      2. Then erase double underscore lines in the pfile.
      3. set cluster_database=false
      4. Set control_files so only the disk groups are set
        1. for example: *.control_files='+DATA_XDT1','+RECO_XDT1'
      5. Set db_recovery_file_dest_size, pga_aggregate_target, and sga_target to appropriate sizes.
      6. Set *.remote_listener to appropriate location.
      7. Change use_large_pages='ONLY' to use_large_pages='TRUE' so the instance uses large pages if available but doesn't fail if none are available.
      8. Change all 'clone_db' strings to 'recover_db'
        1. At the vi colon prompt, like so:   :%s#clonedb#recoverdb#g
      9. Save the new pfile as initrecoverdb.ora.
      10. If you have these backup settings for improved tape backup, keep them.
        1. *._backup_disk_bufcnt=64
        2. *._backup_disk_bufsz=1048576
        3. *._backup_file_bufcnt=64
        4. *._backup_file_bufsz=1048576
        5. *.backup_tape_io_slaves=TRUE
    5. Edit tnsnames.ora with vi and add the new instance (a sample entry is sketched after the checks below).

:set ignorecase

Search for clonedb:   /clonedb

Copy the single instance "clone_db" entry, paste it back, and edit the copy to create recover_db.

Once the edits are done, save the file, then do a tnsping for recover_db.

A successful tnsping shows the TNS network side is ready.

Now check for the existence of $ORACLE_BASE/admin/recover_db/adump and $ORACLE_HOME/admin/recover_db/wallet, and verify that the wallet directory contains both the encrypted wallet (ewallet.p12) and the auto-login wallet (cwallet.sso).
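For reference, the pasted-and-edited tnsnames.ora entry ends up looking roughly like this; the hostname (recover-host) and port 1521 are placeholders to match your listener, and the SID matches the static listener entry added below:

RECOVER_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = recover-host)(PORT = 1521))
    (CONNECT_DATA = (SID = recoverdb))
  )

$ tnsping recover_db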

  2. Neither exists, so create both and populate the wallet from our target database (to use RMAN terminology):

mkdir -p $ORACLE_BASE/admin/recover_db/adump

The "-p" flag tells mkdir to create the full path you supply, including any missing parent directories.

mkdir -p /u01/app/oracle/product/11.2.0.4/dbhome_1/admin/recover_db/wallet

$ cp cwallet.sso /u01/app/oracle/product/11.2.0.4/dbhome_1/admin/recover_db/wallet

$ cp ewallet.p12 /u01/app/oracle/product/11.2.0.4/dbhome_1/admin/recover_db/wallet

  3. As GRID, add an entry to listener.ora:

(SID_DESC =
  (GLOBAL_NAME=recoverdb)
  (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
  (SID_NAME=recoverdb)
)

cd $ORACLE_HOME/network/admin

Edit listener.ora as shown above, then reload the listener and verify the new entry:

lsnrctl status

lsnrctl reload

lsnrctl status

After verifying the directories exist and successfully doing a tnsping of the connection descriptor, we're ready to start the auxiliary database in nomount mode.

  4. Start SQL*Plus with your environment set to recover_db.

    Now start the database in nomount mode using the pfile we created: SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initrecoverdb.ora'

Exit to the Unix prompt as we’re done with Unix preparations.

Now come the CommVault steps.

Here we go…

    1. Discover the recovery instance:
      1. Navigate to the single instance server: xdt7 -> Oracle -> All Tasks -> Discover Instance
        1. Discovered instance! Refresh the screen to see the newly discovered instance.
  • We are duplicating target database “clone_db” to auxiliary database “recover_db”.


  2. Let's display the target and list its backups from its Backup History view:

clone_db -> View -> Backup History

  3. Use the advanced option to show All operations, Include Aged Data and Include Backup Copy as NAS Snapshot Catalog Jobs.
  4. Choose OK at each subsequent screen until the Backup Job History is displayed.
  5. Double-click the latest successful backup and copy and record its:
    1. Job id: 554999 (**Example)
    2. Start Time:
    3. End Time:
    4. SCN Number(S): I use the lower SCN number if there are two displayed.
  6. You're now ready to begin setting up a clone restore of a backup.
  7. Right click on the backup job # (i.e. 554999 from above)
    1. Choose “Browse and Restore”
    2. Next click “View Content” push button control at bottom of the Browse and Restore Options screen.
    3. The tablespaces available for the database selected in the left-hand pane are displayed in the right pane.
    4. Click the left-hand pane's checkbox control to select the entire content.
    5. Then “Recover All Selected…”

The “Oracle Restore Options Server_name-clone_db SID: clone_db” screen appears.

Note that it shows the Destination Client as Server_name-clone_db, which will change after you return from the "Advanced" options on this screen, which we'll do now:

  1. Select “Duplicate DB”. Now we get 12 tab controls on the Oracle Restore Options screen.
  2. Click on the Advanced push button control. Tab controls appear on the Oracle Advanced Restore Options screen.
  3. Choose the “Duplicate” tab control.
  4. Click its "Duplicate To" checkbox control, and its options become enabled.
  5. In the "Database Name" textbox control, enter the auxiliary database name (i.e. the clone you're making): recover_db.
  6. Now move to the "Recover" tab control, then activate its SCN radio button and specify the SCN number we saved in our preparation work.
  7. Change to the Duplicate DB Options tab control and check its "No Filename Check" checkbox control.
  8. Finally, activate the "Redirect" tab control and uncheck its "Redirect" checkbox control. The remaining tab controls aren't used for this operation, but you can look at them.

To summarize, the key tab controls were these: Duplicate, Recover, Duplicate DB Options and Redirect.

Now choose “OK” to accept

Notice that our “Destination Client” was automatically filled in after completion of the advanced push button control dialogs.

  1. Now choose the "Number of streams to use for restore", which I usually set to 4, then accept all entries by clicking "OK" at the bottom of the "Oracle Restore Options" screen.

It immediately gives a warning: "Data from clone_db on server_name-clone_db will be restored on to recover_db on server_name. All data in recover_db will be overwritten. Are you sure you want to continue?"

  1. Take heed of the warning and carefully ensure you want to do this.
  2. Let the restore begin.
  3. Once it completes, you should be able to open the database from the SQL*Plus prompt and test whether the restore is complete (a quick verification sketch follows).
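A minimal post-restore sanity check from SQL*Plus might look like the following; nothing here is CommVault-specific, just the standard dynamic views:

SQL> select instance_name, status from v$instance;
SQL> select name, open_mode from v$database;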

At this point, I took the time to delete this recovery instance. The steps included removing the listener.ora entry (as GRID), removing its TNS descriptor from tnsnames.ora, and removing its oratab entry. Then I shut down the database, carefully disposed of the datafiles in ASM, and removed its static listener entries.

Thanks for reading! Again, I greatly appreciate Tim Dexter for his expertise in CommVault and RMAN Duplicate.

Take care,

Jason

Good morning, Oracle addicts!

I have been promoting the use of OEM12C as a hands-free, easy approach to configure and create Oracle Data Guard on your most important databases. Yesterday, in a "round table" interview with several other DBAs, my bluff was called. One individual asked me what steps I would take to configure both the standby and the primary database for standby database creation if I couldn't use OEM. Without access to my notes and the latest Oracle documentation on the subject, I was unprepared to answer the question. As I have stated before, if it can be done better with OEM12C, I am going to do it with 12C. One side effect of this philosophy is that we can lose the valuable knowledge that is gained by doing it the old-fashioned way. So I am going to answer his question on my blog, albeit a few days after the fact.

The below link is Oracle’s Documentation on the subject. It is a great read but it is very lengthy.

http://docs.oracle.com/cd/E11882_01/server.112/e41134/create_ps.htm#SBYDB00200

Oracle® Data Guard Concepts and Administration
11g Release 2 (11.2)

I am also referencing an article on Oracle-Base that is much more user friendly. If I had to use a shell script, just in case OEM12C was not up to the task, this is where I would begin.

http://www.oracle-base.com/articles/11g/data-guard-setup-11gr2.php

The reference below shows how to set up Data Guard with OEM 11g Grid Control. Let's face it, folks, the future is now. Using OEM is the preferred method if you have any kind of deadline. With OEM12C, you don't even need a graphical, step-by-step guide. Just navigate to the database you want to create a standby for, then Availability, then Add Standby Database. It will guide you through the process.

http://www.oracle-base.com/articles/11g/data-guard-setup-using-grid-control-11gr2.php

However, I know there is still a need to know how to do it manually, with a shell script, so I am summarizing Tim Hall of Oracle-Base as well. This is not to be confused with Tim Gorman of EVDBT, whom I refer to frequently.

Requirements: Before you begin, there are a few requirements that need to be addressed.

  1. You must have two physical or virtual servers with an operating system and the Oracle Database software installed. I am currently using Linux (SLES 11) and Oracle 11.2.0.3.
  2. The Primary server must have a running instance.
  3. The Standby server must, at a minimum, have a software only installation.

Set up the Primary Server:

  • Logging

Ensure that the primary database is in archivelog mode. If it is in noarchivelog mode, switch to archivelog mode.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
  • Initialization Parameters
  1. Check the settings for DB_NAME and DB_UNIQUE_NAME
SQL> show parameter db_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      DBTST

SQL> show parameter db_unique_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      DBTST
SQL>


2. The DB_NAME will be the same for the primary and standby databases, but they must have different DB_UNIQUE_NAME values. Register both unique names in the Data Guard configuration:

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DBTST,DBTST_STBY)';

3. Set remote archive log destinations.

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=dbtst_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DBTST_STBY';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

4. Set LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_MAX_PROCESSES parameters.

ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30;
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;

5. It is recommended that you make sure the primary is ready to switch roles and become a standby.

ALTER SYSTEM SET FAL_SERVER=DBTST_STBY;
--ALTER SYSTEM SET DB_FILE_NAME_CONVERT='DBTST_STBY','DBTST' SCOPE=SPFILE;
--ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='DBTST_STBY','DBTST' SCOPE=SPFILE;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
  • Service Setup

Make entries in the tnsnames.ora files on both servers.

DBTST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbtst01)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DBTST.WORLD)
    )
  )

DBTST_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbstdby01)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DBTST.WORLD)
    )
  )
  • Backup Primary Database
$ rman target=/
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
  • Create Standby Controlfile and Pfile
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/dbtst_stby.ctl';
CREATE PFILE='/tmp/initDBTST_stby.ora' FROM SPFILE;

Amend the PFILE, making the entries relevant for the standby database:

*.db_unique_name='DBTST_STBY'
*.fal_server='DBTST'
*.log_archive_dest_2='SERVICE=dbtst ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DBTST'

Set up the Standby Server (Manual)

  • Copy Files
  1. Create the directories on the standby server and copy the files to the standby server.
$ mkdir -p /u01/app/oracle/oradata/DBTST
$ mkdir -p /u01/app/oracle/fast_recovery_area/DBTST
$ mkdir -p /u01/app/oracle/admin/DBTST/adump

$ # Standby controlfile to all locations.
$ scp oracle@dbtst01:/tmp/dbtst_stby.ctl /u01/app/oracle/oradata/DBTST/control01.ctl
$ cp /u01/app/oracle/oradata/DBTST/control01.ctl /u01/app/oracle/fast_recovery_area/DBTST/control02.ctl

$ # Archivelogs and backups
$ scp -r oracle@dbtst01:/u01/app/oracle/fast_recovery_area/DBTST/archivelog /u01/app/oracle/fast_recovery_area/DBTST
$ scp -r oracle@dbtst01:/u01/app/oracle/fast_recovery_area/DBTST/backupset /u01/app/oracle/fast_recovery_area/DBTST

$ # Parameter file.
$ scp oracle@dbtst01:/tmp/initDBTST_stby.ora /tmp/initDBTST_stby.ora

$ # Remote login password file.
$ scp oracle@dbtst01:$ORACLE_HOME/dbs/orapwDBTST $ORACLE_HOME/dbs
  • Start the listener on the standby server
$ lsnrctl start
  • Restore Backup by first creating an SPFILE from the amended PFILE
$ export ORACLE_SID=DBTST
$ sqlplus / as sysdba

SQL> CREATE SPFILE FROM PFILE='/tmp/initDBTST_stby.ora';

$ export ORACLE_SID=DBTST
$ rman target=/

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
  • Create Redo Logs
  1. Create online redo logs for the standby database.
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DBTST/online_redo01.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DBTST/online_redo02.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DBTST/online_redo03.log') SIZE 50M;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

2. Create standby redo logs on both the standby and the primary database to account for switchovers.

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo04.log') SIZE 50M;

Start Apply Process

  • Start the apply process on the standby server
# Foreground redo apply. Session never returns until cancel. 
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

# Background redo apply. Control is returned to the session once the apply process is started.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
  • If you need to cancel the apply process, issue the following command.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
  • You can set a delay between the arrival of an archived redo log and it being applied on the standby server, and remove that delay again, using...
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30 DISCONNECT FROM SESSION;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;
  • Start real-time apply using…

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

Test Log Transport

  • Check the latest archived redo log on the primary server and force a log switch
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time
FROM   v$archived_log
ORDER BY sequence#;

ALTER SYSTEM SWITCH LOGFILE;

  • Check that the new archived redo log has arrived at the standby server and has been applied; a query for this is sketched below.
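A minimal check on the standby side reuses the same v$archived_log view queried on the primary, with the APPLIED column showing whether each log has been applied:

SELECT sequence#, first_time, next_time, applied
FROM   v$archived_log
ORDER BY sequence#;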

Choose Your Protection Mode

  • Maximum Protection: This mode ensures that zero data loss occurs if a primary database fails. **Because this mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used with this protection mode.**
  • Maximum Performance: This is the default mode for a standby database. This mode provides the highest level of data protection possible without affecting the performance of the primary database.
  • Maximum Availability: This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database.
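If you decide to change from the default, the mode is switched on the primary with a single statement. The example below uses Maximum Availability; note that moving up to Maximum Protection typically requires the primary to be mounted rather than open when you switch:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;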

 Congratulations!! If you have made it this far, you have created your first standby database. As always, save your script in your filesystem for future use.

At this point, it is advisable to allow OEM to discover your new target or targets. Below is how to perform a switchover or failover manually; only a SYSDBA should have the requisite privileges to perform these operations.

To perform Switchover manually…

-- Convert primary database to standby
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

-- Shutdown primary database
SHUTDOWN IMMEDIATE;

-- Mount old primary database as standby database
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

On the original standby database…

-- Convert standby database to primary
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Shutdown standby database
SHUTDOWN IMMEDIATE;

-- Open old standby database as primary
STARTUP;

In the event of a failover, the primary database may not be available. To activate the standby database as a primary database…

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;

Immediately back up the new primary database.

It is also recommended that you enable flashback on the primary database. In the case of a failover, the failover renders the original primary useless for converting to a standby database. If flashback is enabled, the old primary can be flashed back to the time before the failover and quickly converted to a standby database; a sketch of enabling flashback follows.
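Enabling flashback is a one-time change on the primary; it does assume a fast recovery area is configured, which this setup already uses:

SQL> ALTER DATABASE FLASHBACK ON;
SQL> SELECT flashback_on FROM v$database;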

Again, Tim Hall at Oracle-Base and the Oracle documentation are the primary resources for this article. Having said that, I am here to help wherever possible. I blog frequently and check this blog for feedback daily. Thank you to those who challenged me to research the non-OEM way to set up Data Guard. It has made me a better DBA and, hopefully, it will make you a better DBA as well.

Thanks for reading!!!

~Jason