Archive for the ‘Database Administration’ Category

Good afternoon, Oracle Miracle Workers! It’s database migration time, in preparation for upgrades to 12.1.0.2 or maybe 12.2 on-premises. No, I’m not ready to go to the cloud. Sorry, Oracle, hold your horses. So here’s how we are doing it.

  1. Prepare the Primary Database:

  • Open a SQL*Plus session and enable FORCE LOGGING mode:
SQL> select force_logging from v$database;
NO
SQL> alter database force logging;
  • Create standby redo logs. Note: When you clone the database for standby, these redo logs will automatically be created on the standby database.
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' size 50M; --these should match the size of your online redo logs. I created one more than the number of my online redo logs (5).
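If you want to confirm the size and count of your online redo logs before adding the standby logs, a quick query against v$log (standard columns) does the trick:

```
SQL> select group#, thread#, bytes/1024/1024 as size_mb from v$log order by group#;
```

Size the standby redo logs to match size_mb, with at least one more group per thread than your online logs.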
  • Set the LOG_ARCHIVE_DEST_1 init Parameter to use the FRA as the first archive log location:
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
  • Set the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 init parameters on the primary db:
SQL> alter system set log_archive_config = 'dg_config=(PRIMARY_db,STANDBY_db)';

SQL> alter system set log_archive_dest_2='SERVICE=STANDBY_DB NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY_DB';
  • Turn on ARCHIVELOG mode if needed. This requires a shutdown.
 SQL> select log_mode from v$database;

ARCHIVELOG

  2. Set up the tnsnames and listener entries.

This, to me, is always the trickiest step. In my test case, I am creating a single instance standby from a 2 Node RAC Primary Database. I will make it a RAC at the end. A static listener entry is required on the standby server to accommodate the clone.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME=standby_db)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
      (SID_NAME=standby_db)
    )
  )
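For reference, the tnsnames.ora entries on both servers might look like the following sketch; the hostnames, ports, and service names here are placeholders, so substitute your own:

```
PRIMARY_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = primary_db))
  )

STANDBY_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = standby_db))
  )
```

A tnsping of each alias from both servers confirms the descriptors resolve before you attempt the clone.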
  3. Make a pfile from the primary database and copy it to the standby server:

SQL> create pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora' from spfile;

Prepare it as if you were doing a clone, but reverse the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 settings. Ensure DB_UNIQUE_NAME is set to the standby database name and ensure DB_NAME is the same as the primary DB_NAME.
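As a sketch, the Data Guard parameters in the standby pfile might end up looking like this (using the placeholder names PRIMARY_db and STANDBY_db from the earlier steps; your names and paths will differ):

```
*.db_name='PRIMARY_db'
*.db_unique_name='STANDBY_db'
*.log_archive_config='dg_config=(PRIMARY_db,STANDBY_db)'
*.log_archive_dest_2='SERVICE=PRIMARY_db NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY_db'
```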

  4. Prepare the Standby Database:

    • Copy the password file and wallet entries from the primary to the standby server.
    • Create an entry in /etc/oratab using the db_unique_name (standby_db).
  5. Set your environment to the standby database and startup nomount using your new pfile.

SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora';
SQL> exit
  6. RMAN Steps.

I utilized Commvault for the backup and duplicate steps. It takes the fun out of it, but I highly recommend it if it is at your disposal.

  • Take a full backup and use the backup to do a duplicate to standby:
run {
backup database plus archivelog;
backup current controlfile for standby;  
}
  • Optionally, run it again archivelog only.
run {
sql "Alter System Archive Log Current";
Backup ArchiveLog all ;
Backup current controlfile for standby;
}
  • Copy those files to a shared file system or a file system with the same structure.
  • Run the Duplicate to Standby:
connect target sys/pw@<primarydb>
connect auxiliary sys/pw@<standby>
run {
set until sequence <seqno> thread <thrdno>;
duplicate target database for standby
dorecover;
}
  7. Verify your standby is set up and applying logs.

    • Log in to the standby and query v$database:
SQL> select db_unique_name, primary_db_unique_name, database_role, switchover_status, open_mode, log_mode, force_logging from v$database;
  8. Perform a log switch on the primary database:

SQL> alter system switch logfile;

  9. Query the standby to see if archived logs are being applied.

SQL> select sequence#, first_time, next_time, applied from v$archived_log order by 1;

My first two attempts failed due to incorrect tnsnames settings. Once I got that settled, it worked famously. In the near future, I will post about setting up a Snapshot Standby and converting the DB to RAC.
Good afternoon, Oracle Olympians! Since I will be out of town for Memorial Day, I thought I would publish a little lunchtime special in lieu of a Friday post. If you are looking to hit a home run on your next Oracle RAC job interview, study these commands. These are the basic list of commands that I have used. I make it a practice to not blog about things I have never used. Here you go! Enjoy!

To run these commands, you need to be the oracle OS user and your environment needs to be set to a database instance.

$ srvctl stop database -d dbname --shutdown normal

$ srvctl start database -d dbname --startup normal

$ srvctl stop database -d dbname -o immediate -f --shutdown immediate

$ srvctl start instance -d dbname -i db1 --start an individual instance

$ srvctl status instance -d dbname -i db1 --check the status of an individual instance

$ srvctl remove database -d dbname --database must be shut down

$ srvctl add database -d dbname -o /u01/app/oracle/product/11.2.0.4/dbhome_1 --add database to srvctl

$ srvctl add instance -d dbname -i db1 -n srvrname01 --add instance 1 on node 1 to the database in srvctl

$ srvctl add instance -d dbname -i db2 -n srvrname02 --add instance 2 on node 2 to the database in srvctl

$ srvctl remove instance -d dbname -i db1 --removes instance 1

$ srvctl remove instance -d dbname -i db2 --removes instance 2

$ srvctl disable instance -d dbname -i "db1,db2,db3,etc" --disables management by Oracle Restart

$ srvctl enable instance -d dbname -i "db1,db2,db3,etc" --enables management by Oracle Restart

$ srvctl config database -d dbname --lists configuration information for the specified database; without -d, it lists all databases managed by srvctl

Thanks for reading! Please send me your questions or comments. Otherwise, have a great Memorial Day Weekend!

Jason

Please note: The following steps outline the process for cloning a RAC DB to a single-instance version (for the purpose of a test restore) using CommVault 11. I will also present the steps to do an in-place restore (RAC to RAC) in my next update.

Good morning, Oracle Fans! Before we start I want to thank Tim Dexter, a fellow Oracle DBA who walked me through this process the first time I executed it. We are who we are because of the help of senior team members and mentors who took their time to train the next generation.

For those of you who read my last post about the CommVault upgrade to version 11, there were some complications. The one that was solved was how to handle SYSDBA passwords that have special characters when running RMAN Backup. Once that was solved, we needed to ensure a restore would work. How much is a backup worth if you can’t restore it? You guessed it. It is worthless.

We discovered that the same databases that were failing to back up because of special characters were also failing to restore. After some more work by the CommVault development team, they presented a solution for this as well. There is a separate set of code for the restore piece of CommVault.

On to today’s subject: Making a Test Restore of a RAC DB Using CommVault and Putty SSH Session

For those of you who think working with a GUI like CommVault makes RMAN backups and restores a snap and you can exit out of your SSH session, think again. The setup within your SSH Unix session will take quite a bit of prep work the first few times you try this.

We are going to make a clone of one of the smaller databases on the server.

  1. Copy the primary database to the clone I’ll make.
    1. Save the oratab with my clone database added.
      1. As Oracle, $ vi /etc/oratab
      2. Add a line with your restore instance name.
      3. $ cat /etc/oratab --to see if your new instance is there.
    2. Change your environment to the new instance: $ . oraenv (enter new_instance), then $ cd $ORACLE_HOME/dbs
    3. Make a pfile from the working instance you are going to clone.
      1. SQL> create pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initnewinstance.ora' from spfile;
    4. Edit initnewinstance.ora to make your cloning pfile.
      1. :set ignorecase --makes it easier for searching
      2. Then erase the double-underscore lines in the pfile.
      3. Set cluster_database=false
      4. Set control_files so only the disk groups are set
        1. for example: *.control_files='+DATA_XDT1','+RECO_XDT1'
      5. Set db_recovery_file_dest_size, pga_aggregate_target, and sga_target to appropriate sizes
      6. Set *.remote_listener to the appropriate location.
      7. Change use_large_pages='ONLY' to ='TRUE' so it uses them if available but doesn't fail if none are available.
      8. Change all strings 'clone_db' to become 'recover_db'
        1. At the vi colon prompt like so:   :%s#clone_db#recover_db#g
      9. Save the new pfile as initrecoverdb.ora.
      10. If you have these backup settings for improved tape backup, keep them.
        1. *._backup_disk_bufcnt=64
        2. *._backup_disk_bufsz=1048576
        3. *._backup_file_bufcnt=64
        4. *._backup_file_bufsz=1048576
        5. *.backup_tape_io_slaves=TRUE
    5. Edit tnsnames.ora with vi and add the new instance.

:set ignorecase

Search for clone_db: /clone_db

Copy the single-instance "clone_db" entry to make recover_db, then paste it back to create it.

Once the edits are done, save the file, then do a tnsping for recover_db.

A successful tnsping shows the TNS network connection is ready.

Now check for the existence of $ORACLE_BASE/admin/recover_db/adump and $ORACLE_HOME/admin/recover_db/wallet; further, verify that the wallet directory has both the encrypted wallet (ewallet.p12) and the auto-login wallet (cwallet.sso).

  1. If neither exists, create both and populate the wallet from our target database (to use RMAN terminology)

mkdir -p $ORACLE_BASE/admin/recover_db/adump

The "-p" tells mkdir to create any missing parent directories in the path you supply to it.

mkdir -p /u01/app/oracle/product/11.2.0.4/dbhome_1/admin/recover_db/wallet

$ cp cwallet.sso /u01/app/oracle/product/11.2.0.4/dbhome_1/admin/recover_db/wallet

$ cp ewallet.p12 /u01/app/oracle/product/11.2.0.4/dbhome_1/admin/recover_db/wallet

    1. As GRID, add an entry to listener.ora:

cd $ORACLE_HOME/network/admin

and edit the file, adding:

(SID_DESC =
  (GLOBAL_DBNAME=recoverdb)
  (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
  (SID_NAME=recoverdb)
)

Then reload and verify the listener:

lsnrctl status

lsnrctl reload

lsnrctl status

After testing the directories for existence plus doing tnsping for the connection descriptor successfully, we’re ready to start the auxiliary database in nomount mode.

  1. Start SQL*Plus with the environment set to recover_db

    Now start the database in nomount mode using the pfile we created. SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initrecoverdb.ora'

Exit to the Unix prompt as we’re done with Unix preparations.

Now come the CommVault steps.

Here we go…

    1. Discover the recovery instance:
      1. Navigate to the single-instance server: xdt7 -> Oracle -> All Tasks -> Discover Instance
        1. Discovered instance!! Refresh the screen to see the newly discovered instance.
  • We are duplicating target database "clone_db" to auxiliary database "recover_db".
  1. Let’s display the target and list its backups from its backup history view

clone_db -> View -> Backup History

  1. Use the advanced option to show All Operations, Include Aged Data, and Include Backup Copy as NAS Snapshot Catalog Jobs.
  2. Click OK at each subsequent screen until the Backup Job History is displayed.
  3. Double-click the latest successful backup and copy and record its:
    1. Job id: 554999 (**Example)
    2. Start Time:
    3. End Time:
    4. SCN number(s): I use the lower SCN number if there are two displayed.
  4. You're now ready to begin setting up a clone restore of a backup.
  5. Right-click on the backup job # i.e. (554999)
    1. Choose "Browse and Restore"
    2. Next click the "View Content" push button control at the bottom of the Browse and Restore Options screen.
    3. It displays the tablespaces available in the right pane for the database selected in the left-hand pane.
    4. Click the left-hand pane's checkbox control to select the entire content.
    5. Then "Recover All Selected…"

The "Oracle Restore Options Server_name-clone_db SID: clone_db" screen appears.

Note that it shows the Destination Client as Server_name-clone_db, which will change after you return from the "Advanced" options on this screen, which we'll do now:

  1. Select "Duplicate DB". Now we get 12 tab controls on the Oracle Restore Options screen.
  2. Click on the Advanced push button control. Tab controls appear on the Oracle Advanced Restore Options screen.
  3. Choose the "Duplicate" tab control.
  4. Click its "Duplicate To" checkbox control, and its options become enabled.
  5. In the "Database Name" textbox control, enter the auxiliary database name (i.e. the clone you're making): recover_db.
  6. Now move to the "Recover" tab control, then activate its SCN radio button and specify the SCN number we saved in our preparation work.
  7. Change to the Duplicate DB Options tab control and check its "No Filename Check" checkbox control.
  8. Finally, activate the "Redirect" tab control and uncheck its "Redirect" checkbox control. The remaining tab controls aren't used for this operation, but you can look at them.

To summarize, the key tab controls were these: Duplicate, Recover, Duplicate DB Options and Redirect.

Now choose “OK” to accept

Notice that our “Destination Client” was automatically filled in after completion of the advanced push button control dialogs.

  1. Now choose "Number of streams to use for restore", which I usually set at 4. Next, accept all entries by clicking "OK" at the bottom of the "Oracle Restore Options" screen.

It immediately gives a warning that, "Data from clone_db on server_name-clone_db will be restored on to recover_db on server_name. All data in recover_db will be overwritten. Are you sure you want to continue?"

  1. Take heed of the warning and carefully ensure you want to do this.
  2. Let the restore begin.
  3. Once it completes, you should be able to open the database from the SQL*Plus prompt and test whether the restore is complete.

At this point, I took the time to delete this recovery instance. Some of the steps I took were to remove the listener.ora entry (as Grid), remove its TNS descriptor from tnsnames.ora, and remove its oratab entry. Then I shut down the database, carefully disposed of the datafiles in ASM, and removed its static listener entries.

Thanks for reading! Again, I greatly appreciate Tim Dexter for his expertise in CommVault and RMAN Duplicate.

Take care,

Jason

As far as electronics go, I am that guy who upgrades to the latest “free phone.” I am the proud owner of the iPhone 4. I have owned it for two years and didn’t get it until Verizon offered it for free. I probably would not even own a smart phone if my employer didn’t strongly suggest I should be connected to the company email. Of course, now that I have internet on my phone, I can’t seem to function properly without it.

When it comes to the OEM product, don’t be the last one to upgrade to 12C. My last project still utilized the 11G and 10G versions of OEM. One reason they may not have upgraded was a perceived lack of support for 11G software. This is an incorrect assumption, since OEM 12C was released in October 2011, prior to the release of Oracle Database 12c to the general public in 2013. It also supports the use of Oracle Streams.

One of the major flaws I have noticed in OEM 11G is in tablespace management. The summary page for tablespace management displays the total size of the tablespaces and the percentage full. What it omits is the maximum space for individual datafiles or tablespaces. For instance, if you create a datafile with a size of 4GB autoextensible to 32GB, the summary page will show a 4GB datafile. If 3.8GB of data is inserted into the tablespace, the summary page will show the datafile to be 95% full. In reality, the datafile is less than 12% full. If you click on the tablespace, it will show you all of the datafiles, their maximum size, and their allocated size. This can be a time-consuming operation with OEM 11G, as it tends to run slow. It also can create false alerts that might send you emails you don’t really desire. Before adding a new datafile, you need to know the maximum size of the datafile and the free space on the OS file system. OEM 12C addressed this issue and shows maximum size as well as current allocated size. It is much more accurate for tablespace usage alerts.
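To make the numbers concrete, here is the arithmetic from the example above (a hypothetical 4GB datafile, autoextensible to a 32GB MAXSIZE, with 3.8GB of data inserted), sketched in Python:

```python
# Hypothetical datafile from the example above.
allocated_gb = 4.0   # current allocated size
max_gb = 32.0        # autoextend MAXSIZE
used_gb = 3.8        # data actually inserted

reported_pct = used_gb / allocated_gb * 100  # what the OEM 11G summary page shows
actual_pct = used_gb / max_gb * 100          # fullness against the true maximum

print(f"reported: {reported_pct:.0f}% full, actual: {actual_pct:.1f}% full")
# reported: 95% full, actual: 11.9% full
```

That gap between 95% and under 12% is exactly why the 11G summary page fires false alerts on autoextensible datafiles.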

I searched for and created a simple query of dba_data_files to use in conjunction with the tablespace usage summary page on OEM 11G. Other useful queries can be found at https://community.oracle.com/message/1832920


select tablespace_name, sum(bytes)/1024/1024 as tbs_size, sum(maxbytes)/1024/1024 as "Max_Size"
from dba_data_files
group by tablespace_name;


After a while, I only used the dictionary views dba_free_space, dba_segments, and dba_data_files to give me information on the database. The OEM tool was just too slow for regular usage. The reason for this slowness was likely sharing server resources with the database itself. It is a good idea to place the OEM tool on its own server, but this is not always possible due to budget constraints.

Don’t be the last DBA to switch to OEM 12C. It really is a huge improvement over all of its previous versions.

Thanks for reading!!

Jason

Greetings!!

Last week, I was asked a question in an interview. What is the difference between a Hash Join and a Nested Loop Join? I can tell you honestly that I have never set out to create either join and maybe you haven’t either. But pull up an explain plan and you are bound to see both from time to time.

So, to answer this question properly, we must take a look at the Oracle Query Optimizer.

The Oracle Query Optimizer

To choose an execution plan for a join, the optimizer must make these interrelated decisions:

  • Access paths

The optimizer must choose an access path to retrieve data from each table in the join statement.

  • Join method

To join each pair of row sources, Oracle must perform a join operation. Join methods include nested loop, hash, sort merge and cartesian joins.

  • Join order

To execute a statement that joins more than two tables, Oracle joins two of the tables and then joins the resulting row source to the next table. This process is continued until all tables are joined into the result.

How the Query Optimizer Chooses an Execution Plan for Joins

Before an execution plan is chosen, the query optimizer first considers the following…

  • Will the join between two or more tables result in a row source containing at most one row? The optimizer determines this based on UNIQUE and PRIMARY KEY constraints on the tables. If the answer is yes, the optimizer places these tables first in the join order. Then the optimizer will optimize the join on the remaining tables.
  • If an outer join condition exists, the table with the outer join must come after the other table in the condition in the join order. The optimizer will not consider join orders that violate this rule.

The optimizer generates a set of execution plans, according to possible join orders, join methods and available access paths. The optimizer then estimates the cost of each plan and chooses the one with the lowest cost. How is this estimation done?

  • The cost of a nested loop operation is based on the cost of reading each selected row of the outer table and each of its matching rows of the inner table into memory. The optimizer estimates these costs using the statistics in the data dictionary. It behooves you, as the DBA, to ensure your stats are up to date!!
  • The cost of a hash join is based largely on the cost of building a hash table on one of the input sides to the join and using the rows from the other side of the join to probe it. (more on this later)
  • The cost of a sort merge join is based largely on the cost of reading all the sources into memory and sorting them.

There are other factors the optimizer considers, such as sort area size and multiblock read count, but this thread will stick to the joins.

**The optimizer’s choice of join order can be overridden with the ORDERED hint. However, if the ORDERED hint specifies a join order that violates the rule for an outer join, the optimizer will ignore the hint and choose the order itself. You can also override the optimizer’s choice of join method with hints.**

Nested Loop Joins

A nested loop join involves the following steps:

  1. The optimizer determines the driving table and designates it as the outer table.
  2. The other table is designated as the inner table.
  3. For every row in the outer table, Oracle accesses all the rows in the inner table. The outer loop is for every row in the outer table and the inner loop is for every row in the inner table. The outer loop appears before the inner loop in the execution plan, as follows:
    NESTED LOOPS 
      outer_loop 
      inner_loop
    

Nested loop joins are useful when small subsets of data are being joined and if the join condition is an efficient way of accessing the second table.

It is very important to ensure that the inner table is driven from (dependent on) the outer table. If the inner table’s access path is independent of the outer table, then the same rows are retrieved for every iteration of the outer loop, degrading performance considerably. In such cases, hash joins joining the two independent row sources perform better.
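The three steps above can be sketched in a few lines of illustrative Python (this is a conceptual model, not Oracle internals; the sample tables and keys are made up):

```python
def nested_loop_join(outer_rows, inner_rows, outer_key, inner_key):
    """For each row of the driving (outer) table, scan the inner table."""
    joined = []
    for outer in outer_rows:          # outer loop: the driving row source
        for inner in inner_rows:      # inner loop: runs once per outer row
            if outer[outer_key] == inner[inner_key]:
                joined.append({**outer, **inner})
    return joined

# A small driving set keeps the repeated inner scans cheap.
departments = [{"department_id": 10, "department_name": "Sales"}]
employees = [{"department_id": 10, "last_name": "King"},
             {"department_id": 20, "last_name": "Kochhar"}]
rows = nested_loop_join(departments, employees, "department_id", "department_id")
print(rows)  # one matching row: King in Sales
```

In a real plan the inner "scan" is ideally an index probe driven by the outer row's join key, which is what makes the method cheap for small driving sets.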

Consider the following query:

SELECT e.first_name, e.last_name, e.salary, d.department_name
    FROM hr.employees e, hr.departments d
    WHERE d.department_name IN ('Marketing', 'Sales')
      AND e.department_id = d.department_id;

 

New Implementation for Nested Loop Joins

Oracle Database 11g Release 1 (11.1) introduces a new implementation for nested loop joins to reduce overall latency for physical I/O. When an index or a table block is not in the buffer cache and is needed to process the join, a physical I/O is required. In Oracle Database 11g Release 1 (11.1), Oracle Database can batch multiple physical I/O requests and process them using a vector I/O instead of processing them one at a time. As part of the new implementation for nested loop joins, two NESTED LOOPS join row sources might appear in the execution plan where only one would have appeared in prior releases. In such cases, Oracle Database allocates one NESTED LOOPS join row source to join the values from the table on the outer side of the join with the index on the inner side. A second row source is allocated to join the result of the first join, which includes the rowids stored in the index, with the table on the inner side of the join.

Consider the query shown earlier. In Oracle Database 11g Release 1 (11.1), with the new implementation for nested loop joins, the execution plan for this query might appear similar to the following execution plan:

-------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name              | Rows  | Bytes | Cost(%CPU)| Time      |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                   |    19 |   722 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                   |       |       |            |          |
|   2 |   NESTED LOOPS               |                   |    19 |   722 |     3   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | DEPARTMENTS       |     2 |    32 |     2   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | EMP_DEPARTMENT_IX |    10 |       |     0   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |    10 |   220 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("D"."DEPARTMENT_NAME"='Marketing' OR "D"."DEPARTMENT_NAME"='Sales')
   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

In this case, the rows from the hr.departments table constitute the outer side of the first join. The inner side of the first join is the index emp_department_ix. The results of the first join constitute the outer side of the second join, which has the hr.employees table as its inner side.

There are cases where a second join row source is not allocated, and the execution plan looks the same as it did in prior releases. The following list describes such cases:

  • All of the columns needed from the inner side of the join are present in the index, and there is no table access required. In this case, Oracle Database allocates only one join row source.
  • The order of the rows returned might be different than it was in previous releases. Hence, when Oracle Database tries to preserve a specific ordering of the rows, for example to eliminate the need for an ORDER BY sort, Oracle Database might use the original implementation for nested loop joins.
  • The OPTIMIZER_FEATURES_ENABLE initialization parameter is set to a release before Oracle Database 11g Release 1 (11.1). In this case, Oracle Database uses the original implementation for nested loop joins.

When the Optimizer Uses Nested Loop Joins

The optimizer uses nested loop joins when joining a small number of rows, with a good driving condition between the two tables. You drive from the outer loop to the inner loop, so the order of tables in the execution plan is important.

The outer loop is the driving row source. The row source can be a table accessed using an index scan or a full table scan. Also, the rows can be produced by any other operation. For example, the output from a nested loop join can be used as a row source for another nested loop join.

The inner loop is iterated for every row returned from the outer loop, ideally by an index scan. If the access path for the inner loop is not dependent on the outer loop, then you can end up with a Cartesian product; for every iteration of the outer loop, the inner loop produces the same set of rows. Therefore, you should use other join methods when two independent row sources are joined together.

Nested Loop Join Hints

If the optimizer is choosing to use some other join method, you can use the USE_NL(table1 table2) hint, where table1 and table2 are the aliases of the tables being joined. For some SQL examples, the data is small enough for the optimizer to prefer full table scans and use hash joins.

Hash Joins

Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows.

This method is best used when the smaller table fits in available memory. The cost is then limited to a single read pass over the data for the two tables.
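The build/probe mechanics described above can be sketched the same way (again illustrative Python, not Oracle internals; the ORDERS/ORDER_ITEMS sample rows are made up to mirror the plan below):

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Build a hash table on the smaller input, then probe it with the larger."""
    hash_table = defaultdict(list)
    for row in build_rows:                      # one pass over the smaller table
        hash_table[row[build_key]].append(row)
    joined = []
    for row in probe_rows:                      # one pass over the larger table
        for match in hash_table.get(row[probe_key], []):
            joined.append({**match, **row})
    return joined

# ORDERS is the smaller (build) side, ORDER_ITEMS the larger (probe) side.
orders = [{"order_id": 1, "customer_id": 101},
          {"order_id": 2, "customer_id": 102}]
order_items = [{"order_id": 1, "unit_price": 5, "quantity": 2},
               {"order_id": 1, "unit_price": 3, "quantity": 1},
               {"order_id": 2, "unit_price": 4, "quantity": 6}]
rows = hash_join(orders, order_items, "order_id", "order_id")
print(len(rows))  # 3
```

Each input is read exactly once, which is why the cost stays at a single pass over both tables when the build side fits in memory.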

When the Optimizer Uses Hash Joins

The optimizer uses a hash join to join two tables if they are joined using an equijoin and if either of the following conditions is true:

  • A large amount of data needs to be joined.

  • A large fraction of a small table needs to be joined.

The following example was taken (borrowed) from the Oracle Documentation on the Query Optimizer. In this example, the table ORDERS is used to build the hash table, and ORDER_ITEMS is the larger table, which is scanned later.

http://docs.oracle.com/cd/B28359_01/server.111/b28274/optimops.htm#i76073

SELECT o.customer_id, l.unit_price * l.quantity
  FROM orders o ,order_items l
 WHERE l.order_id = o.order_id;

--------------------------------------------------------------------------
| Id  | Operation            |  Name        | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |              |   665 | 13300 |     8  (25)|
|*  1 |  HASH JOIN           |              |   665 | 13300 |     8  (25)|
|   2 |   TABLE ACCESS FULL  | ORDERS       |   105 |   840 |     4  (25)|
|   3 |   TABLE ACCESS FULL  | ORDER_ITEMS  |   665 |  7980 |     4  (25)|
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("L"."ORDER_ID"="O"."ORDER_ID")

**Apply the USE_HASH hint to instruct the optimizer to use a hash join when joining two tables together.** Use this link for hints for join operations.

http://docs.oracle.com/cd/B28359_01/server.111/b28274/hintsref.htm#CHDBAFID

This should be enough to chew on for one post. In part two, I plan to dive into Sort Merge Joins, Cartesian Joins and Outer Joins. Learning how and when to use joins is an extremely important task for tuning and maintaining your SQL as a DBA.

Questions?

Comments?

Good morning, Oracle addicts!

I have been promoting the use of OEM12C as a hands-free, easy approach to configure and create Oracle Data Guard on your most important databases. Yesterday, in a “round table” interview with several other DBAs, I was called on my bluff. This individual asked me what steps I would take to configure both the standby and the primary database for standby database creation if you couldn’t use the OEM. Without access to my notes and the latest Oracle Documentation on the subject, I was unprepared to answer the question. As I have stated before, if it can be done better with OEM12C, I am going to do it with 12C. One side effect of this philosophy is that we can lose the valuable knowledge that is gained by doing it the old-fashioned way. So I am going to answer his question on my blog, albeit a few days after the fact.

The below link is Oracle’s Documentation on the subject. It is a great read but it is very lengthy.

http://docs.oracle.com/cd/E11882_01/server.112/e41134/create_ps.htm#SBYDB00200

Oracle® Data Guard Concepts and Administration
11g Release 2 (11.2)

I am also referencing an article on Oracle-Base that is much more user friendly. If I had to use a shell script, just in case OEM12C was not up to the task, this is where I would begin.

http://www.oracle-base.com/articles/11g/data-guard-setup-11gr2.php

The below reference is how to set up Data Guard with OEM11g Grid Control. Let’s face it, folks, the future is now. Using the OEM is the preferred method if you have any kind of deadline. With OEM12C, you don’t even need a graphical, step-by-step guide. Just navigate to the database you want to create a standby for, then Availability, then Add Standby Database. It will guide you through the process.

http://www.oracle-base.com/articles/11g/data-guard-setup-using-grid-control-11gr2.php

However, I know there is still a need to know how to do it manually, with a shell script, so I am summarizing Tim Hall of Oracle-Base as well. This is not to be confused with Tim Gorman of EVDBT, whom I refer to frequently.

Requirements: Before you begin, there are a few requirements that need to be addressed.

  1. You must have two physical or virtual servers with an operating system and the Oracle Database installed. I am currently using Linux(SLES11) and Oracle 11.2.0.3.
  2. The Primary server must have a running instance.
  3. The Standby server must, at a minimum, have a software only installation.

Set up the Primary Server:

  • Logging

Ensure that the primary database is in archivelog mode. If it is in noarchivelog mode, switch to archivelog mode.

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
  • Initialization Parameters
  1. Check the settings for DB_NAME and DB_UNIQUE_NAME
SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      DB_TST

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      DB_TST


2. The DB_NAME will be the same for the primary and standby databases, but they must have different DB_UNIQUE_NAME values. Register both unique names in LOG_ARCHIVE_CONFIG:

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB_TST,DB_TST_STBY)';

3. Set remote archive log destinations.

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db_tst_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB_TST_STBY';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

4. Set LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_MAX_PROCESSES parameters.

ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30;
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;
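Once the standby exists and is reachable, a quick way to spot transport problems is to check the destination status (until then, dest_2 will report an error):

```sql
-- Status VALID with empty error text means the destination is healthy.
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  dest_id IN (1, 2);
```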

5. It is also recommended that you prepare the primary to switch roles and become a standby.

ALTER SYSTEM SET FAL_SERVER=DB_TST_STBY;
--ALTER SYSTEM SET DB_FILE_NAME_CONVERT='DB_TST_STBY','DB_TST' SCOPE=SPFILE;
--ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='DB_TST_STBY','DB_TST' SCOPE=SPFILE;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
  • Service Setup

Make entries in the tnsnames.ora files on both servers.

DBTST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbtst01)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB_TST.WORLD)
    )
  )

DB_TST_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbstdby01)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB_TST.WORLD)
    )
  )
  • Backup Primary Database
$ rman target=/
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
  • Create Standby Controlfile and Pfile
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/dbtst_stby.ctl';
CREATE PFILE='/tmp/initDBTST_stby.ora' FROM SPFILE;

Amend the PFILE, changing the entries that are relevant to the standby database:

*.db_unique_name='DB_TST_STBY'
*.fal_server='DBTST'
*.log_archive_dest_2='SERVICE=dbtst ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB_TST'

Set up the Standby Server (Manual)

  • Copy Files
  1. Create the directories on the standby server and copy the files to the standby server.
$ mkdir -p /u01/app/oracle/oradata/DBTST
$ mkdir -p /u01/app/oracle/fast_recovery_area/DBTST
$ mkdir -p /u01/app/oracle/admin/DBTST/adump

$ # Standby controlfile to all locations.
$ scp oracle@dbtst01:/tmp/dbtst_stby.ctl /u01/app/oracle/oradata/DBTST/control01.ctl
$ cp /u01/app/oracle/oradata/DBTST/control01.ctl /u01/app/oracle/fast_recovery_area/DBTST/control02.ctl

$ # Archivelogs and backups
$ scp -r oracle@dbtst01:/u01/app/oracle/fast_recovery_area/DBTST/archivelog /u01/app/oracle/fast_recovery_area/DBTST
$ scp -r oracle@dbtst01:/u01/app/oracle/fast_recovery_area/DBTST/backupset /u01/app/oracle/fast_recovery_area/DBTST

$ # Parameter file.
$ scp oracle@dbtst01:/tmp/initDBTST_stby.ora /tmp/initDBTST_stby.ora

$ # Remote login password file.
$ scp oracle@dbtst01:$ORACLE_HOME/dbs/orapwDBTST $ORACLE_HOME/dbs
  • Start the listener on the standby server
$ lsnrctl start
  • Restore Backup by first creating an SPFILE from the amended PFILE
$ export ORACLE_SID=DBTST
$ sqlplus / as sysdba

SQL> CREATE SPFILE FROM PFILE='/tmp/initDBTST_stby.ora';

$ export ORACLE_SID=DBTST
$ rman target=/

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
  • Create Redo Logs
  1. Create online redo logs for the standby database.
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DBTST/online_redo01.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DBTST/online_redo02.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DBTST/online_redo03.log') SIZE 50M;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

2. Create standby redo logs on both the standby and the primary database to account for switchovers.

ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DBTST/standby_redo04.log') SIZE 50M;
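To confirm the standby redo logs were created, query v$standby_log on either database:

```sql
-- UNASSIGNED or ACTIVE are normal statuses; sizes should match the online redo logs.
SELECT group#, thread#, bytes/1024/1024 AS size_mb, status
FROM   v$standby_log;
```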

Start Apply Process

  • Start the apply process on the standby server
# Foreground redo apply. Session never returns until cancel. 
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

# Background redo apply. Control is returned to the session once the apply process is started.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
  • If you need to cancel the apply process, issue the following command.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
  • You can set a delay between the arrival of the archived redo log and it being applied on the standby server using…
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30 DISCONNECT FROM SESSION;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;
  • Start real-time apply using…

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
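To verify that managed recovery is actually running on the standby, check for the MRP process:

```sql
-- APPLYING_LOG (real-time apply) or WAIT_FOR_LOG are healthy states.
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
```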

Test Log Transport

  • Check the latest archived redo log on the primary server and force a log switch
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time
FROM   v$archived_log
ORDER BY sequence#;

ALTER SYSTEM SWITCH LOGFILE;

  • Check the new archived redo log has arrived at the standby server and been applied.
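On the standby, the APPLIED column of v$archived_log shows whether each log has arrived and been applied:

```sql
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SELECT sequence#, first_time, next_time, applied
FROM   v$archived_log
ORDER BY sequence#;
```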

Choose Your Protection Mode

  • Maximum Protection: This mode ensures that zero data loss occurs if the primary database fails. Because this mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used with this protection mode.
  • Maximum Performance: This is the default mode for a standby database. It provides the highest level of data protection possible without affecting the performance of the primary database.
  • Maximum Availability: This mode provides the highest level of data protection possible without compromising the availability of the primary database.

Congratulations!! If you have made it this far, you have created your first standby database. As always, save your scripts in your filesystem for future use.

At this point it is advisable to allow OEM to discover your new target or targets. This will allow you to perform a switchover or failover from OEM as well as manually. Only a SYSDBA should have the requisite privilege to perform this operation.

To perform a switchover manually…

-- Convert primary database to standby
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

-- Shutdown primary database
SHUTDOWN IMMEDIATE;

-- Mount old primary database as standby database
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

On the original standby database…

-- Convert standby database to primary
CONNECT / AS SYSDBA
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Shutdown standby database
SHUTDOWN IMMEDIATE;

-- Open old standby database as primary
STARTUP;

In the event of a failover, the primary database may not be available. To activate the standby database as a primary database…

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;

Immediately backup the new primary database.

It is also recommended that you enable flashback on the primary database. In the case of a failover, the failover renders the original primary useless for converting to a standby database. If flashback is enabled, the primary can be flashed back to the time before the failover and quickly converted to a standby database.
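As a sketch of that recovery path (assuming FLASHBACK DATABASE was enabled on the primary before the failover), the old primary can be reinstated roughly like this:

```sql
-- On the new primary: find the SCN at which the standby became primary.
SELECT standby_became_primary_scn FROM v$database;

-- On the old primary: flash back to that SCN and convert it to a standby.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN standby_became_primary_scn;  -- substitute the SCN from above
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
```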

Again, Tim Hall at Oracle-Base and the Oracle documentation are the primary resources for this article. That said, I am here to help wherever possible. I blog frequently and check this blog for feedback daily. Thank you to those who challenged me to research the non-OEM way to set up Data Guard. It has made me a better DBA and, hopefully, it will make you a better DBA as well.

Thanks for reading!!!

~Jason

Greetings, Oracle friends!

In case you were desperate for my next blog post, never fear… I have returned! I had a good week off although I was called in on Thursday. But now, I am officially back!

This week, I am discussing the Oracle ODBC Gateway:

My current company obtains data from a SQL Server 2008 database from a separate company. That data is replicated over to our company on another SQL Server database. The data is then transferred to our Oracle database. It is accessed through our data warehouse and Web Applications. Up until recently, we used a package that transfers the data a thousand rows at a time over two separate connections. Needless to say, it is quite a slow process. One of the developers asked me if I had any suggestions.

My first suggestion was to scrap SQL Server and go with all Oracle.

That suggestion was met with a little bit of laughter ending abruptly when my manager shot back with…

We could always scrap Oracle.

So I took to Twitter and asked a trusted source, @dbakevlar, if she had any suggestions. She leaned over and asked @timothyjgorman. The response I got was Oracle Gateway.

Cool! Oracle Gateway! What’s that?

The rest is history. Tim and I began to look into installing this ODBC Gateway which is included in Oracle 11.2.0.3 with no licensing charge! For those of you looking for a good way to interface with SQL Server, read on!!


The first step is the base installation of the DG4MSQL software on the test server…

• Source software directory: /u03/app/oracle/stage/11.2.0/gateways
• New ORACLE_HOME: /u03/app/oracle/product/11.2.0/tg_1

The only things that needed to be changed to configure the gateway are…

• Within the RDBMS ORACLE_HOME directory (i.e. “/u01/app/Oracle/product/11.2.0/dbhome_1”) used by the databases (such as ODS_TST, etc)

o In the directory “$ORACLE_HOME/network/admin”, the file “tnsnames.ora” needs the following entry…

database_name, database_url.com =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = url.com)(PORT = ####))
    (CONNECT_DATA = (SID = dg4msql))
    (HS = OK)
  )

• Within the gateway ORACLE_HOME (i.e. “/u03/app/oracle/product/11.2.0/tg_1”)…

o In the directory “$ORACLE_HOME/dg4msql/admin”, the file “initdg4msql.ora” needs the following entry…

HS_FDS_CONNECT_INFO=host_name:40501//DATABASENAME

Where:
• host_name is the host where the SQL Server instance resides
• 40501 is the port the SQL Server instance listens on
• DATABASENAME is the SQL Server database name

o In the directory “$ORACLE_HOME/network/admin”, the file “listener.ora” needs the following entry…

SID_LIST_LISTENER_DG4MSQL =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = dg4msql)
      (ORACLE_HOME = /u03/app/oracle/product/11.2.0/tg_1)
      (ENV = "LD_LIBRARY_PATH=/u03/app/oracle/product/11.2.0/tg_1/dg4msql/driver/lib:/u03/app/oracle/product/11.2.0/tg_1/lib")
      (PROGRAM = dg4msql)
    )
  )

Where:
• SID_NAME should be something business-appropriate, but we kept the generic name "dg4msql" for listing purposes
• ORACLE_HOME is the Oracle home directory of the gateway software
• ENV needs to include the directories "$ORACLE_HOME/dg4msql/driver/lib" and "$ORACLE_HOME/lib", using the full pathnames, as shown
• PROGRAM needs to be "dg4msql" to indicate the gateway executable in the "$ORACLE_HOME/bin" directory

o When changes are made to the “listener.ora”, the LISTENER_DG4MSQL listener needs to be stopped and restarted…

$ dg4msql        # an alias that sets the gateway environment
$ lsnrctl stop listener_dg4msql
$ lsnrctl start listener_dg4msql

…another alternative (if an alias named “dg4msql” isn’t yet set up) is…

$ export ORACLE_SID=dg4msql
$ export ORAENV_ASK=NO
$ . oraenv
$ unset ORAENV_ASK
$ lsnrctl stop listener_dg4msql
$ lsnrctl start listener_dg4msql

• For testing, get into one of the database environments and run “tnsping”…

dbtst01:/home/oracle(sid_name)> tnsping <DatabaseName>    # the name used in the "tnsnames.ora" file above

TNS Ping Utility for Linux: Version 11.2.0.3.0 – Production on 07-MAR-2014 16:13:41

Copyright (c) 1997, 2011, Oracle. All rights reserved.

Used parameter files:
/u01/app/Oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora   # notice which ORACLE_HOME is in use
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=url.com)(PORT=1527)) (CONNECT_DATA=(SID=dg4msql)) (HS=OK))
OK (0 msec)
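With tnsping succeeding, the last step is a database link from the Oracle database through the gateway. The link name, credentials, and table below are placeholders for illustration, not from our actual setup:

```sql
-- "sqlserver_user" is a SQL Server login; the USING clause names the
-- tnsnames.ora alias created above.
CREATE DATABASE LINK mssql_link
  CONNECT TO "sqlserver_user" IDENTIFIED BY "password"
  USING 'database_name';

-- SQL Server object names usually need double quotes to preserve case.
SELECT * FROM "dbo"."SomeTable"@mssql_link;
```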