Good afternoon, Oracle Pioneers!

A few months ago (maybe a year ago), I published an article on how to convert a single instance database into a RAC database: Converting a Single Instance To RAC DB. However, that article assumed the database had already been a RAC database at some point prior to its migration and subsequent RACifying. **Yes, RACifying is a word I made up or stole from someone.**

If the database has always been a single instance, there are likely a few parameters missing that are needed to make it a multi-instance database.

So here is my updated version of how to convert a single instance database into a RAC database using the command line and srvctl. These steps will work for 11g or 12c databases.

Once the single instance database is available, do this:
If this is a single instance you are migrating and "RACifying", you must create UNDOTBS2 and redo logfiles for thread 2, matching the number and size of the thread 1 logs.

  1. SQL> create undo tablespace undotbs2 datafile '+DATA' size 4G;
  2. SQL> alter database add logfile thread 2 group 5 size 4096M;   --repeat for each additional group so thread 2 matches thread 1
  3. SQL> alter database enable public thread 2;
  4. Verify these parameters are set in your pfile:

control_files    **Update the location info with your +DATA and +RECO locations**

db_name1.thread=1
db_name2.thread=2
db_name1.undo_tablespace='UNDOTBS1'
db_name2.undo_tablespace='UNDOTBS2'
db_name1.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=xdt*client01-vip)(PORT=9653))'
db_name2.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=xdt*client02-vip)(PORT=9653))'
db_name1.instance_number=1
db_name2.instance_number=2
*.remote_listener='xdt*-scan:9653'
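If the spfile does not yet exist in ASM, here is a minimal sketch of creating it from this pfile. The ASM path is only an example; substitute your own disk group and db_name:

SQL> create spfile='+DATA/<db_name>/spfile<db_name>.ora' from pfile='$ORACLE_HOME/dbs/init<db_name>.ora';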

5. Edit $ORACLE_HOME/dbs/init<db_name>.ora down to a one-line file pointing to the location of the new spfile (substitute wherever your spfile actually lives):

spfile='+DATA/<db_name>/spfile<db_name>.ora'
6. Copy the file to $ORACLE_HOME/dbs/init<db_name><inst#>.ora on all nodes.

7. SQL> shutdown immediate;

8. Create the srvctl setup. See my previous article on converting a single instance to a RAC DB, or the sketch below.
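For reference, a minimal srvctl sketch for a two node cluster (not necessarily identical to the previous article's commands); the database, instance, node, and spfile names are placeholders you must substitute for your environment:

srvctl add database -d <db_name> -c RAC \
-o $ORACLE_HOME \
-p +DATA/<db_name>/spfile<db_name>.ora \
-s OPEN -t IMMEDIATE
srvctl add instance -d <db_name> -i <db_name>1 -n <node1_hostname>
srvctl add instance -d <db_name> -i <db_name>2 -n <node2_hostname>
srvctl config database -d <db_name>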

9. Start instance 1 with srvctl, log into it, and set cluster_database to true:

srvctl start instance -d db_name -i db_name1

SQL> alter system set cluster_database=true scope=spfile sid='*';

10. Shutdown and restart database with srvctl:

srvctl stop database -d db_name -o immediate -f

srvctl start database -d db_name

Good luck and let me know if you have any issues! Keep on RACifying out there!

Good morning, RAC and Dataguard Enthusiasts!!

I have already posted how to convert a single instance into a RAC database. This post will focus on the subtle differences of doing this on a Dataguard Standby. When I have used Dataguard to migrate a RAC database to a new server, I created my Standby as a single instance. To make a viable Switchover, this Standby must now be converted to a RAC Standby. I must warn you up front: these are the steps I took on my 2 node RAC Exadata server. The steps unique to a standby are in bold font. Test this in your own sandbox before you proclaim this as gospel truth.

The setup pieces can all be done either at the beginning or at the end. They are ordered as I performed them, so feel free to test and experiment. Some of these steps are assumed, but I listed them anyway.

  • Copy/create your password file, wallet, and adump directory from the source database to both nodes. The steps work the same, regardless of the number of nodes.
  • Create oratab entry on both nodes. vi /etc/oratab
  • Add tnsnames entries on both nodes and static listener entry on Standby Node.
    • DB1 =
      (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xdt**client01-vip)(PORT = ****))
      (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = db1)
      )
      )
    • DB2 =
      (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xdt**client02-vip)(PORT = ****))
      (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = db2)
      )
      )
  • Shutdown Managed Apply on Dataguard Instance
    • Ensure all standby logs are applied and caught up
    • SQL> select * from v$recovery_progress; **There is a better way to check this, but I don't want to publish someone else's work as my own; see the sketch after this list.**
    • SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL; 
  • Shutdown the single instance database.
    • SQL> shutdown immediate;
    • SQL> exit
  • Set environment to node 1.
    • $ . oraenv
    • ORACLE_SID = [db] ? db1
  • Edit original pfile to make it a cluster database. Remove [*.interconnects] lines to prevent an attempt to connect to the source database.
    • *.cluster_database=true
  • Startup nomount with your original pfile.
    • SQL> startup nomount pfile='$ORACLE_HOME/dbs/initdb.ora'
  • Create spfile from pfile. This creates a one-line pointer file in the $ORACLE_HOME/dbs directory.
    • SQL> create spfile='+DATA_XDT**/DB/PARAMETERFILE/spfiledb.ora' from pfile='$ORACLE_HOME/dbs/initdb.ora';
  • Shutdown database.
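As a possible alternative to v$recovery_progress for the apply check above (just a sketch, and not necessarily the "better way" I alluded to), the apply and transport lag can be checked on the standby with:

SQL> select name, value, time_computed from v$dataguard_stats where name in ('apply lag','transport lag');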

Create srvctl setup
etlpc is used as a concrete example:
——————————-
# ETLPC
##
srvctl remove database -d ETLPCDG
srvctl add database -d ETLPCDG -c RAC \
-o /u01/app/oracle/product/11.2.0.4/dbhome_1 \
-m int.carlsonwagonlit.com \
-p +DATA_XDT*/ETLPC/PARAMETERFILE/spfileETLPCDG.ora \
-s OPEN -t IMMEDIATE -n ETLPCDG

srvctl add instance -d ETLPCdg -i ETLPCdg1 -n xdt*dbadm01
srvctl add instance -d ETLPCdg -i ETLPCdg2 -n xdt*dbadm02
# srvctl add instance -d etlpc -i etlpc3 -n xdt*dbadm03
# srvctl add instance -d etlpc -i etlpc4 -n xdt*dbadm04

srvctl enable instance -d ETLPCdg -i "ETLPCdg1,ETLPCdg2"
# srvctl disable instance -d etlpc -i "etlpc1,etlpc2,etlpc3,etlpc4"

srvctl config database -d ETLPCdg
srvctl status database -d ETLPCdg

  • Start instance 1 in mount mode. This will start it mounted, using the spfile on ASM.
    • $ srvctl start instance -d etlpcdg -i etlpcdg1 -o mount
  • Startup Managed Apply on Node 1
    • SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
  • Once the MRP0 process is running and standby logs are being applied, start node 2 in mount mode (a quick verification sketch follows this list):
    • $ srvctl start instance -d etlpcdg -i etlpcdg2 -o mount
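Here is the quick verification sketch mentioned above, assuming you run it on instance 1 while it is mounted; it confirms MRP0 is alive and shows the highest applied sequence per thread:

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';
SQL> select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;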

I hope this clears up the differences between converting a normal database to RAC versus converting a standby database to RAC. Please let me know if you have any questions or comments.

Thanks for reading!

Jason

Good afternoon, Oracle Miracle Workers! It’s database migration time, in preparation for upgrades to 12.1.0.2 or maybe 12.2 on premise. No, I’m not ready to go to the cloud. Sorry, Oracle, hold your horses. So here’s how we are doing it.

  1. Prepare the Primary Database:

  • Open a SQL*Plus session and enable FORCE LOGGING mode:
SQL> select force_logging from v$database;
NO
SQL> alter database force logging;
  • Create Standby Redo Logs: Note: When you clone the database for standby, these redo logs will automatically be created on the standby database.
SQL> ALTER DATABASE ADD STANDBY LOGFILE '+RECO' size 50M; --these should match the size of your online redo logs. I created one more than the number of my online redo logs (5).
  • Set the LOG_ARCHIVE_DEST_1 init Parameter to use the FRA as the first archive log location:
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
  • Set the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 init parameters on the primary db:
SQL> alter system set log_archive_config = 'dg_config=(PRIMARY_db,STANDBY_db)';

SQL> alter system set log_archive_dest_2='SERVICE=STANDBY_DB NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY_DB';
  • Turn on ARCHIVELOG mode if needed. Requires a shutdown.
 SQL> select log_mode from v$database;

ARCHIVELOG
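If that query returns NOARCHIVELOG instead, here is a minimal sketch of enabling it; this is the shutdown mentioned above, and on a RAC primary you would first stop all other instances (for example with srvctl) and run this from a single mounted instance:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;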

  2. Set up the tnsnames and listener entries.

This, to me, is always the trickiest step. In my test case, I am creating a single instance standby from a 2 Node RAC Primary Database. I will make it a RAC at the end. A static listener entry is required on the standby server to accommodate the clone.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME=standby_db)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
      (SID_NAME=standby_db)
    )
  )
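For completeness, here is a hedged sketch of the tnsnames.ora entries I would expect on both servers; the host names, port, and connect data are placeholders rather than my actual settings:

PRIMARY_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = primary_db)
    )
  )

STANDBY_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = standby_db)
    )
  )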
  3. Make a pfile from the primary database and copy it to the standby server:

SQL> create pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora' from spfile;

Prepare it as you would for a clone, but reverse the LOG_ARCHIVE_CONFIG and LOG_ARCHIVE_DEST_2 settings. Ensure db_unique_name is set to the standby database name, and ensure db_name stays the same as the primary's db_name. A sketch of the key standby-side parameters follows.
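Here is a minimal sketch of the standby-side pfile parameters that sentence refers to, using the same placeholder names as above; everything else in the pfile is your normal clone prep:

*.db_name='PRIMARY_db'
*.db_unique_name='STANDBY_db'
*.log_archive_config='dg_config=(STANDBY_db,PRIMARY_db)'
*.log_archive_dest_2='SERVICE=PRIMARY_DB NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY_db'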

  4. Prepare the Standby Database:

    • Copy the password file and wallet entries from the primary to the standby server
    • Create an entry in /etc/oratab using the db_unique_name (standby_db).
  5. Set your environment to the standby database and startup nomount using your new pfile.

SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/initstandby_db.ora';
SQL> exit
  6. RMAN Steps.

I utilized Commvault for the backup and duplicate steps. It takes the fun out of it, but I highly recommend it if it is at your disposal.

  • Take a full backup and use the backup to do a duplicate to standby:
run {
backup database plus archivelog;
backup current controlfile for standby;  
}
  • Optionally, run it again for archivelogs only.
run {
sql "Alter System Archive Log Current";
Backup ArchiveLog all ;
Backup current controlfile for standby;
}
  • Copy those files to a shared file system or a file system with the same structure.
  • Run the Duplicate to Standby:
connect target sys/pw@<primarydb>
connect auxiliary sys/pw@<standby>
run {
set until sequence <seqno> thread <thrdno>;
duplicate target database for standby
dorecover;
}
  7. Verify your standby is set up and applying logs.

    • Log in to the standby and query v$database;
  SQL> select db_unique_name, primary_db_unique_name, database_role, switchover_status, open_mode, log_mode, force_logging from v$database;
  8. Perform a log switch on the primary database:

SQL> alter system switch logfile;

  9. Query standby to see if archived logs are being applied.

SQL> select sequence#, first_time, next_time, applied from v$archived_log order by 1;

My first two attempts failed due to incorrect tnsnames settings. Once I got that settled, it worked famously. In the near future, I will post about setting up a Snapshot Standby and converting the DB to RAC.

Good afternoon, Oracle Database Security Professionals!

I was tasked to investigate a Schema that had direct access to some tables it had no business having. I was told to remove this direct access. Should be a quick easy task, right?

  1. I looked at the user’s permissions with SQL Plus. The user had no direct access to any of the tables I was told it had access to.
  2. I created a user with the create like function in SQL Developer DBA View.
  3. I logged into my new user and ran the following query: SQL> select * from all_tables where owner in ('Schema_1','Schema_2'); Wow! My new user had access to all the tables my customer told me it had access to. I thought about public synonyms, but those would still require access to the underlying tables. I even tried to revoke access to one of the tables and got something along the lines of "Can't revoke a privilege that is not granted by Grantee". I am a SYSDBA. How can I not revoke?
  4. Next, I looked at Grants on the individual table…

[Image: grants_on_tables]

What's this? Who is PUBLIC? Well, a grant to PUBLIC gives access to this table to any schema or user that can connect to the database. My assumption turned out to be correct. After confirming with the customer, what I did next was revoke these table privileges as SYSDBA.

SQL> revoke all on "SCHEMA"."TABLE_NAME" from "PUBLIC";

The statement below also works, but revoking each privilege individually would take eleven statements instead of one.

SQL> revoke SELECT on "SCHEMA"."TABLE_NAME" from "PUBLIC";
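If you want to see everything a schema has exposed this way before revoking, a query like this against DBA_TAB_PRIVS can help (a sketch; adjust the owner list to your schemas):

SQL> select owner, table_name, privilege from dba_tab_privs where grantee = 'PUBLIC' and owner in ('SCHEMA_1','SCHEMA_2') order by owner, table_name, privilege;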

Finally, I went back to my created user to confirm:

SQL> select * from all_tables where owner in ('Schema_1','Schema_2');

no rows selected

Now that is what I expected. I never knew there was a way to grant public use of objects in Oracle. Now I know. This may be news to some of you as I seem to be constantly learning what I don’t know about Oracle. The bottom line is, be careful of what you grant to PUBLIC.

Thanks for reading!

Jason

Great post by Adityanath's Oracle Blog! This helped me tremendously as well! I was getting an Archiver Hung error because my archivelog backups had been failing to delete archivelogs for over 60 days. Thanks!

Adityanath's Oracle Blog

We got an issue in one of the production environments: the RMAN archivelog backup was failing with the following errors:


kgefec: fatal error 0 
kgepop: no error frame to pop to for error 603

We were using following command in rcv file to backup archivelogs:


BACKUP archivelog all delete input;

After searching on Metalink, I found one note with the same scenario: RMAN Archivelog backup with DELETE INPUT fails kgepop: no error frame to pop to for error 603 (Doc ID 1921228.1)

According to this note, RMAN errors out when over 5000 archivelogs are being backed up, resulting in an excessively large PL/SQL block being executed.

Yes, in my case RMAN was processing more than 2TB of archives in one go.


SELECT NAME,ROUND(SPACE_LIMIT / 1048576) SPACE_LIMIT_MB,ROUND(SPACE_USED / 1048576) SPACE_USED_MB,ROUND(((SPACE_USED / 1048576) * 100) / (SPACE_LIMIT / 1048576), 2) PRC_USED FROM V$RECOVERY_FILE_DEST;
NAME SPACE_LIMIT_MB SPACE_USED_MB PRC_USED
------------------------------ -------------- ------------- ----------
+RECO_FIN 3145728 2085370 66.29

As…

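Back to my own situation (the 60+ days of archivelogs that never got deleted): one way to work through a backlog like that without handing RMAN an oversized batch is to back up and delete the archivelogs in time-bounded chunks. This is only a sketch of the general idea, not the exact fix prescribed in the MOS note; adjust the date ranges to your retention:

RMAN> BACKUP ARCHIVELOG FROM TIME 'SYSDATE-60' UNTIL TIME 'SYSDATE-45' DELETE INPUT;
RMAN> BACKUP ARCHIVELOG FROM TIME 'SYSDATE-45' UNTIL TIME 'SYSDATE-30' DELETE INPUT;
RMAN> BACKUP ARCHIVELOG FROM TIME 'SYSDATE-30' UNTIL TIME 'SYSDATE-15' DELETE INPUT;
RMAN> BACKUP ARCHIVELOG FROM TIME 'SYSDATE-15' DELETE INPUT;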

Good afternoon, Oracle Database Enthusiasts! Nothing gets me in the holiday spirit like quarterly patching… Not!! When it comes to Oracle Patching, it will fail for just about any reason it feels like. Here’s another fun one. If you haven’t experienced it yet, read on…

Still here? So…

Two Fridays ago, when patching Grid on Node 1, the patch apply hung for about an hour and then quit. The log sat there quiet for the entire hour. Once the patch finally failed, it spit out all of the errors it was experiencing.

Root cause was this…

Unable to lock Central Inventory.  OPatch will attempt to re-lock.Do you want to proceed? [y|n]

Y (auto-answered by -silent)

User Responded with: Y

OPatch will sleep for few seconds, before re-trying to get the lock…

OUI-10022:The target area /u01/app/oraInventory cannot be used because it is in an invalid state.

OPatchSession cannot load inventory for the given Oracle Home /u01/app/11.2.0.4/grid. Possible causes are:

No read or write permission to ORACLE_HOME/.patch_storage

Central Inventory is locked by another OUI instance

No read permission to Central Inventory

The lock file exists in ORACLE_HOME/.patch_storage

The Oracle Home does not exist in Central Inventory

Had the log actually spit out the warning during the patch attempt, I might have been able to fix it on the fly. No such luck. Instead, it spat out 100 copies of the above error AFTER the patch failed. LOVELY. I ran out of time and had to roll back.

When there is a lock in /u01/app/oraInventory, it creates a locks directory and a file underneath it, like /u01/app/oraInventory/locks/inventory.lock. The solution was to remove the file and the locks directory. I tested this by creating the directory and file manually in our sandbox. I Love Oracle Sandboxes!! If you don't have one, you really should get one! As I attempted to patch Grid on node 1 of our sandbox, it created the exact same scenario. It shut down CRS and just hung for an hour, trying over and over again to create a lock on oraInventory. Once it errored out, I removed the directory and file. Patching of Grid then took less than ten minutes. I am now adding this step to my prerequisites:

Check for the /u01/app/oraInventory/locks directory and any files underneath it. If they exist, remove them.
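A minimal pre-patch check, assuming the default central inventory location from the errors above; make sure no OUI or OPatch session actually owns the lock before removing anything:

$ ls -l /u01/app/oraInventory/locks
$ ps -ef | egrep -i 'opatch|runinstaller'
$ rm -rf /u01/app/oraInventory/locks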

I hope this helps those of you who have not yet applied October's CPU. Or, if anything, it provides a little comic relief for all you sick techies like me who laugh at others' misfortunes.

Thanks for reading!

Jason

Update!! I attempted the patch again tonight, armed with my former findings. I patched the Grid Home and Oracle Home and ran seven catbundle scripts in less than 2 hours!! Much better this time around!!

Good morning, fellow DBAs and Backup Admins! Yesterday, Jasonbrownsite reached a significant milestone: we surpassed 10,000 total views! That doesn't include those who follow us through Twitter, Google Plus, and LinkedIn. Regardless, thank you for continuing to read our weekly/monthly blogs and adventures. Without readership, it would make more sense for me just to keep a journal…

So, today’s scenario involves migrating a single instance Oracle database off of an old Solaris server to a newer Exadata Server that happens to be a four node RAC. We attempted to accomplish this with RMAN Duplicate and it did not work to our satisfaction. So here is what we did with success. An alternative method might be to utilize RMAN Transportable Tablespaces but that involves an outage to the source database. That method was not seriously considered. As always, there may be a better way so don’t take this as gospel truth. Just something to consider…

In summary, my partner and I utilized Data Pump to export all of the non system type users from the database on the Solaris Server. From there, we copied the dump files to the Exadata node 1. Step three was to create a clean four node RAC database with DBCA, the trusty yet out of date database creation tool. Next we created the new users and ran some DDL to create procedures and functions which I will summarize in this post. The final step was to import the dumpfiles copied from the Solaris side.

  • Create a parfile for your export. Determine which schemas own objects and need to be exported. Due to the size of the db, we set parallel=25. Your parfile should be adjusted according to your needs:
    • userid=system
    • directory=dbname_dir
    • dumpfile=dbname_%u.dmp
    • logfile=dbname.log
    • filesize=15g
    • parallel=25
    • exclude=SCHEMA:"IN ('OUTLN','PUBLIC','SYS','SYSTEM','WMSYS','ETC')"
    • or schemas=()
  • Run the export.  expdp parfile=dbname.par
  • Copy the dumpfiles to the Exadata server. We used an EXFER utility that collected all the dumpfiles and transferred them simultaneously.
  • Create a four node RAC database on the Exadata server using DBCA.
    • Create a custom database.
    • Make sure to match the blocksize of your source database.
    • I make it a habit to not include OEM setup or extra components during DB creation. It is not needed in this migration.
    • Multiplex your redo logs and controlfiles.
    • Specify your fast recovery area and size, i.e. +RECO_Exadata_Name.
    • Specify your memory settings, archivelog destinations, processes, redo log groups, etc.
    • Create a template for future use. It is much faster to use this if it is needed later.
  • Create tnsnames entries, password file, wallet(s), and adump directories for all four nodes. I assume you already know how to do this.
  • Validate all nodes!
    • srvctl remove database -d dbname
      srvctl add database -d dbname -c RAC \
      -o /u01/app/oracle/product/11.2.0.4/dbhome_1 \
      -p +DATA_XDT*/DBNAME/PARAMETERFILE/spfiledbname.ora \
      -r PRIMARY -s OPEN -t IMMEDIATE -n dbname

      srvctl add instance -d dbname -i dbname1 -n xdt*dbadm01
      srvctl add instance -d dbname -i dbname2 -n xdt*dbadm02
      srvctl add instance -d dbname -i dbname3 -n xdt*dbadm03
      srvctl add instance -d dbname -i dbname4 -n xdt*dbadm04

  • Before commencing with the Import, try to eliminate as many errors up front as possible:
    • Port over the verify_functions for password management:
      • set lines 150 pages 50000 echo on feedback on long 200000000 serveroutput on size unlimited timing off trim on trims on verify off
        SQL> select dbms_metadata.get_ddl(p.object_type,p.object_name,p.owner) ddl_text from dba_procedures p where object_name like 'VERIFY%';
      • Run the resultant SQL on the Exadata database.
    • Do the same thing with Profiles, Tablespaces, Users, Roles, and directories and run the resultant SQL:
      • SQL> select dbms_metadata.get_ddl('PROFILE',c.profile) ddl_text from (select distinct profile from dba_profiles) c ;
      • set echo on feedback on lines 150 pages 50000 serveroutput on size unlimited trim on trims on verify off long 2000000
        col ddl_text for a150
        select dbms_metadata.get_ddl('TABLESPACE',t.tablespace_name) ddl_text from (select tablespace_name from dba_tablespaces t order by 1) t;
      • select dbms_metadata.get_ddl('USER',u.username) ddl_text from dba_users u ;
      • select role from dba_roles;
      • select * from dba_directories ;
    • Run the import with the same parameters as the export Data Pump job (see the sketch after this list).
    • Query dba_objects for invalid objects and recompile them.
    • At this point your shiny, new 4 node RAC database should be ready to turn over to the customer!
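Here is a minimal sketch of that import and the invalid-object cleanup; the parfile name (dbname_imp.par) is hypothetical and simply mirrors the export parfile with an import logfile, and utlrp.sql is the standard Oracle recompile script:

$ impdp parfile=dbname_imp.par

SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID';
SQL> @?/rdbms/admin/utlrp.sql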

As always, please test this process in your own sandbox before deploying to production.

Thanks for reading and making jasonbrownsite.wordpress.com a success!

~Jason