Good afternoon, Oracle Database Security Professionals!

I was tasked with investigating a schema that had direct access to some tables it had no business accessing. I was told to remove this direct access. Should be a quick, easy task, right?

  1. I looked at the user's permissions in SQL*Plus. The user had no direct grants on any of the tables I was told it could access.
  2. I created a user with the Create Like function in SQL Developer's DBA view.
  3. I logged in as my new user and ran the following query:
     SQL> select * from all_tables where owner in ('Schema_1','Schema_2');
     Wow! My new user could see all the tables my customer told me it had access to. I thought about public synonyms, but a synonym would still require a privilege on the underlying table. I even tried to revoke access on one of the tables and got something along the lines of "Cannot revoke a privilege that was not granted by the grantee." I am SYSDBA. How can I not revoke?
  4. Next, I looked at Grants on the individual table…

[Screenshot: grants on the table, with PUBLIC listed as a grantee]

What's this? Who is PUBLIC? PUBLIC is the group that every database user automatically belongs to, so a grant to PUBLIC gives access to this table to any schema or user that can connect to the database. That matched exactly what I was seeing. After confirming with the customer, my next step was to revoke these table privileges as SYSDBA.

SQL> revoke all on “SCHEMA”.”TABLE_NAME” from “PUBLIC”;

The statement below also works, but revoking the privileges one at a time would take eleven statements instead of one.

SQL> revoke SELECT on “SCHEMA”.”TABLE_NAME” from “PUBLIC”;
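
If there are many of these grants to clean up, one option is to generate the individual revoke statements from the data dictionary. Here is a minimal SQL*Plus sketch against DBA_TAB_PRIVS; the schema name is a placeholder, so adjust it and review the output before running anything:

set lines 200 pages 0 feedback off verify off
-- Generate one REVOKE per privilege currently granted to PUBLIC on the schema's objects
select 'revoke ' || privilege || ' on "' || owner || '"."' || table_name || '" from PUBLIC;'
  from dba_tab_privs
 where grantee = 'PUBLIC'
   and owner = 'SCHEMA'
 order by table_name, privilege;

Spool that to a file, sanity-check it, and run it as a DBA.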

Finally, I went back to my created user to confirm:

SQL> select * from all_tables where owner in ('ODS','DOC');

no rows selected

Now that is what I expected. I never knew there was a way to grant public use of objects in Oracle; now I know. This may be news to some of you as well, since I seem to be constantly learning what I don't know about Oracle. The bottom line: be careful what you grant to PUBLIC.
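
If you want to gauge how widespread the problem is before revoking anything, a query along these lines lists everything granted to PUBLIC outside the Oracle-maintained schemas. The exclusion list below is just an assumption; extend it to cover the built-in schemas in your environment:

select owner, table_name, privilege
  from dba_tab_privs
 where grantee = 'PUBLIC'
   and owner not in ('SYS','SYSTEM','XDB','MDSYS','ORDSYS','CTXSYS','WMSYS','EXFSYS')
 order by owner, table_name, privilege;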

Thanks for reading!

Jason


Great post by Adityanath's Oracle Blog! This helped me tremendously as well. I was getting an archiver hung error because my archivelog backups had been failing to delete archivelogs for over 60 days. Thanks!
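
For anyone facing a similar backlog, the sketch below shows general RMAN housekeeping for archivelogs that have piled up. It assumes backups go to tape over an SBT channel and that your retention policy allows deleting logs already backed up once; it is not the specific fix from the note Adityanath references, just syntax worth keeping handy:

RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES DELETE INPUT;
RMAN> DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE sbt;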

Adityanath's Oracle Blog

We got issue in one of the production environment, RMAN archivelog backup was failing with following errors:


kgefec: fatal error 0 
kgepop: no error frame to pop to for error 603

We were using following command in rcv file to backup archivelogs:


BACKUP archivelog all delete input;

After searching on metalink, I found one note with same scenario : RMAN Archivelog backup with DELETE INPUT fails kgepop: no error frame to pop to for error 603 (Doc ID 1921228.1)

According to this note, RMAN errors out when over 5000 archivelogs are being backed up, resulting in an excessively large PL/SQL block being executed.

Yes in my case, RMAN was processing more than 2TB archives in one go.


SELECT name,
       ROUND(space_limit / 1048576) space_limit_mb,
       ROUND(space_used / 1048576) space_used_mb,
       ROUND(((space_used / 1048576) * 100) / (space_limit / 1048576), 2) prc_used
  FROM v$recovery_file_dest;
NAME SPACE_LIMIT_MB SPACE_USED_MB PRC_USED
------------------------------ -------------- ------------- ----------
+RECO_FIN 3145728 2085370 66.29

As…


Good afternoon, Oracle Database Enthusiasts! Nothing gets me in the holiday spirit like quarterly patching… Not!! When it comes to Oracle Patching, it will fail for just about any reason it feels like. Here’s another fun one. If you haven’t experienced it yet, read on…

Still here? So…

Two Fridays ago, when patching Grid on node 1, the patch apply hung for about an hour and then quit. The log sat there, silent, for the entire hour. Once the patch finally failed, it spat out all of the errors it had been hitting.

Root cause was this…

Unable to lock Central Inventory.  OPatch will attempt to re-lock.
Do you want to proceed? [y|n]

Y (auto-answered by -silent)

User Responded with: Y

OPatch will sleep for few seconds, before re-trying to get the lock…

OUI-10022:The target area /u01/app/oraInventory cannot be used because it is in an invalid state.

OPatchSession cannot load inventory for the given Oracle Home /u01/app/11.2.0.4/grid. Possible causes are:

No read or write permission to ORACLE_HOME/.patch_storage

Central Inventory is locked by another OUI instance

No read permission to Central Inventory

The lock file exists in ORACLE_HOME/.patch_storage

The Oracle Home does not exist in Central Inventory

 

Had the log actually spit out the warning during the patch attempt, I might have been able to fix it on the fly. No such luck. Instead, it spat out 100 copies of the above error AFTER the patch failed. LOVELY. I ran out of time and had to roll back.

When there is a lock on /u01/app/oraInventory, a locks directory is created with a file underneath it, such as /u01/app/oraInventory/locks/inventory.lock. The solution was to remove the file and the locks directory. I tested this by creating the directory and file manually in our sandbox. I love Oracle sandboxes!! If you don't have one, you really should get one! When I attempted to patch Grid on node 1 of our sandbox, it reproduced the exact same scenario: it shut down CRS and just hung for an hour, trying over and over again to take a lock on oraInventory. Once it errored out, I removed the directory and file, and patching Grid then took less than ten minutes. I am now adding this step to my prerequisites:

Check for the /u01/app/oraInventory/locks directory and the files underneath it. If they exist, and no other OUI or OPatch session is running, remove them.
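
In shell terms, the pre-patch check looks roughly like this. It is only a sketch: the inventory path is the one from my environment, and you should clear the lock only when you are certain nothing else is patching:

# Look for a leftover Central Inventory lock before patching
ls -l /u01/app/oraInventory/locks 2>/dev/null

# If it exists and no other OUI/OPatch session is active, remove it
rm -f /u01/app/oraInventory/locks/inventory.lock
rmdir /u01/app/oraInventory/locks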

I hope this helps those of you who have not yet applied October's CPU. Or, if nothing else, it provides a little comic relief for all you sick techies like me who laugh at others' misfortunes.

Thanks for reading!

Jason

Update!! I attempted the patch again tonight, armed with my earlier findings. I patched the Grid home and Oracle home and ran seven catbundle scripts in less than two hours!! Much better this time around!!

 

 

Good morning, fellow DBAs and Backup Admins! Yesterday, Jasonbrownsite reached a significant milestone: we surpassed 10,000 total views! That doesn't include those who follow us through Twitter, Google Plus and LinkedIn. Regardless, thank you for continuing to read our weekly/monthly blogs and adventures. Without readership, it would make more sense for me just to keep a journal…

So, today's scenario involves migrating a single-instance Oracle database off an old Solaris server to a newer Exadata system that happens to be a four-node RAC. We attempted to accomplish this with RMAN Duplicate, and it did not work to our satisfaction, so here is what we did with success. An alternative method would be RMAN transportable tablespaces, but that involves an outage on the source database, so it was not seriously considered. As always, there may be a better way, so don't take this as gospel truth. Just something to consider…

In summary, my partner and I used Data Pump to export all of the non-system schemas from the database on the Solaris server. From there, we copied the dump files to Exadata node 1. Step three was to create a clean four-node RAC database with DBCA, the trusty yet dated database creation tool. Next, we created the new users and ran some DDL to create procedures and functions, which I summarize in this post. The final step was to import the dumpfiles copied from the Solaris side.

  • Create a parfile for your export. Determine which schemas own objects and need to be exported. Due to the size of the database, we used parallel=25. Your parfile should be adjusted to your needs:
    • userid=system
    • directory=dbname_dir
    • dumpfile=dbname_%u.dmp
    • logfile=dbname.log
    • filesize=15g
    • parallel=25
    • exclude=SCHEMA:"IN ('OUTLN','PUBLIC','SYS','SYSTEM','WMSYS','ETC')"
    • or schemas=()
  • Run the export.  expdp parfile=dbname.par
  • Copy the dumpfiles to the Exadata server. We used an EXFER utility that collected all the dumpfiles and transferred them simultaneously.
  • Create a four node RAC database on the Exadata server using DBCA.
    • Create a custom database.
    • Make sure to match the blocksize of your source database.
    • I make it a habit to not include OEM setup or extra components during DB creation. It is not needed in this migration.
    • Multiplex your redo logs and controlfiles.
    • Specify your fast recovery area and size, i.e. +RECO_Exadata_Name.
    • Specify your memory settings, archivelog destinations, processes, redo log groups, etc.
    • Create a template for future use. It is much faster to use this if it is needed later.
  • Create tnsnames entries, password file, wallet(s), and adump directories for all four nodes. I assume you already know how to do this.
  • Validate all nodes!
    • srvctl remove database -d dbname
      srvctl add database -d dbname -c RAC \
      -o /u01/app/oracle/product/11.2.0.4/dbhome_1 \
      -p +DATA_XDT*/DBNAME/PARAMETERFILE/spfiledbname.ora \
      -r PRIMARY -s OPEN -t IMMEDIATE -n dbname

      srvctl add instance -d dbname -i dbname1 -n xdt*dbadm01
      srvctl add instance -d dbname -i dbname2 -n xdt*dbadm02
      srvctl add instance -d dbname -i dbname3 -n xdt*dbadm03
      srvctl add instance -d dbname -i dbname4 -n xdt*dbadm04

  • Before commencing with the import, try to eliminate as many up-front errors as possible:
    • Port over the verify_functions for password management:
      • set lines 150 pages 50000 echo on feedback on long 200000000 serveroutput on size unlimited timing off trim on trims on verify off
        SQL> select dbms_metadata.get_ddl(p.object_type,p.object_name,p.owner) ddl_text from dba_procedures p where object_name like 'VERIFY%';
      • Run the resultant SQL on the Exadata database.
    • Do the same thing with Profiles, Tablespaces, Users, Roles, and directories and run the resultant SQL:
      • SQL> select dbms_metadata.get_ddl('PROFILE',c.profile) ddl_text from (select distinct profile from dba_profiles) c ;
      • set echo on feedback on lines 150 pages 50000 serveroutput on size unlimited trim on trims on verify off long 2000000
        col ddl_text for a150
        select dbms_metadata.get_ddl('TABLESPACE',t.tablespace_name) ddl_text from (select tablespace_name from dba_tablespaces t order by 1) t;
      • select dbms_metadata.get_ddl('USER',u.username) ddl_text from dba_users u ;
      • select role from dba_roles;
      • select * from dba_directories ;
    • Run the import with the same parameters as the export Data Pump job (a sketch of an import parfile follows this list).
    • Query dba_objects for invalid objects and recompile them.
    • At this point, your shiny new four-node RAC database should be ready to turn over to the customer!
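
For reference, here is a minimal sketch of what the import side might look like. The file names mirror the export parfile above, and the commented schema list is an assumption that should match whatever you actually exported:

# dbname_imp.par -- sketch only
userid=system
directory=dbname_dir
dumpfile=dbname_%u.dmp
logfile=dbname_imp.log
parallel=25
# schemas=SCHEMA_1,SCHEMA_2   (or omit to import every schema in the dump)

Run it with impdp parfile=dbname_imp.par, then recompile any invalid objects, for example with @?/rdbms/admin/utlrp.sql.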

As always, please test this process in your own sandbox before deploying to production.

Thanks for reading and making jasonbrownsite.wordpress.com a success!

~Jason

Hello again, Oracle Flashback Captains!!

Here’s a Friday lunchtime special. Since my lunchtime is being interrupted by creating a restore point for a customer, I thought I would share my fun with you! There’s nothing earth shaking or innovative here. Just some good, old fashioned syntax to store away for later use.

  • First, make sure the database is in ARCHIVELOG mode and check whether flashback is on:

select log_mode,flashback_on from v$database;

LOG_MODE     FLASHBACK_ON
------------ ------------------
ARCHIVELOG   NO

1 row selected.

  • Next, turn flashback on and re-run the query to confirm:

Alter database flashback on;

LOG_MODE     FLASHBACK_ON
------------ ------------------
ARCHIVELOG   YES

 

  • Next see if any restore points are still out there.  Drop them if they exist:

 

SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V$RESTORE_POINT;

no rows selected

  • Now create a guaranteed restore point, named with the database name, date, and time:

CREATE RESTORE POINT restorePointName_todaysDate_timeOfRestorePoint GUARANTEE FLASHBACK DATABASE;

  • Make sure the SQL statement worked:

SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V$RESTORE_POINT;

  • Should you need to use this restore point…
    • flashback database to restore point restorePointName_todaysDate_timeOfRestorePoint
    • On a RAC Database…

>>> srvctl stop database -d dbname
>>> . oraenv          (set the environment for one instance)
>>> sqlplus / as sysdba
startup mount;
flashback database to restore point restorePointName_todaysDate_timeOfRestorePoint;
alter database open resetlogs;
drop restore point restorePointName_todaysDate_timeOfRestorePoint;
shutdown immediate;
exit
>>> srvctl start database -d dbname
>>> sqlplus / as sysdba
alter database flashback off;

  • After the deployment succeeds (and the restore point is no longer needed), drop it and turn flashback off:

DROP RESTORE POINT restorePointName_todaysDate_timeOfRestorePoint;

Alter database flashback off;

  • Verify flashback is off

select log_mode,flashback_on from v$database;
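
One last note worth storing away: while a guaranteed restore point exists, flashback logs accumulate in the fast recovery area, so it pays to watch FRA space until you drop the restore point. A simple sketch using standard views:

select name,
       round(space_limit / 1048576) space_limit_mb,
       round(space_used / 1048576)  space_used_mb
  from v$recovery_file_dest;

select file_type, percent_space_used, percent_space_reclaimable
  from v$flash_recovery_area_usage;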

Thanks for reading!

Jason

Happy Friday, Students of the Oracle!!
I had the opportunity to take over a failed clone job in CommVault. The immediate solution was to accomplish the clone using RMAN Duplicate. I believe the steps were to run a full backup with RMAN and then duplicate it to the target database. This succeeded and definitely satisfied the customer. My job was to figure out why the original clone failed. This is the error stack I received; the last three lines are the comments from my coworker as he left the building to go out of town on holiday.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 06/01/2016 13:22:54
RMAN-05501: aborting duplication of target database
RMAN-05556: not all datafiles have backups that can be recovered to SCN 7447956627439
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found – aborting restore
RMAN-06023: no backup or copy of datafile 18 found to restore
RMAN-06023: no backup or copy of datafile 16 found to restore
RMAN-06023: no backup or copy of datafile 14 found to restore
RMAN-06023: no backup or copy of datafile 12 found to restore
RMAN-06023: no backup or copy of datafile 9 found to restore
RMAN-06023: no backup or copy of datafile 7 found to restore
RMAN-06023: no backup or copy of datafile 6 found to restore
RMAN-06023: no backup or copy of datafile 1 found to restore
RMAN>
Recovery Manager complete.
]
That may point out a CV issue.
Why would the backups be reporting success yet be unrestorable?
Past reasons dealt with interface card setup in the CV infrastructure.
This sounds really bad, but it's actually quite simple. The worst-case scenario is that we have a corrupted database. The best-case scenario is that the backup we chose to duplicate from is incomplete.
  1. My first step was to look at the complete RMAN log for the backup. I'll spare you the complete script, but it was attempting a Full (level 1) backup, in other words an incremental full. When it came across missing datafiles, it defaulted to a Full (level 0) backup, and that succeeded. This answered my coworker's first two questions: it reported success because the backup itself succeeded, even though it failed to run as an incremental backup. This log also gave me 90% certainty that it is not a CommVault (CV) issue. **Hint** If we are going to use a past backup to clone with, make sure there are no warnings in the RMAN log and ensure it is a full backup. It was actually a stroke of good fortune that the clone failed.
  2. From CommVault, I ran a full (level 0) backup on both the source and target databases. In our setup, CommVault will not recognize backups run outside of the CommVault tool. Upon further investigation, there had not been a full (level 0) backup run on either database for some time, even though our backup strategy and schedule are designed to run a full (level 0) backup once a week. This needs to be looked at. Both backups succeeded, and the logs showed no RMAN-06023 errors. This full backup can now be used to clone with in the future.
  3. In CommVault, I navigated to my schedule policy for these databases. The schedule showed a full backup scheduled once a week. However, in the Backup Options tab, the Incremental box was selected. I quickly changed this to Full and saved my changes.
  4. I verified the logs for last night’s incremental backups on both databases. No errors. Problem solved!!
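
One habit I am adding after this exercise: before trusting any backup for a clone, ask RMAN whether it can actually be restored. A minimal sketch, run from an RMAN session connected to the source database (nothing here is CommVault-specific):

RMAN> LIST BACKUP SUMMARY;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
RMAN> REPORT NEED BACKUP;

RESTORE ... PREVIEW reports which backups would be used and flags datafiles it cannot restore, which is exactly what the duplicate tripped over.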

The moral of the story is don’t trust “Recovery Manager complete” as proof the backup succeeded without warnings. Also, I advise against using a Full (Level 1) backup to clone with.

Thanks for reading!

Jason

MongoDB: To Blog or Not to Blog

Posted: June 1, 2016 in MongoDB

Good morning, Oracle Elitists!

I have a quick question for all of you who read my posts. I have recently been given the opportunity at work to learn and support our MongoDB project. For those of you who are unfamiliar with Mongo, here is Wikipedia’s definition:

MongoDB (from humongous) is a free and open-source cross-platform document-oriented database. Classified as a NoSQL database, MongoDB avoids the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas (MongoDB calls the format BSON), making the integration of data in certain types of applications easier and faster. MongoDB is developed by MongoDB Inc. and is free and open-source, published under a combination of the GNU Affero General Public License and the Apache License. As of July 2015, MongoDB is the fourth most popular type of database management system, and the most popular for document stores.

Since I exploit my work experiences to find subjects to blog about, I need to hear from you. Should I blog about this subject? Are you interested in Mongo, or are you exclusively Oracle DBAs who have no desire to hear about new trends? Believe me, I understand that point of view. Most of my DBA coworkers are in that camp.

Please let me know. A lack of response means I will have to taint this otherwise exclusively Oracle blog page with a test post with a MongoDB theme.

Thanks, in advance, for your thoughtful responses.

Jason (should I change my name to Json?) Brown