
Good morning, fellow DBAs and Backup Admins! Yesterday, Jasonbrownsite reached a significant milestone: we surpassed 10,000 total views! That doesn’t include those who follow us through Twitter, Google Plus, and LinkedIn. Regardless, thank you for continuing to read our weekly/monthly blogs and adventures. Without readership, it would make more sense for me just to keep a journal…

So, today’s scenario involves migrating a single-instance Oracle database off an old Solaris server to a newer Exadata machine that happens to be a four-node RAC. We first attempted this with RMAN DUPLICATE, and it did not work to our satisfaction, so here is what did. An alternative method is RMAN transportable tablespaces, but that requires an outage on the source database, so it was not seriously considered. As always, there may be a better way, so don’t take this as gospel truth. Just something to consider…

In summary, my partner and I used Data Pump to export all of the non-system users from the database on the Solaris server. From there, we copied the dump files to Exadata node 1. Step three was to create a clean four-node RAC database with DBCA, the trusty yet dated database creation tool. Next, we created the new users and ran some DDL to create procedures and functions, which I will summarize in this post. The final step was to import the dump files copied from the Solaris side.

  • Create a parfile for your export. Determine which schemas own objects and need to be exported (see the schema-listing sketch after this list). Due to the size of the db, we set parallel=25. Adjust the parfile according to your needs:
    • userid=system
    • directory=dbname_dir
    • dumpfile=dbname_%u.dmp
    • logfile=dbname.log
    • filesize=15g
    • parallel=25
    • exclude=SCHEMA:"IN ('OUTLN','PUBLIC','SYS','SYSTEM','WMSYS','ETC')"
    • or use schemas=() to list the schemas to export explicitly
  • Run the export: expdp parfile=dbname.par
  • Copy the dump files to the Exadata server. We used an EXFER utility that collected all the dump files and transferred them simultaneously.
  • Create a four node RAC database on the Exadata server using DBCA.
    • Create a custom database.
    • Make sure to match the blocksize of your source database.
    • I make it a habit not to include OEM setup or extra components during database creation; they are not needed for this migration.
    • Multiplex your redo logs and controlfiles.
    • Specify your fast recovery area and size, e.g. +RECO_Exadata_Name.
    • Specify your memory settings, archivelog destinations, processes, redo log groups, etc.
    • Create a template for future use; it is much faster to reuse if needed later (see the dbca sketch after this list).
  • Create tnsnames entries, a password file, wallet(s), and an adump directory on all four nodes (a sketch follows this list). I assume you already know how to do this.
  • Validate all nodes! (Confirmation commands appear in the srvctl sketch after this list.)
    • srvctl remove database -d dbname
      srvctl add database -d dbname -c RAC \
      -o /u01/app/oracle/product/11.2.0.4/dbhome_1 \
      -p +DATA_XDT*/DBNAME/PARAMETERFILE/spfiledbname.ora \
      -r PRIMARY -s OPEN -t IMMEDIATE -n dbname

      srvctl add instance -d dbname -i dbname1 -n xdt*dbadm01
      srvctl add instance -d dbname -i dbname2 -n xdt*dbadm02
      srvctl add instance -d dbname -i dbname3 -n xdt*dbadm03
      srvctl add instance -d dbname -i dbname4 -n xdt*dbadm04

  • Before commencing with the import, try to eliminate as many up-front errors as possible:
    • Port over the verify_functions for password management:
      • set lines 150 pages 50000 echo on feedback on long 200000000 serveroutput on size unlimited timing off trim on trims on verify off
        SQL> select dbms_metadata.get_ddl(p.object_type,p.object_name,p.owner) ddl_text from dba_procedures p where object_name like 'VERIFY%';
      • Run the resultant SQL on the Exadata database.
    • Do the same thing with profiles, tablespaces, users, roles, and directories, and run the resultant SQL (a couple of dbms_metadata extras appear in the sketches after this list):
      • SQL> select dbms_metadata.get_ddl('PROFILE',c.profile) ddl_text from (select distinct profile from dba_profiles) c;
      • set echo on feedback on lines 150 pages 50000 serveroutput on size unlimited trim on trims on verify off long 2000000
        col ddl_text for a150
        select dbms_metadata.get_ddl('TABLESPACE',t.tablespace_name) ddl_text from (select tablespace_name from dba_tablespaces t order by 1) t;
      • select dbms_metadata.get_ddl('USER',u.username) ddl_text from dba_users u;
      • select role from dba_roles;
      • select * from dba_directories;
    • Run the import with the same parameters as the export Data Pump job (a parfile sketch follows this list).
    • Query dba_objects for invalid objects and recompile them (see the recompile sketch after this list).
    • At this point, your shiny new four-node RAC database should be ready to turn over to the customer!
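
A few sketches to flesh out the steps above. First, for building the export parfile, here is one way to list candidate schemas on an 11g source. The exclusion list is just a starting assumption; extend it with whatever Oracle-maintained accounts live in your database:

    -- List non-system schemas that own objects (exclusion list is a sketch, not exhaustive)
    select distinct owner
    from dba_objects
    where owner not in ('SYS','SYSTEM','OUTLN','PUBLIC','WMSYS','DBSNMP',
                        'XDB','ORDSYS','MDSYS','CTXSYS','EXFSYS','APPQOSSYS')
    order by owner;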
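
For the DBCA step, the same custom database can be scripted once you have saved a template. A rough sketch of an 11.2-style silent invocation; the template name, passwords, node names, disk group, and character set are all placeholders, so check dbca -help on your release before trusting any flag:

    dbca -silent -createDatabase \
      -templateName dbname_template.dbc \
      -gdbName dbname -sid dbname \
      -sysPassword change_me -systemPassword change_me \
      -nodelist xdt*dbadm01,xdt*dbadm02,xdt*dbadm03,xdt*dbadm04 \
      -storageType ASM -diskGroupName DATA_XDT* \
      -characterSet AL32UTF8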
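
The supporting pieces on each node are quick. A sketch, with hypothetical host and path names throughout:

    # Password file and audit dump directory, repeated on every node (SIDs dbname1..dbname4)
    orapwd file=$ORACLE_HOME/dbs/orapwdbname1 password=change_me
    mkdir -p /u01/app/oracle/admin/dbname/adump

    # tnsnames.ora entry pointing at the SCAN (xdt-scan is a placeholder)
    DBNAME =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = xdt-scan)(PORT = 1521))
        (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = dbname))
      )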
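
Once the database and instances are re-registered with srvctl, confirming that clusterware sees all four nodes is just:

    srvctl config database -d dbname
    srvctl start database -d dbname
    srvctl status database -d dbname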
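
When extracting DDL with dbms_metadata as above, two extras save time: setting SQLTERMINATOR makes the spooled output directly runnable, and get_granted_ddl ports the grants that the user and role DDL alone will miss. Note that get_granted_ddl raises an error for a grantee with no grants of that type, so expect some misses; SCOTT below is a placeholder:

    -- Terminate each generated statement so the spooled DDL runs cleanly
    exec dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);

    -- Role DDL to pair with the dba_roles listing above
    select dbms_metadata.get_ddl('ROLE',r.role) ddl_text from dba_roles r;

    -- Grants for one schema at a time
    select dbms_metadata.get_granted_ddl('ROLE_GRANT','SCOTT') from dual;
    select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','SCOTT') from dual;
    select dbms_metadata.get_granted_ddl('OBJECT_GRANT','SCOTT') from dual;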
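
The import parfile mirrors the export; a sketch using the same placeholders:

    userid=system
    directory=dbname_dir
    dumpfile=dbname_%u.dmp
    logfile=dbname_imp.log
    parallel=25

Then run it with impdp parfile=dbname_imp.par.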
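
Finally, for the invalid-object sweep, utlrp.sql is the stock recompile script shipped with the database:

    -- Count invalid objects per schema after the import
    select owner, count(*) from dba_objects where status = 'INVALID' group by owner order by owner;

    -- Recompile them all (run as SYSDBA)
    @?/rdbms/admin/utlrp.sql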

As always, please test this process in your own sandbox before deploying to production.

Thanks for reading and making jasonbrownsite.wordpress.com a success!

~Jason
