Monday, August 30, 2010

Temporary Tablespace Space Monitoring

Here are some quick SQL statements to monitor the usage of the TEMP tablespace:

Space Usage of TEMP Tablespace:

SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

Temp Space Usage by Session:

SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
S.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
COUNT(*) sort_ops
FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE T.session_addr = S.saddr
AND S.paddr = P.addr
AND T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
S.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;
 
Temp Space Usage by Statement:
 
SELECT S.sid || ',' || S.serial# sid_serial, S.username,
T.blocks * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
T.sqladdr address, Q.hash_value, Q.sql_text
FROM v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
WHERE T.session_addr = S.saddr
AND T.sqladdr = Q.address (+)
AND T.tablespace = TBS.tablespace_name
ORDER BY S.sid;

Thursday, August 26, 2010

Waits Related to Initialization Parameters

Wait Problem: Potential Fix
Free buffer: Increase the DB_CACHE_SIZE; shorten the checkpoint; tune the code
Buffer busy:  Segment Header — Add freelists or freelist groups or use ASSM
Buffer busy:  Data Block — Separate hot data; use reverse key indexes; small block sizes
Buffer busy:  Data Block — Increase initrans and/or maxtrans
Buffer busy:  Undo Header — Use automatic undo management
Buffer busy:  Undo Block — Commit more; use automatic undo management
Latch free:  Investigate the detail (see the latch table below for fixes)
Log buffer space: Increase the log buffer; use faster disks for the redo logs
Scattered read: Indicates many full table scans — tune the code; cache small tables
Sequential read: Indicates many index reads — tune the code (especially joins)
Write complete waits: Add database writers; checkpoint more often; the buffer cache may be too small

Latch Problem: Potential Fix

Library cache:  Use bind variables; adjust the shared_pool_size
Shared pool:  Use bind variables; adjust the shared_pool_size
Row cache objects: Increase the shared pool. This is not a common problem.
Cache buffers chains:  If you get this latch wait, you need to reduce logical I/O rates by tuning the SQL involved and minimizing its I/O requirements. High I/O rates can also be a sign of a hot block (a block that is very heavily accessed).
Cache buffers lru chain:  Contention on this latch can be resolved by increasing the size of the buffer cache, thereby reducing the rate at which new blocks are introduced into it; adjust DB_BLOCK_BUFFERS and possibly DB_BLOCK_SIZE. Multiple buffer pools can also help reduce contention, and you can create additional cache buffers lru chain latches by adjusting the parameter DB_BLOCK_LRU_LATCHES. In pre-9i versions, _DB_BLOCK_HASH_BUCKETS may need to be increased or set to a prime number to reduce the load on the cache buffers chains latches.

Library Cache SQL

The following query uses the V$LIBRARYCACHE view to examine the reload ratio in the library cache:


select Sum(Pins) "Hits",
Sum(Reloads) "Misses",
((Sum(Reloads) / Sum(Pins)) * 100) "Reload %"
from V$LibraryCache;

Data Dictionary Cache Hit Ratio SQL

SQL:

select ((1 - (Sum(GetMisses) / (Sum(Gets) + Sum(GetMisses)))) * 100) "Hit Rate"
from V$RowCache
where Gets + GetMisses <> 0;

Session or System Modifiable Parameter SQL

There are two key fields in the V$PARAMETER view:
■ ISSES_MODIFIABLE Indicates if a user with the ALTER SESSION privilege can modify
this initialization parameter for their session.

■ ISSYS_MODIFIABLE Indicates if someone with ALTER SYSTEM privilege can modify
this particular parameter.


This query displays the initialization parameters that can be modified with an ALTER SYSTEM or ALTER SESSION command (partial result displayed):

select name, value, isdefault, isses_modifiable, issys_modifiable
from v$parameter
where issys_modifiable <> 'FALSE'
or isses_modifiable <> 'FALSE'
order by name;




Data Cache Hit Ratio SQL

Query to view the data cache hit ratio:

column phys format 999,999,999 heading 'Physical Reads'
column gets format 999,999,999 heading ' DB Block Gets'
column con_gets format 999,999,999 heading 'Consistent Gets'
column hitratio format 999.99 heading ' Hit Ratio '
select sum(decode(name,'physical reads',value,0)) phys,
sum(decode(name,'db block gets',value,0)) gets,
sum(decode(name,'consistent gets', value,0)) con_gets,
(1 - (sum(decode(name,'physical reads',value,0)) /
(sum(decode(name,'db block gets',value,0)) +
sum(decode(name,'consistent gets',value,0))))) * 100 hitratio
from v$sysstat;

Tuesday, August 24, 2010

Cardinality in Oracle Table Joins

Cardinality: This is the number of rows that Oracle expects to process, or the number of rows it expects to be returned, after joining the tables on the join condition.
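As a quick illustration (a minimal sketch assuming sample DEPT and EMPLOYEE tables like the ones used elsewhere on this page; any two joinable tables will do), the Rows column of the execution plan output shows the optimizer's cardinality estimate for the join:

SQL> EXPLAIN PLAN FOR
     select d.dept_id, e.emp_id from dept d, employee e where d.dept_id = e.dept_id;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);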

More to follow...

Check RMAN Backup Status

SQL to report on all backups, i.e., full, incremental, and archive log backups:

col STATUS format a9
col hrs format 999.99
select SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi') end_time,
elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;

SQL to report on all full and incremental backups but not archive log backups:

col STATUS format a9
col hrs format 999.99
select SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi') end_time,
elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
where input_type='DB INCR'
order by session_key;

To check progress of RMAN Backup:

select SID, START_TIME,TOTALWORK, sofar, (sofar/totalwork) * 100 done,
sysdate + TIME_REMAINING/3600/24 end_at
from v$session_longops
where totalwork > sofar
AND opname NOT LIKE '%aggregate%'
AND opname like 'RMAN%';

Monday, April 12, 2010

Resize/Add/Move/Rename/Drop datafile and temporary datafile in Oracle

Resize a datafile and tempfile:

SQL> alter database datafile '/u10/oradata/TEST/test.tst_tbspace.data.01.dbf' resize 50M;

SQL> alter database tempfile '/u09/oradata/TEST/test.tmp_tbspace.data.01.dbf' resize 50M;

Add a datafile:

SQL> alter tablespace tst_tbspace1 add datafile '/u10/oradata/TEST/test.tst_tbspace.data.02.dbf' size 2000M;

SQL> alter tablespace tmp_tbspace add tempfile '/u09/oradata/TEST/test.tmp_tbspace.data.02.dbf' size 2000M;


Steps to move and/or rename a datafile in Oracle:

1. Shut down the database and exit the SQL prompt - Database level step

Ex: SQL> shutdown

2. At the unix prompt, copy or move the datafile from the current location to the new location using the OS copy (cp) or move (mv) command - OS level step

Ex: $ mv /u10/oradata/TEST/test.tst_tbspace.data.01.dbf /u01/oradata/TEST/test.tst_tbspace.data.01.dbf

You can change the location and also rename the file at the same time if you need to.

Ex: $ mv /u10/oradata/TEST/test.tst_tbspace.data.01.dbf /u01/oradata/TEST/test.tst_tbspace.datafile.02.dbf

3. Open a SQL session and startup mount the database - Database level step

Ex: SQL> startup mount

4. Rename the datafile using oracle rename command - Database level step

Ex: SQL> alter database rename file '/u10/oradata/TEST/test.tst_tbspace.data.01.dbf'
                 to '/u01/oradata/TEST/test.tst_tbspace.data.01.dbf';

5. Open the database - Database level step

Ex: SQL> alter database open;

                       (or)

      SQL> shutdown

      SQL> startup

Drop datafile:

SQL>  alter database datafile '/u10/oradata/TEST/test.tst_tbspace.data.01.dbf' offline drop;

(or)

SQL> alter tablespace tst_tbspace drop datafile '/u10/oradata/TEST/test.tst_tbspace.data.01.dbf';

Rename/Copy a Table

A table in Oracle can be renamed using the following syntax:

alter table table_name rename to new_table_name;

Ex: SQL> alter table dept rename to hist_dept;

Renaming a table doesn't update dependent objects such as HTML DB applications, PL/SQL code, and views, some of which may become invalid based on their dependency on the renamed object.

A copy of the original table can be created using the following syntax:

create table new_table_name as select * from table_name;

Ex: SQL> create table hist_dept as select * from dept;

We need not include all the columns from the original table; we can create a copy with only the columns we want in the new table. The new table can be created from a single table or from multiple tables. The advantage of using CREATE TABLE....AS SELECT..(CTAS) is that it creates the new table with datatypes matching the source table(s).

A copy of the original table created including only the columns of our choice from a single table:

create table new_table_name as select column1, column2 from table_name;

Ex: SQL> create table hist_dept as select dept_id, dept_name from dept;


A copy of the original table created including only the columns of our choice from two tables:

create table new_table_name as select t1.column1, t2.column1, t2.column2 from table_name1 t1, table_name2 t2 where t1.column1 = t2.column1;

Ex: SQL> create table hist_dept as select d.dept_id, e.emp_id, e.emp_name from dept d, employee e where
                 d.dept_id = e.dept_id;

Thursday, April 08, 2010

Oracle Background Processes

Oracle Background Processes: To maximize performance and accommodate many users, a multiprocess Oracle Database system uses background processes. Background processes consolidate functions that would otherwise be handled by multiple database programs running for each user process. Background processes asynchronously perform I/O and monitor other Oracle Database processes to provide increased parallelism for better performance and reliability. (A quick query to list the background processes running in your instance appears after the list below.)

Basic Oracle Background Processes:
  • Database writer (DBWn): The database writer writes modified blocks from the database buffer cache to the datafiles. Oracle Database allows a maximum of 20 database writer processes (DBW0-DBW9 and DBWa-DBWj). The DB_WRITER_PROCESSES initialization parameter specifies the number of DBWn processes. The database selects an appropriate default setting for this initialization parameter or adjusts a user-specified setting based on the number of CPUs and the number of processor groups.

  • Log writer (LGWR): The log writer process writes redo log entries to disk. Redo log entries are generated in the redo log buffer of the system global area (SGA). LGWR writes the redo log entries sequentially into a redo log file. If the database has a multiplexed redo log, then LGWR writes the redo log entries to a group of redo log files.

  • Checkpoint (CKPT): At specific times, all modified database buffers in the system global area are written to the datafiles by DBWn. This event is called a checkpoint. The checkpoint process is responsible for signalling DBWn at checkpoints and updating all the datafiles and control files of the database to indicate the most recent checkpoint.

  • System monitor (SMON): The system monitor performs recovery when a failed instance starts up again. In a Real Application Clusters database, the SMON process of one instance can perform instance recovery for other instances that have failed. SMON also cleans up temporary segments that are no longer in use and recovers dead transactions skipped during system failure and instance recovery because of file-read or offline errors. These transactions are eventually recovered by SMON when the tablespace or file is brought back online.

  • Process monitor (PMON): The process monitor performs process recovery when a user process fails. PMON is responsible for cleaning up the cache and freeing resources that the process was using. PMON also checks on the dispatcher processes (described later in this table) and server processes and restarts them if they have failed.

  • Archiver (ARCn): One or more archiver processes copy the redo log files to archival storage when they are full or a log switch occurs.

  • Recoverer (RECO): The recoverer process is used to resolve distributed transactions that are pending because of a network or system failure in a distributed database. At timed intervals, the local RECO attempts to connect to remote databases and automatically complete the commit or rollback of the local portion of any pending distributed transactions.

  • Dispatcher (Dnnn): Dispatchers are optional background processes, present only when the shared server configuration is used.

  • Global Cache Service (LMS): In a Real Application Clusters environment, this process manages resources and provides inter-instance resource control.
Source: Oracle Documentation
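To see which background processes are currently running in your instance, you can query V$BGPROCESS (a quick check, not part of the documentation excerpt above; processes with a non-null process address are the running ones):

SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00' ORDER BY name;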

Oracle Dedicated and Shared Server Processes

Oracle creates server processes to handle the requests of user processes connecting to the database. A server process can be one of the following:
  • A dedicated server process, which serves only one user process or user connection
  • A shared server process, which serves multiple user processes or user connections
Dedicated Server Processes:

Oracle databases are configured for dedicated server processes by default; to enable shared server processes you need to configure them manually by setting one or more initialization parameters. However, a dedicated server process is always required:
  • To submit a batch job (for example, when a job can allow little or no idle time for the server process)
  • To use Recovery Manager (RMAN) to back up, restore, or recover a database
To request a dedicated server connection when Oracle Database is configured for shared server, users must connect using a net service name that is configured to use a dedicated server. Specifically, the net service name value should include the SERVER=DEDICATED clause in the connect descriptor.
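To check how your current sessions connected, the SERVER column of V$SESSION shows DEDICATED or SHARED for each session (a quick check; an idle shared server session may show NONE):

SQL> SELECT username, server, COUNT(*) FROM v$session GROUP BY username, server;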




Figure 1: Oracle Database Dedicated Server Processes

Shared Server Process:



Figure 2: Oracle Database Shared Server Processes

In a shared server configuration, a client user process connects to a dispatcher, and the dispatcher communicates the client's request to the Oracle database. Once the processing is done, Oracle sends the output back to the dispatcher, which in turn passes it on to the client user process. One dispatcher can serve multiple client user processes through connection pooling, which can be enabled for shared server. Further, shared server can be configured for session multiplexing, which combines multiple sessions for transmission over a single network connection in order to conserve the operating system's resources.

Initialization Parameters for Shared Server:

The following initialization parameters control shared server operation:
  • SHARED_SERVERS: Specifies the initial number of shared servers to start and the minimum number of shared servers to keep. This is the only required parameter for using shared servers.
  • MAX_SHARED_SERVERS: Specifies the maximum number of shared servers that can run simultaneously.
  • SHARED_SERVER_SESSIONS: Specifies the total number of shared server user sessions that can run simultaneously. Setting this parameter enables you to reserve user sessions for dedicated servers.
  • DISPATCHERS: Configures dispatcher processes in the shared server architecture.
  • MAX_DISPATCHERS: Specifies the maximum number of dispatcher processes that can run simultaneously. This parameter can be ignored for now. It will only be useful in a future release when the number of dispatchers is auto-tuned according to the number of concurrent connections.
  • CIRCUITS: Specifies the total number of virtual circuits that are available for inbound and outbound network sessions.
Enabling Shared Server:

Shared server is enabled by setting the SHARED_SERVERS initialization parameter to a value greater than 0. The other shared server initialization parameters need not be set. Because shared server requires at least one dispatcher in order to work, a dispatcher is brought up even if no dispatcher has been configured.
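For example, a minimal way to enable shared server dynamically (the value 2 is just an illustration; choose it based on your workload):

SQL> ALTER SYSTEM SET SHARED_SERVERS = 2;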

Determining a Value for SHARED_SERVERS and others:

The SHARED_SERVERS initialization parameter specifies the minimum number of shared servers that you want created when the instance is started. After instance startup, Oracle Database can dynamically adjust the number of shared servers based on how busy existing shared servers are and the length of the request queue.
You can limit the number of shared servers that can be created by setting the MAX_SHARED_SERVERS parameter, which specifies the maximum number of shared servers that PMON can create automatically. Once you set the SHARED_SERVERS parameter, Oracle decides how many shared servers to create based on how busy the system is.

The SHARED_SERVER_SESSIONS initialization parameter specifies the maximum number of concurrent shared server user sessions. Setting this parameter, which is a dynamic parameter, lets you reserve database sessions for dedicated servers.
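The high-water marks in V$SHARED_SERVER_MONITOR can help you choose values for these parameters (a quick check on a system already running shared server):

SQL> SELECT maximum_connections, maximum_sessions, servers_started,
            servers_terminated, servers_highwater
     FROM v$shared_server_monitor;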
Configuring Dispatchers:

Ex: This is a typical example of setting the DISPATCHERS initialization parameter.

DISPATCHERS="(PROTOCOL=TCP)(DISPATCHERS=2)"

You can use multiple different protocols for different dispatchers within the same initialization parameter.

Ex: DISPATCHERS='(PROT=tcp)(DISP=5)', '(PROT=tcps)(DISP=3)'

Monitor the following views to determine the load on the dispatcher processes:

  • V$QUEUE
  • V$DISPATCHER
  • V$DISPATCHER_RATE
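For example, a commonly used check of how busy each dispatcher is, based on the BUSY and IDLE columns of V$DISPATCHER (a rate consistently above about 50% suggests adding dispatchers):

SQL> SELECT name, (busy / (busy + idle)) * 100 "% BUSY" FROM v$dispatcher;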
Shutting down a specific dispatcher process:
To identify the name of the specific dispatcher process to shut down, use the V$DISPATCHER dynamic performance view.

SQL> SELECT NAME, NETWORK FROM V$DISPATCHER;

Each dispatcher is uniquely identified by a name of the form Dnnn. To shut down dispatcher D002, issue the following statement:

SQL> ALTER SYSTEM SHUTDOWN IMMEDIATE 'D002';

Monitoring Shared Server:
The following views are useful for obtaining information about your shared server configuration and for monitoring performance.

  • V$DISPATCHER: Provides information on the dispatcher processes, including name, network address, status, various usage statistics, and index number.
  • V$DISPATCHER_CONFIG: Provides configuration information about the dispatchers.
  • V$DISPATCHER_RATE: Provides rate statistics for the dispatcher processes.
  • V$QUEUE: Contains information on the shared server message queues.
  • V$SHARED_SERVER: Contains information on the shared servers.
  • V$CIRCUIT: Contains information about virtual circuits, which are user connections to the database through dispatchers and servers.
  • V$SHARED_SERVER_MONITOR: Contains information for tuning shared server.
  • V$SGA: Contains size information about various system global area (SGA) groups. May be useful when tuning shared server.
  • V$SGASTAT: Contains detailed statistical information about the SGA, useful for tuning.
  • V$SHARED_POOL_RESERVED: Lists statistics to help tune the reserved pool and space within the shared pool.
Source: Oracle Documentation

All about startup migrate and startup upgrade

startup migrate:

=> Used to upgrade a database through 9i.
See the step 11 from the 9.2 manual upgrade guide :
http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96530/upgrade.htm#1009472


=> Used to downgrade a database through 9i.
See the step 5 from the 9.2 downgrade guide :
http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96530/downgrad.htm#248958


=> Used to downgrade a database since 10g.
See the step 18 from the 10.2 downgrade guide :
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14238/downgrade.htm#sthref415


startup upgrade:
=> Used to upgrade a database since 10g.
See the step 7 from the 10.2 manual upgrade guide :
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14238/upgrade.htm#CACGGHJC

Tuesday, April 06, 2010

Recompiling Invalid Schema Objects

Objects in a database tend to become invalid due to operations such as patches, DDL changes, and upgrades. Invalid objects should be recompiled to ensure the proper health of the database and users' access to them. I will describe here the ways you can compile objects in Oracle.

Identifying Invalid Objects:

col OBJECT_NAME format a40

SQL> SELECT OBJECT_NAME,
                            OBJECT_TYPE,
                            OWNER,
                            STATUS
           FROM DBA_OBJECTS
           WHERE STATUS='INVALID'
           ORDER BY OBJECT_NAME, OBJECT_TYPE, OWNER;

To get the list of invalid objects for a particular schema, below is the SQL:

col OBJECT_NAME format a40

SQL> SELECT OBJECT_NAME,
                          OBJECT_TYPE,
                          OWNER,
                          STATUS
           FROM DBA_OBJECTS
          WHERE OWNER='DEPT' AND STATUS='INVALID'
          ORDER BY OBJECT_NAME, OBJECT_TYPE, OWNER;

DBMS_UTILITY.COMPILE_SCHEMA:
The DBMS_UTILITY package provides the COMPILE_SCHEMA procedure to compile all the objects in a schema:

SQL> EXECUTE DBMS_UTILITY.COMPILE_SCHEMA ('SCHEMA_NAME');

UTLRP and UTLPRP:
UTLRP and UTLPRP are Oracle-provided scripts (under $ORACLE_HOME/rdbms/admin) to compile all invalid objects in an Oracle database.

Ex: SQL> @?/rdbms/admin/utlrp

Manual Approach:
Invalid objects can be compiled individually once you have the list of invalid objects in your Oracle database. Below are some examples:

SQL> ALTER PACKAGE package_name COMPILE;

SQL> ALTER PACKAGE package_name COMPILE BODY;

SQL> ALTER PROCEDURE procedure_name COMPILE;

SQL> ALTER FUNCTION function_name COMPILE;

SQL> ALTER TRIGGER trigger_name COMPILE;

SQL> ALTER VIEW view_name COMPILE;


An alternative approach is to use the DBMS_DDL package to perform the recompilations:

SQL> EXEC DBMS_DDL.alter_compile('PACKAGE', 'SCHEMA_NAME', 'PACKAGE_NAME');

SQL> EXEC DBMS_DDL.alter_compile('PACKAGE BODY', 'SCHEMA_NAME', 'PACKAGE_NAME');

SQL> EXEC DBMS_DDL.alter_compile('PROCEDURE', 'SCHEMA_NAME', 'PROCEDURE_NAME');

SQL> EXEC DBMS_DDL.alter_compile('FUNCTION', 'SCHEMA_NAME', 'FUNCTION_NAME');

SQL> EXEC DBMS_DDL.alter_compile('TRIGGER', 'SCHEMA_NAME', 'TRIGGER_NAME');
 
In addition to the above approaches, you can write your own script to collect the invalid objects and recompile each of them using the manual approach; a sketch of such a script follows.
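A minimal sketch of such a script (an anonymous PL/SQL block that generates and runs the ALTER ... COMPILE statements from DBA_OBJECTS; the object-type list is illustrative, and you should adjust it and the error handling to your needs):

SET SERVEROUTPUT ON

BEGIN
  FOR obj IN (SELECT owner, object_name, object_type
              FROM dba_objects
              WHERE status = 'INVALID'
                AND object_type IN ('PACKAGE', 'PROCEDURE', 'FUNCTION', 'TRIGGER', 'VIEW'))
  LOOP
    BEGIN
      -- builds e.g. ALTER PROCEDURE "SCOTT"."MY_PROC" COMPILE
      EXECUTE IMMEDIATE 'ALTER ' || obj.object_type || ' "' || obj.owner ||
                        '"."' || obj.object_name || '" COMPILE';
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Failed to compile ' || obj.owner || '.' || obj.object_name);
    END;
  END LOOP;
END;
/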

Monday, April 05, 2010

How to get DDL for an object: TABLE, INDEX, PACKAGE....

DBMS_METADATA is a package that can be used to get the DDL for tables, indexes, etc. Below is a quick view of how it works:


SQL> select dbms_metadata.get_ddl('TABLE','IDX3_TAB') from dual;

 
The output would be:

CREATE TABLE "SCOTT"."IDX3_TAB"
( "NAME" VARCHAR2(30),
"ID" NUMBER,
"ADDR" VARCHAR2(100),
"PHONE" VARCHAR2(30)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS"


To get the create table definition without the storage clause you could do as follows:

SQL> EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);

The output should be PL/SQL procedure successfully completed.


And then if you run

set long 100000
select dbms_metadata.get_ddl('TABLE','IDX3_TAB') from dual;

would return

CREATE TABLE "SCOTT"."IDX3_TAB"
( "NAME" VARCHAR2(30),
"ID" NUMBER,
"ADDR" VARCHAR2(100),
"PHONE" VARCHAR2(30)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 LOGGING TABLESPACE "USERS"

But based on my experience the above package wasn't useful when I tried to get the DDL for a package body, and DBA_SOURCE came in handy for that. Below is an example:
 
SQL> SELECT TEXT FROM DBA_SOURCE WHERE NAME='PKG_DEPT_PROCESS' ORDER BY LINE;

Thursday, April 01, 2010

CURSOR_SHARING Parameter in Oracle

The CURSOR_SHARING parameter controls whether Oracle will auto-bind a SQL statement, i.e., replace the literals in it with system-generated bind variables.

For example:
SQL> select * from dept where dept_id = 'literal_value';

Oracle takes the above statement and replaces the literal in the predicate with something like the below:

SQL> select * from dept where dept_id=:"SYS_B_0";

Oracle then compares the rewritten statement to the plans it already has in memory to see whether a plan can be reused, potentially leading to better utilization of the shared pool and a reduction in the number of hard parses performed by the system.
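You can check the current setting before experimenting (a quick check):

SQL> show parameter cursor_sharing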

The cursor_sharing parameter can be set to three values:
  • EXACT: This is the default setting. With this value in place, the query is not rewritten to use bind variables.
  • FORCE: This setting rewrites the query, replacing all literals with bind values and setting up a one-size-fits-all plan—a single plan for the rewritten query. I'll demonstrate what that implies in a moment.
  • SIMILAR: This setting also rewrites the query, replacing the literals with bind variables, but can set up different plans for different bind variable combinations. This last point is why CURSOR_SHARING=SIMILAR might reduce the number of plans generated. Because multiple plans may be generated, the setting of SIMILAR may or may not reduce the number of actual plans you observe in the shared pool.
An example to show how CURSOR_SHARING works with setting equal to EXACT, FORCE and SIMILAR:

SQL> alter session set cursor_sharing=EXACT;

Session altered.

SQL> select * from dual CS_EXACT where dummy='A';

no rows selected

SQL> select * from dual CS_EXACT where dummy='B';

no rows selected

SQL> alter session set cursor_sharing=FORCE;

Session altered.

SQL> select * from dual CS_FORCE where dummy='A';

no rows selected

SQL> select * from dual CS_FORCE where dummy='B';

no rows selected

SQL> alter session set cursor_sharing=SIMILAR;

Session altered.

SQL> select * from dual CS_SIMILAR where dummy='A';

no rows selected

SQL> select * from dual CS_SIMILAR where dummy='B';

no rows selected

SQL> select sql_text
2 from v$sql
3 where sql_text like 'select * from dual CS% where dummy=%'
4 order by sql_text;

SQL_TEXT
--------------------------------------------------------------------------------
select * from dual CS_EXACT where dummy='A'
select * from dual CS_EXACT where dummy='B'
select * from dual CS_FORCE where dummy=:"SYS_B_0"
select * from dual CS_SIMILAR where dummy=:"SYS_B_0"

CURSOR_SHARING=EXACT: As we can see from the above example, Oracle keeps a separate entry for each of the SELECT statements that include the word "CS_EXACT". With this setting, every SQL statement that differs even in its literal values is treated as new: a new plan is generated for each query we execute, the plans are not shared, and a new entry is created in V$SQL for every such statement, as you can see above.

CURSOR_SHARING=FORCE: With this setting, in each statement executed above that includes the word "CS_FORCE", the literal values 'A' and 'B' are replaced by :"SYS_B_0". Oracle uses the same plan whether the predicate value is 'A' or 'B', since the rest of the statement is identical, and thus we see just one entry in V$SQL for the two SQL statements we executed.

CURSOR_SHARING=SIMILAR: With this setting, in each statement executed above that includes the word "CS_SIMILAR", the literal values 'A' and 'B' are again replaced by :"SYS_B_0", and again we see just one entry in V$SQL for the two statements. So FORCE and SIMILAR look the same, huh? There is one difference: with SIMILAR, Oracle not only checks for similar-looking statements but also compares the plans. For example, when I executed the SELECT statement with the literal value 'A', it generated a plan, say a FULL TABLE SCAN. When I executed the same SELECT with the literal value 'B', it generated a plan for that statement as well and compared it with the stored plan for 'A'. If both plans are the same, say a FULL TABLE SCAN for 'A' and also a FULL TABLE SCAN for 'B' (as we assumed above), then we see only one entry in V$SQL. If the plans are different, say a FULL TABLE SCAN when the literal value is 'A' and an INDEX SCAN when it is 'B', then we see two different entries in V$SQL for the statements containing the word "CS_SIMILAR", and you would see something like the output below (which is not from our example above). The results displayed below are with respect to an imaginary table "t", a big table of at least 100 rows with an index on it and skewed data:

SQL> alter session set cursor_sharing=FORCE;

Session altered.

SQL> select * from t CS_FORCE where t_id=1;

1 row selected.

SQL> select * from t CS_FORCE where t_id='99';

1 row selected.

SQL> alter session set cursor_sharing=SIMILAR;

Session altered.

SQL> select * from t CS_SIMILAR where t_id='1';

1 row selected.

SQL> select * from t CS_SIMILAR where t_id='99';

1 row selected.

SQL> select sql_text

2 from v$sql
3 where sql_text like 'select * from t CS% where t_id=%'
4 order by sql_text;

SQL_TEXT
--------------------------------------------------------------------------------
select * from t CS_FORCE where t_id=:"SYS_B_0"
select * from t CS_SIMILAR where t_id=:"SYS_B_0"
select * from t CS_SIMILAR where t_id=:"SYS_B_0"

From the above example we can see that when the plans are different, say a FULL TABLE SCAN when the literal value is '1' and an INDEX SCAN when it is '99', there are two different entries in V$SQL for the SQL statements containing the word "CS_SIMILAR", while the CS_FORCE statements still share a single entry.

Database Structures: Logical Structures

Oracle database logical structures mainly comprise tablespaces, segments, extents, and Oracle data blocks.

I will present the information starting with the finest-grained logical structure, the Oracle data block.

Oracle Datablocks: Oracle data blocks are at the finest level of granularity; all of the Oracle database data is stored in data blocks. One data block corresponds to a specific number of bytes, which occupy the same number of bytes of physical disk space. The standard size of a data block is determined by the initialization parameter DB_BLOCK_SIZE. In addition to the standard size, you can specify up to four additional (nonstandard) block sizes.
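For example, to check the standard block size of your database (a quick check):

SQL> select value from v$parameter where name = 'db_block_size';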

Extent: The extent is the next level of Oracle logical database space. An extent comprises a specific number of contiguous data blocks, obtained in a single allocation and used to store a specific type of information.

Segment: A segment is a set of extents allocated for a certain logical structure. A segment can be one of the following types: data segment, index segment, temporary segment, or rollback segment.

Tablespace: Each database is logically divided into one or more tablespaces. One or more datafiles are explicitly created for each tablespace to physically store the data of all logical structures in a tablespace. The combined size of the datafiles in a tablespace is the total storage capacity of the tablespace.
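To see this hierarchy for a real object (a sketch using SCOTT.EMP as an example; substitute your own owner and segment name), DBA_EXTENTS maps a segment to its extents and their blocks:

SQL> select segment_name, segment_type, tablespace_name, extent_id, blocks, bytes
     from dba_extents
     where owner = 'SCOTT' and segment_name = 'EMP'
     order by extent_id;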

Database Structures: Physical Structures

A database consists of physical structures and logical structures. In this post I will describe the physical structures in simple terms:

Datafiles: Contain all of the database data for logical structures such as tables, indexes, packages, procedures, functions, and triggers.

Redo Log Files: Hold records of all changes made to the database, for recovery purposes.

Control Files: Record the physical structure and status of the database.

Parameter File: Contains startup values for database parameters (often referred to as the init.ora file).
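The corresponding dynamic views give a quick inventory of these physical files (a minimal check):

SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;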


Note: The explanation provided here is for understanding only.

Wednesday, March 31, 2010

Oracle Resource Manager

Oracle Resource Manager (ORM)
Goal of ORM:

To give Oracle Database Server more control over resource management decisions, thus preventing problems resulting from inefficient operating system management.

Problems addressed by ORM:
• Excessive overhead
• Inefficient scheduling
• Inappropriate allocation of resources
• Inability to manage database-specific resources, such as parallel execution servers and active sessions

How the above problems can be addressed:
• Guarantee specific users a certain amount of processing resources irrespective of the load on the system and the number of users

Ex: HR_BATCH_GROUP – 50% CPU will be allocated for Batch processing jobs irrespective of the load on the system

• Distribute available processing resources by allocating percentages of CPU time to different users and applications. In a data warehouse, a higher percentage may be given to ROLAP (relational on-line analytical processing) applications than to batch jobs.

Ex: CPU Resources distribution for HR:
HR_BATCH_GROUP – 50% CPU
HR_WF_REPT_AUD_METADATA_GROUP – 20% CPU
HR_ADHOC_HRLINK_WF_GROUP – 15% CPU
HR_ADHOC_BUS_USER_GROUP – 5% CPU
HR_STEAM_DBLINK_USERS_GROUP – 5% CPU
OTHER_GROUPS – 5% CPU

• Limit HR_ADHOC_BUS_USER_GROUP to 10 concurrent sessions, so that members of the group performing any operation are limited to a maximum of 10 sessions within the group. Additional sessions beyond the maximum are queued for execution, but you can specify a timeout period after which queued jobs terminate. You can also prevent the execution of operations that the optimizer estimates will run longer than a specified limit.
Ex: The parameter setting for the session pool is ACTIVE_SESS_POOL_P1 => 10
Ex: The parameter setting for the queue timeout is QUEUEING_P1 => 60
Ex: The parameter setting for the execution time limit is MAX_EST_EXEC_TIME => 2700 (45 minutes)

• (not doing in below example) Allow automatic switching of users from one group to another group based on administrator defined criteria. If a member of a particular group of users creates a session that executes for longer than a specified amount of time, that session can be automatically switched to another group of users with different resource requirements.

• (not doing in below example) Create an undo pool. This pool consists of the amount of undo space that can be consumed in by a group of users.

• (not doing in below example) Limit the amount of time that a session can be idle. This can be further defined to mean only sessions that are blocking other sessions.

• (not doing in below example) Configure an instance to use a particular method of allocating resources. You can dynamically change the method, for example, from a daytime setup to a nighttime setup, without having to shut down and restart the instance.

• (not doing in below example) Allow the cancellation of long-running SQL statements and the termination of long-running sessions.

Terminology:
Resource Plan: A resource plan is a way to group a set of resource consumer groups together and specify how resources should be divided among them

Resource consumer groups: Allow the administrator to group user sessions together by resource requirements

Resource allocation methods: determine what policy to use when allocating for any particular resource. Resource allocation methods are used by resource plans and resource consumer groups
Resource plan directives: are a means of assigning consumer groups to particular plans and partitioning resources among consumer groups by specifying parameters for each resource allocation method

Subplans: allow further subdivision of resources among different users of an application

Levels: provide a mechanism to specify distribution of unused resources among available users. Up to eight levels of resource allocation can be specified

Steps for implementing ORM:
1. Execute the HR_PLAN.sql and check for any errors. HR_PLAN.sql is an example of how an ORM Plan can be created.

2. Do the following once ORM has been installed successfully without any errors
$ sqlplus "/ as sysdba"

a. SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'HR_PLAN';

b. Set the init parameter RESOURCE_MANAGER_PLAN = HR_PLAN in the database init parameter file

Note: Both of the above steps are required: the first makes the plan active immediately without a bounce, and the second ensures the plan remains active after the database is eventually bounced.

HR_PLAN:

BEGIN

DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;

DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'HR_PLAN', COMMENT => 'HR ORM Plan');

DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_BATCH_GROUP', COMMENT => 'HR Batch group');

DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_WF_REPT_AUD_METADATA_GROUP', COMMENT => 'Reporting group');

DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_ADHOC_HRLINK_WF_GROUP', COMMENT => 'Ad-hoc WF HRLink group');

DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_ADHOC_BUS_USER_GROUP', COMMENT => 'Ad-hoc Business User group');

DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_STEAM_DBLINK_USERS_GROUP', COMMENT => 'Steam DBLink User group');

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_BATCH_GROUP', COMMENT => 'Batch Process', CPU_P1 => 50);

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_WF_REPT_AUD_METADATA_GROUP', COMMENT => 'Reporting Process', CPU_P1 => 20);

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_ADHOC_HRLINK_WF_GROUP', COMMENT => 'Ad-hoc WF HRLink group', CPU_P1 => 15);

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_ADHOC_BUS_USER_GROUP', COMMENT => 'Ad-hoc Business User group', CPU_P1 => 5, MAX_EST_EXEC_TIME => 2700, ACTIVE_SESS_POOL_P1 => 10, QUEUEING_P1 => 60);

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_STEAM_DBLINK_USERS_GROUP', COMMENT => 'Steam User group', CPU_P1 => 5);

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN',GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'mandatory', CPU_P1 => 5);

DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('SYSADM', 'HR_BATCH_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('SYSADM', 'HR_BATCH_GROUP');

DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('WFLINK', 'HR_ADHOC_HRLINK_WF_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('WFLINK', 'HR_ADHOC_HRLINK_WF_GROUP');

DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('BUS_AH32366', 'HR_ADHOC_BUS_USER_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('BUS_AH32366', 'HR_ADHOC_BUS_USER_GROUP');

DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('HR_LINK_APAC','HR_STEAM_DBLINK_USERS_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('HR_LINK_APAC','HR_STEAM_DBLINK_USERS_GROUP');

DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('CSBOUTA1','HR_WF_REPT_AUD_METADATA_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('CSBOUTA1','HR_WF_REPT_AUD_METADATA_GROUP');


END;
/
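After the script completes, you can verify the plan, its directives, and the consumer groups through the DBA_RSRC_* views (a quick sanity check):

SQL> SELECT plan, status FROM dba_rsrc_plans WHERE plan = 'HR_PLAN';

SQL> SELECT group_or_subplan, cpu_p1 FROM dba_rsrc_plan_directives WHERE plan = 'HR_PLAN';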


Steps for enabling ORM:
There are two ways for enabling ORM after you create your custom ORM Plan in the database. Below are the steps:
1. Enable it dynamically from a SQLPLUS Session
Ex: SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'HR_PLAN';
2. Enable it permanently in init.ora if you are using a pfile
Ex: In the init file add the parameter: RESOURCE_MANAGER_PLAN = 'HR_PLAN'

Both of the above steps are recommended: updating the init file alone requires the database to be bounced, so if that cannot be done immediately, Step 1 enables Resource Manager right away, and Step 2 ensures Resource Manager is enabled automatically whenever the database is bounced.

Steps for disabling ORM: There are two ways of disabling ORM. Below are the steps:

1. Disable it dynamically from a SQLPLUS Session
Ex: SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = ' '; (a space between the quotes)
2. Disable it permanently in init.ora if you are using a pfile
Ex: In the init file comment out or remove the parameter: RESOURCE_MANAGER_PLAN = 'HR_PLAN'

ORM FAQ's:
Q: How does the Resource Manager work in a multi-instance server?

A: The Resource Manager cannot manage resources across instances, only within an instance, so it would not be useful in a multi-instance, single-server situation. If you have more than one instance on a server, you must use OS tools, such as taking a big honking Sun E10k and setting up separate domains (so that, all of a sudden, you don't have more than one instance per "server").

Q: Overhead of using "Oracle Resource Manager"?

A: The implementation of a resource plan is going to take some resources itself (of course). The question does not quantify what "non-insignificant" means, and everyone has a different definition for that, so it is not really possible to comment.
Note: The above question has been taken from www.asktom.oracle.com website

Q: When does "Oracle Resource Manager" (ORM) kick in?

A: Resource Manager basically does nothing until a resource is totally maxed out (you cannot put CPU in the bank and use it later). So even if you say "100% to group A", other groups can still use the CPU unless group A actually needs to consume 100%. Resource Manager is not a "fence", but rather an attempt to assure you that if group A makes a demand for X%, it will be given to them.

Q: Will there be any degradation in response time or execution time of jobs than the regular taken time?

A: Yes; when the server is busy at 100% CPU or more, jobs will take longer to finish than they might regularly. But the response time will be almost the same.

Q. What is the overhead after implementing ORM?

A: I found no specific answer on the overhead of ORM resource regulation, so I ran a test to find out, the results of which are presented here: there is minimal to no impact while the server is not 100% busy, and there may be a very small overhead when the server is at 100%, which hasn't been precisely quantified, but I did some estimation based on my tests.

Reference: Based on information read from Oracle Documentation and other online websites

AIX commands that everyone needs

Kernel Command:

How to know if you are running a 32-bit kernel or 64-bit kernel?

To display the info at the unix prompt type:

$ bootinfo -K


Hardware Commands:

How much real memory does a unix server have?

$ bootinfo -r

$ lsattr -El sys0 -a realmem

The above commands display memory info in KB.

How to list attributes of various devices on your system?

To list current values of the attributes for tape device, rmt0 type:

$ lsattr -l rmt0 -E

To list default values of the attributes for tape device, rmt0 type:

$ lsattr -l rmt0 -D

To display system level attributes type:

$ lsattr -E -l sys0

To display processors info on a unix system type:

$ lscfg | grep proc

To display hard disks information and how many of them are in use type:

$ lspv

To list information about a specific physical volume ex: hdisk1 type:

$ lspv hdisk1

To get a detail configuration of a unix system type:

$ lscfg

The following options provide specific information:

-p: Displays platform-specific device information. The flag is applicable to AIX 4.2.1 or later.
-v: Displays the VPD (Vital Product Database) found in the customized VPD object class.

For example, to display details about the tape drive, rmt0, type:

$ lscfg -vl rmt0

You can obtain similar information using prtconf command.


To find out about chip type, system name, node name, model number and etc use uname command with various options to get details about unix system. Some example are listed below:

uname -p : Displays the chip type of the system. For example, PowerPC.
uname -r : Displays the release number of the operating system.
uname -s : Displays the system name. For example, AIX.
uname -n : Displays the name of the node.
uname -a : Displays the system name, nodename, version, machine ID.
uname -M : Displays the system model name. For example, IBM, 9114-275.
uname -v : Displays the operating system version.
uname -m : Displays the machine ID number of the hardware running the system.
uname -u : Displays the system ID number.

Some commands related to AIX:

To display version, release and maintenance level of AIX running on your system type:

$ oslevel -r
(or)
$ lslpp -h bos.rte

To see information about what service pack is installed on your system type:

$ oslevel -s

To create a file system:
The below command will create, within volume group testvg, a jfs filesystem of 10MB with mount point /fs1:

$ crfs -v jfs -g testvg -a size=10M -m /fs1

The below command will create, within volume group testvg, a jfs2 filesystem of 10MB with mount point /fs2 having read only permissions:

$ crfs -v jfs2 -g testvg -a size=10M -p ro -m /fs2

Note: In AIX 5.3 this jfs2 file system can be shrunk as well.

To change the size of a file system: Ex: To increase /usr file system by 1000000 512-byte blocks type:

$ chfs -a size=+1000000 /usr

To mount a CD:

$ mount -V cdrfs -o ro /dev/cd0 /cdrom

To mount a file system:

$ mount /dev/fslv02 /test

To mount all default file systems (all standard file systems in the /etc/filesystems file marked by the mount=true attribute):

$ mount {-a|all}

To unmount a file system: Ex: To unmount /test file system

$ unmount /test

To display mounted file systems:

$ mount

To remove a file system: Ex: To remove /test file system

$ rmfs /test

To defrag a file system:

$ defragfs /home

To get installed filesets information:

$ lslpp -l

To determine if all required maintenance level filesets are installed:

$ instfix -i | grep ML

To determine if a particular fix is installed:

$ instfix -ik IY24043

To determine if filesets have required prerequisites and are completely installed:

$ lppchk -v

To determine the amount of paging space allocated and in use:

$ lsps -a

Volume groups and logical volumes:

To create a volume group:

$ mkvg -y name_of_volume_group -s partition_size list_of_hard_disks

To change characteristics of a volume group

$ chvg

To create a logical volume:

$ mklv -y name_of_logical_volume name_of_volume_group number_of_partition

To increase the size of a logical volume:

$ extendlv lv05 3

To display all logical volumes that are part of a volume group: Ex: rootvg

$ lsvg -l rootvg

To list information about a logical volume:

$ lslv lv1

To remove a logical volume:

$ rmlv lv7

To show volume groups in the system, type:

$ lsvg

To show all the characteristics of rootvg, type:

$ lsvg rootvg

To show disks used by rootvg, type:

$ lsvg -p rootvg

Network Commands:

To get the IP address of a system:

$ ifconfig -a

To identify the network interfaces of a system:

$ lsdev -Cc if
(or)
$ ifconfig -a

To get information about a specific network interface ex: tr0:

$ ifconfig tr0

To activate network interface in a system: Ex: tr0

$ ifconfig tr0 up

To deactivate network interface in a system: Ex: tr0

$ ifconfig tr0 down

Process Commands:

To get information on what OS processes are running type:

$ ps -ef

To get information about particular OS level processes type for example about all oracle processes:

$ ps -ef | grep ora

File System commands:

To get information about your current working directory:

$ pwd

To create a file in your current directory for example test type:

$ touch test

To change permissions on a file chmod is the command to be used:

$ chmod 777 test

The above command gives read, write and execute permissions to all users in the system.

To get more on chmod permission codes based on need check:
http://mistupid.com/internet/chmod.htm

To rename a file without creating a duplicate copy of file test:

$ mv test test.sh
The above command renames "test" file to "test.sh"

To copy a file to a new name, keeping the original file test:

$ cp test test.sh
The above command copies "test" to a new file "test.sh", leaving the original "test" in place

To compress a file:

$ compress test

To copy and compress a file at the same time:

$ gzip < /path/to/file1 > /path/to/file1.gz

To copy and uncompress a file at the same time:

$ gunzip < /path/to/file1.gz > /path/to/file1

To copy and compress a folder at the same time (dir_name here is a placeholder for your folder):

$ tar cf - dir_name | gzip -c > dir_name.tar.gz

To uncompress a folder: cd to the directory you want the files extracted in and run:

$ gunzip -c dir_name.tar.gz | tar xf -

How to generate public/private key pair in OpenSSH?

Note: The links provided here are for informational purposes only and to provide a centralized location for some of the basic things that a DBA (or anyone) needs to know.

Monday, March 29, 2010

Database Conversion from Solaris to AIX: On Target conversion

Important Note: Before proceeding with database conversion from one platform to another, it is recommended to make a copy of the original database on a file system different from the source database and perform the remaining conversion steps on that copy. This avoids downtime, performance impacts, and corruption issues on the source.

1. Some Pre-Requisites or assumptions of source and target directories etc:
ORCL Directory Structure for datafiles Solaris_Server1 or SOURCE server:
/u01/ORCL/oradata/
/u02/ORCL/oradata/
/u03/ORCL/oradata/
/u04/ORCL/oradata/
/u05/ORCL/oradata/
/u06/ORCL/oradata/

Directory for environment files: /dbms/oracle/local/ORCL/etc

ORCL_COPY Directory Structures for datafiles Solaris_Server1 or SOURCE server:
/aix_conv/ORCL_COPY/u01/
/aix_conv/ORCL_COPY/u02/
/aix_conv/ORCL_COPY/u03/
/aix_conv/ORCL_COPY/u04/
/aix_conv/ORCL_COPY/u05/
/aix_conv/ORCL_COPY/u06/

Directory for environment files: /dbms/oracle/local/ORCL_COPY/etc

ORCL_COPY Directory Structures for datafiles AIX_Server1 or TARGET server:
/aix_conv/ORCL_COPY/u01/
/aix_conv/ORCL_COPY/u02/
/aix_conv/ORCL_COPY/u03/
/aix_conv/ORCL_COPY/u04/
/aix_conv/ORCL_COPY/u05/
/aix_conv/ORCL_COPY/u06/

Directory for environment files: /dbms/oracle/local/ORCL_COPY/etc

ORCL_COPY Directory Structures for datafiles AIX_Server1 or TARGET server:
/u01/ORCL_NEW/oradata/
/u02/ORCL_NEW/oradata/
/u03/ORCL_NEW/oradata/
/u04/ORCL_NEW/oradata/
/u05/ORCL_NEW/oradata/
/u06/ORCL_NEW/oradata/

Directory for environment files: /dbms/oracle/local/ORCL_NEW/etc

2. For ORCL_NEW we will be using raw data files for redo logs.

Pre-Requisite Checks and prep work to transfer the data files to TARGET Server:

1. Some Pre-Conditions to be made on source(copy) or newly created database ORCL_COPY from ORCL:

I. Check list of platforms that we can convert source database into as below.

SQL> SELECT * FROM v$db_transportable_platform;

II. Shutdown and start newly created ORCL_COPY database in "READ ONLY" mode.

SQL> shutdown immediate;

Comment out below parameters:
audit_trail ='DB'
audit_file_dest='/aix_conv/ORCL_COPY/oradata'
db_cache_size =4000M
pga_aggregate_target =9000M
shared_pool_size =5000M

Add below entries from the init.ora file:
db_cache_size =1000M
pga_aggregate_target =2000M
shared_pool_size =1000M

Note: The above change of values to three parameters are recommended since we
don't need high memory values for a copy of the source database and also
recommended in order to minimize impact on the actual database.

SQL> startup mount;

SQL> alter database open read only;

SQL> select name,open_mode from v$database;

NAME OPEN_MODE
--------- ----------
ORCL_COPY READ ONLY

III. Use the following PL/SQL to ensure that no internal conditions exist that would prevent the transport of the database.

SQL> set serveroutput on
declare
db_ready boolean;
begin
db_ready:= dbms_tdb.check_db ('AIX-Based Systems (64-bit)', dbms_tdb.skip_none);
End;
/

Success output should be:

PL/SQL procedure successfully completed.

In case of an error output:

Note 1: If output error is similar to:

declare
*
ERROR at line 1:
ORA-25153: Temporary Tablespace is Empty
ORA-06512: at "SYS.DBMS_TDB", line 346
ORA-06512: at "SYS.DBMS_TDB", line 397
ORA-06512: at line 4

Resolution:
a. Shutdown the database and startup is READ/WRITE mode
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

b. Add a datafile to TEMP Tablespace and proceed to next step
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE 'file_location_as_of_your_other_datafiles' SIZE 500M;

c. Repeat steps 1 & 2 again and see if you get the "Success output"; if you do, then proceed with the next steps

Note 2: If the output error is similar to

Some files in the database have not been converted to the 10.0 compatibility format. Open the database in READ/WRITE mode and alter tablespaces HRAPP, HRLARGE, HRWORK, ............... to READ/WRITE. Then open the database in READ-ONLY mode and retry.

PL/SQL procedure successfully completed.

Resolution:
a. Shutdown the database and startup in READ/WRITE mode

SQL> SHUTDOWN IMMEDIATE

SQL> STARTUP

b. Take one of the above mentioned tablespaces offline and bring them back online again.

SQL> alter tablespace HRLARGE offline;

SQL> alter tablespace HRLARGE online;

Note: Repeat Step b similarly for all the tablespaces as mentioned in Note2 output before proceeding.

c. Repeat steps 1 & 2 again and see if you get the "Success output"; if you do, then proceed with the next steps

IV. Execute below PL/SQL to identify any external objects.

SQL> set serveroutput on
declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/

Success output:

PL/SQL procedure successfully completed.

In case of an error output:

Note: If output ERROR is something similar to:

The following directories exist in the database:

SYS.DATA_PUMP_DIR

Resolution:
a. Shutdown and startup the database
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

b. Drop the above mentioned directory

SQL> DROP DIRECTORY DATA_PUMP_DIR;

c. Repeat steps 1 & 3 again and see if you get the "Success output"; if you do, then proceed with the next steps

V. If none of the error resolutions in Step 2 (Note 2) were needed, do the following steps for just any one tablespace before proceeding further. This updates the SYSTEM file header and avoids an "ORA-" error on the target server regarding it.

EX: SQL> alter tablespace HRLARGE offline;

SQL> alter tablespace HRLARGE online;

2. Backup control file to trace using the below command and shutdown database

SQL> alter database backup controlfile to trace;

The trace file is generated in the udump directory, with a name of the format xxxxxx.trc.

3. Copy the control file trace to the temporary location where you will stage the database for conversion, AIX_Server1:/aix_conv. Remember this is the actual location where you will be placing the database and bringing it up for future production usage.

4. Copy the init.ora file of ORCL_COPY to AIX_Server1:/aix_conv/ORCL_COPY

$ scp initORCL.ora AIX_Server:/aix_conv/ORCL_COPY

5. Get a count of data files on the source server Solaris_Server1 for database ORCL_COPY

SQL> spool ORCL_data_file_count.log

SQL> select count(*) from dba_data_files;

SQL> spool off

6. Shutdown the database ORCL_COPY on Solaris_Server.

SQL> select instance_name from v$instance;

INSTANCE_NAME
-------------
ORCL_COPY

SQL> shutdown immediate;

7. Change back the init parameters and copy the init.ora file of ORCL_COPY to AIX_Server1:/backup

Delete below entries from the init.ora file:
db_cache_size =1000M
pga_aggregate_target =2000M
shared_pool_size =1000M

Uncomment below parameters:
db_cache_size =4000M
pga_aggregate_target =9000M
shared_pool_size =5000M

$ scp initORCL_COPY1.ora AIX_Server1:/backup/ORCL_COPY

8. Comment out any ORCL_COPY entries in the oratab file to avoid auto startup of the database in case the server is rebooted.

$cd /var/opt/oracle

And comment out below entry:

ORCL_COPY:/optware/oracle/10.2.0.4:Y

9. Transfer the files from Solaris_Server1 to AIX_Server1. Within the file systems, make directories of about 50 GB each so the datafiles can be copied in multiple streams and to avoid file-too-large errors on the unix side while copying:

An Example of break-up of files and how to copy:
/u01=171GB db=57.3 GB db1=57.9 GB db2=56.3 GB
$ cd /aix_conv/ORCL_COPY/u01
$ mkdir db db1 db2

Directory structure under /aix_conv/ORCL_COPY/u01
Ex: /aix_conv/ORCL_COPY/u01/db
/db1
/db2

$ mv *hrlarge.0*dbf db; mv *hrlarge.12.dbf db; mv *hrlarge.13.dbf db
$ mv *hrlarge*.dbf db1; mv *usrtblspc*dbf db1;mv srtr1.pttbl.01.dbf db1
$ mv *.dbf db2

cd /aix_conv/ORCL_COPY/u01/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &

cd /aix_conv/ORCL_COPY/u01/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &

cd /aix_conv/ORCL_COPY/u01/db2
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &
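Because each stream is launched in the background with "&", it helps to keep track of them before starting the next filesystem. A small sketch using standard shell built-ins:

$ jobs      # list the copy streams still running in this shell
$ wait      # optionally block until all background streams have finished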


/u03=169.8 GB db=57.3 GB db1= 56.6 GB db2= 55.8 GB
$ cd /aix_conv/ORCL_COPY/u03
$ mkdir db db1 db2

$ mv *hrlarge.0*dbf db; mv *hrlarge.10.dbf db; mv *hrlarge.11.dbf db; mv *hrlarge.14.dbf db
$ mv *hrlarge*.dbf db1; mv *ewlarge.0* db1;
$ mv *.dbf db2; mv db1/srt1.hrlarge.27.dbf db2;

cd /aix_conv/ORCL_COPY/u03/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &

cd /aix_conv/ORCL_COPY/u03/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &

cd /aix_conv/ORCL_COPY/u03/db2
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &


/u02=4.6 GB
cd /aix_conv/ORCL_COPY/u02
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &

/u06=4.2 GB
cd /aix_conv/ORCL_COPY/u06
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u06 ;zcat | tar xvf -" &

Stop here; once /u02 and /u06 finish, start the /u12 and /u14 transfers below. This avoids any performance impact on the source database from these file-copy sessions, since both the source and its copy reside on the same server.

/u12=106.9 GB db=52.8 GB db1=54.1 GB
$ cd /aix_conv/ORCL_COPY/u02
$ mkdir db db1

$ mv *hrindex.0*dbf db; mv *hrindex.11.dbf db; mv *hrindex.13.dbf db; mv *psindex.15.dbf db; mv *hrindex.17.dbf db
$ mv *.dbf db1

cd /aix_conv/ORCL_COPY/u02/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &

cd /aix_conv/ORCL_COPY/u02/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &

/u14=92.69 GB db=48.12 GB db1=44.56 GB
$ cd /aix_conv/ORCL_COPY/u04
$ mkdir db db1
$ mv *.0* db; mv *.1* db
$ mv *.dbf db1


cd /aix_conv/ORCL_COPY/u04/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u04 ;zcat | tar xvf -" &

cd /aix_conv/ORCL_COPY/u04/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u04 ;zcat | tar xvf -" &



10. Also make sure that you have copied the generated control file trace and the init.ora files to /backup on AIX_Server1
Locations of the files on Solaris_Server1:
a. trace file location: /backup/ORCL_COPY/oradump
b. init.ora file location: /optware/oracle/ORCL_COPY

Check that all datafiles have been copied correctly

Note: After the copies above complete, confirm the transfer of the files.

Ways to check that the datafiles have been copied correctly:

Compare the datafile counts across the file systems of ORCL and ORCL_COPY by running ORCL_datafile_count.ksh and ORCL_COPY_datafile_count.ksh.

Ex- cat ORCL_COPY_datafile_count.ksh on SOLARIS

ls -ltr /aix_conv/ORCL_COPY/u01/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u02/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u03/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u04/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u05/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u06/*.dbf | wc -l

cat ORCL_COPY_datafile_count.ksh on AIX

ls -ltr /aix_conv/ORCL_COPY/u01/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u02/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u03/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u04/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u05/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u06/*.dbf | wc -l


Run ps -ef | grep ORCL_COPY on Solaris_Server1; this shows the copy processes, since we run them in the background.
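As an extra (optional) integrity check beyond file counts, the datafiles can be checksummed on both servers and the lists compared; cksum's CRC is defined by POSIX, so the values are comparable across Solaris and AIX. A sketch for one filesystem (note the extra glob level on the source side, where the files were moved into the db/db1/db2 subdirectories):

On Solaris_Server1:
$ cksum /aix_conv/ORCL_COPY/u01/*/*.dbf | awk '{print $1, $2}' | sort > /tmp/u01.cksum.solaris

On AIX_Server1:
$ cksum /aix_conv/ORCL_COPY/u01/*.dbf | awk '{print $1, $2}' | sort > /tmp/u01.cksum.aix

Copy one list across and compare:
$ diff /tmp/u01.cksum.solaris /tmp/u01.cksum.aix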

Conversion on Target:
Bring up the database ORCL_COPY and convert its datafiles on the target server (AIX_Server1).

Note: Create any missing directories

1. Create an entry for ORCL_COPY in /etc/oratab for the 10.2.0.3_Q108 ORACLE_HOME

ORCL_COPY:/optware/oracle/10.2.0.3_Q108:Y


2. Copy the init.ora file from /backup to /dbms/oracle/local/ORCL_COPY/etc

$ cp initORCL_COPY.ora /dbms/oracle/local/ORCL_COPY/etc

3. Go to /dbms/oracle/local/ORCL_COPY/etc and edit the following in initORCL_COPY.ora to reflect the correct file systems.

AUDIT_FILE_DEST ='/aix_conv/ORCL_COPY/oradump/adump'
background_dump_dest='/aix_conv/ORCL_COPY/oradump/bdump'
core_dump_dest='/aix_conv/ORCL_COPY/oradump/cdump'
user_dump_dest='/aix_conv/ORCL_COPY/oradump/udump'
control_files='/aix_conv/ORCL_COPY/u01/ORCL_COPY.control.01.ctl',
'/aix_conv/ORCL_COPY/u02/ORCL_COPY.control.02.ctl',
'/aix_conv/ORCL_COPY/u03/ORCL_COPY.control.03.ctl'
db_name='ORCL_COPY'
log_archive_dest_1='LOCATION=/aix_conv/ORCL_COPY/archive'
log_archive_format='ORCL_COPY_%s_%t_%r.arc'
service_names='ORCL_COPY.world','ORCL_COPY'

4. Run the env file in /dbms/oracle/local/ORCL_COPY/etc to set up the environment for ORCL_COPY.

oracle@AIX_Server1[ORCL_COPY] /dbms/oracle/local/ORCL_COPY/etc > . ORCL_COPY.env


5. Make sure the environment is set correctly for ORCL_COPY. Use the commands below

$echo $ORACLE_SID

$echo $ORACLE_HOME

$echo $PATH

6. Create a link in $ORACLE_HOME/dbs for init.ora

$ cd $ORACLE_HOME/dbs
$ ln -s /dbms/oracle/local/ORCL_COPY/etc/initORCL_COPY.ora initORCL_COPY.ora

7. Make a copy of the trace file under /backup to /optware/oracle/work/UPGRADE_PROD and rename it from xxxxxx.trc to create_controlfile_ORCL_COPY.sql
$ cp ORCL_COPY_ora_22678.trc /optware/oracle/work/UPGRADE_PROD
$ cd /optware/oracle/work/UPGRADE_PROD
$ mv xxxxxx.trc create_controlfile_ORCL_COPY.sql

8. Modify the create_controlfile_ORCL_COPY.sql under /optware/oracle/work/UPGRADE_PROD to reflect the correct datafile paths and the correct SID name, as below:

a. Change "REUSE" to "SET" in the CREATE CONTROLFILE statement (SET is used when the database name is being changed), and
b. Update the datafile paths in the script to the staged locations under /aix_conv/ORCL_COPY/uXX (a sketch of the edited header follows).
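For reference, the top of the edited script might look like the sketch below; the group/file names, sizes, limits, and character set are illustrative only, not the actual values from your trace:

CREATE CONTROLFILE SET DATABASE "ORCL_COPY" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 1024
LOGFILE
  GROUP 1 '/aix_conv/ORCL_COPY/u01/ORCL_COPY.redo.01.log' SIZE 500M,
  GROUP 2 '/aix_conv/ORCL_COPY/u02/ORCL_COPY.redo.02.log' SIZE 500M
DATAFILE
  '/aix_conv/ORCL_COPY/u01/ORCL_COPY.system.01.dbf',
  '/aix_conv/ORCL_COPY/u02/ORCL_COPY.hrlarge.01.dbf'
CHARACTER SET WE8ISO8859P1
;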


9. Check that all the datafiles are in the correct location where the database is to be staged. cd to /optware/oracle/work/UPGRADE, log on to SQL*Plus, and start the database in NOMOUNT mode.
$cd /optware/oracle/work/UPGRADE
$ sqlplus
SQL> startup nomount

10. Run the control file creation script

SQL> @create_controlfile_ORCL_COPY.sql

Note: Check for any errors in the alert log while creating the control files and resolve them before proceeding with next step.

11. Open the database with the RESETLOGS option

SQL> alter database open resetlogs;

12. Add a temp file to the temporary tablespace.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/aix_conv/ORCL_COPY/ORCL_COPY.temp.10.dbf' SIZE 2000M REUSE AUTOEXTEND OFF;

13. Check the list of platforms the source database can be converted to, as below

SQL> SELECT * FROM v$db_transportable_platform;
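To see just the AIX target, the view can be filtered; the ENDIAN_FORMAT column shows whether an endian conversion is involved:

SQL> SELECT platform_id, platform_name, endian_format
FROM v$db_transportable_platform
WHERE platform_name LIKE 'AIX%';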

14. Open the database ORCL_COPY in READ ONLY mode.

SQL> shutdown immediate

SQL> startup mount

SQL> alter database open read only;


15. Make sure that the database is in READ ONLY mode

SQL> select NAME,OPEN_MODE from v$database;

NAME OPEN_MODE
--------- ----------
ORCL_COPY READ ONLY

SQL> exit


16. Create a directory named convertdb under /optware/oracle/10.2.0.3_Q108/dbs

$cd $ORACLE_HOME/dbs

$mkdir convertdb

17. Open another session and monitor the alert log for any errors during conversion.

18. At this point the RMAN CONVERT DATABASE command can be used:
$ cd /optware/oracle/10.2.0.3_Q108/
$ rman

--Connect to the target
RMAN> connect target;

-- The command below converts the datafiles under /aix_conv/ORCL_COPY/uXX to AIX
-- format and places them under /uXX/oradata/ORCL_NEW. It also creates a transport
-- script in /optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript.

RMAN> CONVERT DATABASE NEW DATABASE 'ORCL_NEW'
transport script '/optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript'
to platform 'AIX-Based Systems (64-bit)'
db_file_name_convert
'/aix_conv/ORCL_COPY/u01','/u01/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u02','/u02/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u03','/u03/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u04','/u04/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u05','/u05/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u06','/u06/oradata/ORCL_NEW';
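Note: the conversion runs on a single RMAN channel by default. If the roughly 6-hour window is a concern, the CONVERT command accepts a PARALLELISM clause to convert several datafiles concurrently; a sketch (assuming your release supports it for CONVERT DATABASE), adding one line before the db_file_name_convert clause:

RMAN> CONVERT DATABASE NEW DATABASE 'ORCL_NEW'
transport script '/optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript'
to platform 'AIX-Based Systems (64-bit)'
parallelism 4
db_file_name_convert
'/aix_conv/ORCL_COPY/u01','/u01/oradata/ORCL_NEW' (remaining pairs as in the command above);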


Note: After successful conversion of the data files, success messages are displayed and the converted data files are created under /uXX/oradata/ORCL_NEW. The conversion should take approximately 6 hours.

A sample of the success messages printed at the end of the conversion is shown below:

************************************************************
input datafile fno=00101 name=/aix_conv/ORCL_COPY/u02/ORCL_COPY.hrtrpts.01.dbf
converted datafile=/u02/oradata/ORCL_NEW/ORCL_NEW.hrtrpts.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00102 name=/aix_conv/ORCL_COPY/u01/ORCL_COPY.hrwlarge.01.dbf
converted datafile=/u01/oradata/ORCL_NEW/ORCL_NEW.hrwlarge.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00103 name=/aix_conv/ORCL_COPY/u03/oradata/ORCL/ORCL.pcapp.01.dbf
converted datafile=/u03/oradata/ORCL_NEW/ORCL_NEW.pcapp.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00104 name=/aix_conv/ORCL_COPY/u01/SRTR1.bplarge.01.dbf
converted datafile=/u01/oradata/ORCL_NEW/ORCL_NEW.hplarge.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Run SQL script /optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript on the target platform to create database
Edit init.ora file /optware/oracle/10.2.0.3_Q108/dbs/init_00k7onuv_1_0.ora. This PFILE will be used to create the database on the target platform
To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished backup at 19-FEB-09



19. After successful completion of the RMAN conversion, exit out.

RMAN> exit

Bring up database ORCL_NEW:

1. A new control file creation script “transportscript” is created under /optware/oracle/10.2.0.3_Q108/dbs/convertdb



2. Rename the datafiles from ORCL_COPY to ORCL_NEW by executing the script rename_files_from_ORCL_COPY_to_ORCL_NEW.ksh located in /optware/oracle/work/UPGRADE_PROD

3. Point the Oracle Home of ORCL_NEW to 10.2.0.3_Q108 in /etc/oratab like below.

ORCL_NEW:/optware/oracle/10.2.0.3_Q108:Y

4. Run the env file located in /dbms/oracle/local/ORCL_NEW/etc.

$ /dbms/oracle/local/ORCL_NEW/etc > . ORCL_NEW.env


5. Make sure the environment is set correctly for ORCL_NEW. Use the commands below

$echo $ORACLE_SID

$echo $ORACLE_HOME

$echo $PATH


6. Go to $ORACLE_HOME/dbs and create a soft link for initORCL_NEW.ora pointing to /dbms/oracle/local/ORCL_NEW/etc. Use the command below

$ ln -s /dbms/oracle/local/ORCL_NEW/etc/initORCL_NEW.ora initORCL_NEW.ora

7. Make changes to initORCL_NEW.ora to reflect the following:

AUDIT_FILE_DEST='/u01/oradump/ORCL_NEW/adump'
background_dump_dest='/u01/oradump/ORCL_NEW/bdump'
core_dump_dest='/u01/oradump/ORCL_NEW/cdump'
user_dump_dest='/u01/oradump/ORCL_NEW/udump'
control_files='/u01/oradata/ORCL_NEW/ORCL_NEW.control.01.ctl',
'/u02/oradata/ORCL_NEW/ORCL_NEW.control.02.ctl',
'/u03/oradata/ORCL_NEW/ORCL_NEW.control.03.ctl'
db_name='ORCL_NEW'
log_archive_dest_1='LOCATION=/u07/oradata/ORCL_NEW'
undo_tablespace='HRRBS'
log_archive_format='ORCL_NEW_%s_%t_%r.arc'
service_names='ORCL_NEW.world','ORCL_NEW'


8. Make a copy of create_controlfile_ORCL_COPY.sql in /optware/oracle/work/UPGRADE_PROD as create_controlfile_ORCL_NEW.sql (RECOMMENDED)

$ /optware/oracle/work/UPGRADE_PROD> cp create_controlfile_ORCL_COPY.sql create_controlfile_ORCL_NEW.sql


OR use the transport script to create a control file…

*************************
Modify the "transportscript" script to reflect the converted data file locations, the initORCL_NEW.ora location, and the location of the utlxxx scripts per the ORACLE_HOME. Also make any other necessary changes.

*********************

9. Modify the create_controlfile_ORCL_NEW.sql to reflect the following:

ORCL_COPY to ORCL_NEW


Redo log locations as follows (raw devices are used here):

GROUP 1 (
'/dev/roraORCL_NEWr1_01',
'/dev/roraORCL_NEWr2_01'
) SIZE 495M,
GROUP 2 (
'/dev/roraORCL_NEWr1_02',
'/dev/roraORCL_NEWr2_02'
) SIZE 495M,
GROUP 3 (
'/dev/roraORCL_NEWr1_03',
'/dev/roraORCL_NEWr2_03'
) SIZE 495M,
GROUP 4 (
'/dev/roraORCL_NEWr1_04',
'/dev/roraORCL_NEWr2_04'
) SIZE 495M,
GROUP 5 (
'/dev/roraORCL_NEWr1_05',
'/dev/roraORCL_NEWr2_05'
) SIZE 495M

Modify the datafile locations as below

'/aix_conv/ORCL_COPY/u01' to '/u01/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u02' to '/u02/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u03' to '/u03/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u04' to '/u04/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u05' to '/u05/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u06' to '/u06/oradata/ORCL_NEW';



10. Open another session and monitor the alert log for ORCL_NEW in /u01/oradump/ORCL_NEW/bdump

11. cd to the /optware/oracle/work/UPGRADE_PROD directory and run the create_controlfile_ORCL_NEW.sql script from your work directory.

Ex: $ sqlplus

SQL> startup nomount

SQL>@/optware/oracle/work/UPGRADE_PROD/create_controlfile_ORCL_NEW.sql

SQL> alter database open resetlogs;

12. Add a temp file to the TEMP tablespace


SQL>ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/ORCL_NEW/ORCL_NEW.temp.10.dbf' SIZE 2000M REUSE AUTOEXTEND OFF;

13. Shut down and start up the database to check for any errors in the alert log. Closely monitor the alert log during this.

SQL> shutdown immediate

SQL> startup

14. Run utlirp.sql after shutting down and starting up in UPGRADE mode. This invalidates the PL/SQL modules so they can be recompiled in the new platform's format.

SQL> shutdown immediate

SQL> startup upgrade;

SQL> @?/rdbms/admin/utlirp.sql

15. Run utlrp.sql after restarting the database

SQL> shutdown immediate

SQL> startup

SQL> @?/rdbms/admin/utlrp.sql
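To confirm the recompilation completed cleanly, count the remaining invalid objects (ideally 0, or only objects that were already invalid before the migration):

SQL> select count(*) from dba_objects where status = 'INVALID';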


16. Drop the OLD temporary tablespaces TEMP and PSTEMP

SQL> drop tablespace TEMP including contents and datafiles cascade constraints;

SQL> drop tablespace pstemp including contents and datafiles;


17. Create temporary tablespace NEW_TEMP

SQL> CREATE TEMPORARY TABLESPACE "NEW_TEMP" TEMPFILE '/dev/roraORCL_NEWt_001' SIZE 3990M REUSE ;
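Note: Oracle will not drop a tablespace that is currently the database default temporary tablespace. If the drop in Step 16 fails for that reason, create NEW_TEMP first, switch the default to it, and then drop the old tablespace:

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE NEW_TEMP;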

18. Go to /optware/oracle/work/UPGRADE_PROD and run add_temp_files_ORCL_NEW.sql to add the rest of the temp files as needed

SQL> spool add_temp_files_ORCL_NEW.log

SQL> @add_temp_files_ORCL_NEW.sql

SQL> spool off

19. Create a block change tracking file to enable fast incremental backups of the database

SQL> alter database enable block change tracking using file '/u13/oradata/ORCL_NEW/ORCL_NEW.bct.01.dbf' reuse;
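You can verify that change tracking is now enabled with:

SQL> select status, filename from v$block_change_tracking;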

20. Update the PSDBOWNER table if your database is a PeopleSoft database (in my case it is).

SQL> select * from psdbowner;

DBNAME OWNERID
-------- --------
ORCL SYSADM

SQL> update psdbowner set dbname='ORCL_NEW';

SQL> commit;

SQL> select * from psdbowner;

DBNAME OWNERID
-------- --------
ORCL_NEW SYSADM

21. Configure and bring up the listener.

What are Redo Logs, Size estimation and How to Resize and/or Add Redo logs

What are Redo Logs?
Each Oracle database has at least one redo log. The redo log records all changes made to datafiles and allows those changes to be replayed. Changes are written to the redo logs before they are written to the datafiles. If a datafile must be restored from a backup taken before a change, the redo information stored in the redo logs can be used to reapply that change and recover the datafile.

How to estimate the correct size of redo logs?
The best way to estimate the size of the redo logs is to check the alert log and see how frequently log switches occur; the optimal switch interval is between 20 and 30 minutes for better database performance. Oracle recommends a minimum redo log size of 100MB for any production database running release 8.1.7 or higher.

Check the alert log for messages showing how frequently the log sequence# changes, for checkpointing messages, and for any incomplete log switch messages. When you examine the alert log for log switches, do it during the heavier load periods on the database. Below is a sample snapshot of redo log switches in the alert log:

Fri Jan 27 09:57:40 2004
ARC0: Evaluating archive log 1 thread 1 sequence 174
ARC0: Beginning to archive log 1 thread 1 sequence 174
Creating archive destination LOG_ARCHIVE_DEST_1:
'/u02/app/oracle/product/9.2.0/dbs/arch/arc_1_174.src'
ARC0: Completed archiving log 1 thread 1 sequence 174
Fri Jan 27 09:57:52 2004
Thread 1 advanced to log sequence 176
Current log# 3 seq# 176 mem# 0: /u02/oradata/redo/redo03.log
Fri Jan 27 09:57:52 2004
ARC1: Evaluating archive log 2 thread 1 sequence 175
ARC1: Beginning to archive log 2 thread 1 sequence 175
Creating archive destination LOG_ARCHIVE_DEST_1:
'/u02/app/oracle/product/9.2.0/dbs/arch/arc_1_175.src'
ARC1: Completed archiving log 2 thread 1 sequence 175
Fri Jan 27 09:58:07 2004

The above snapshot from the alert log shows log switches only seconds apart, which means the current redo logs are not big enough.

Since the Oracle-recommended optimal log switch interval is 20-30 minutes, if log switches are occurring more frequently, check the v$log view for the current redo log sizes. Based on the average switch frequency, create redo log groups of a bigger size, switch the logs a few times with "ALTER SYSTEM SWITCH LOGFILE;" so that the CURRENT redo log is one of the bigger files, and then drop the smaller redo log groups with the "ALTER DATABASE DROP LOGFILE ..." command.
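To quantify the switch frequency over time rather than eyeballing the alert log, a sketch that counts log switches per hour from v$log_history:

SQL> select to_char(first_time, 'YYYY-MM-DD HH24') hr, count(*) switches
     from v$log_history
     group by to_char(first_time, 'YYYY-MM-DD HH24')
     order by 1;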

The default redo log file size on newer versions of Oracle is 100MB, and Oracle recommends spreading the redo logs over multiple physical devices for optimal redo log performance.

Keep in mind that overly frequent checkpoints and log file switch completion problems will affect database performance. If there are waits on 'log file switch completion' in the Statspack report, you will also need to add one or more redo log groups to resolve the wait event.


How To Resize and/or Add Redo Logs?

Note: The steps below apply to versions 9.2.0.1 and up.

1. Review information on existing redo logs.

SQL> SELECT a.group#, b.member, a.status, a.bytes
FROM v$log a, v$logfile b
WHERE a.group#=b.group#;


2. Add new groups

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/log01A.dbf', '/log01B.dbf') SIZE 512M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/log02A.dbf', '/log02B.dbf') SIZE 512M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 6 ('/log03A.dbf', '/log03B.dbf') SIZE 512M;


3. Check the status on all redo logs again.

SQL> SELECT a.group#, b.member, a.status, a.bytes FROM v$log a, v$logfile b
WHERE a.group#=b.group#;


4. Drop the online redo log groups that are no longer needed. You must have the ALTER DATABASE system privilege.

Note: Before dropping an online redo log group, consider the following restrictions and precautions:

a. An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)

b. You can drop an online redo log group only if it is INACTIVE. If you need to drop the current group, first force a log switch to occur.
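For example, to force a switch so the current group can become INACTIVE (once its checkpoint completes):

SQL> ALTER SYSTEM SWITCH LOGFILE;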

c. Make sure an online redo log group is archived (if archiving is enabled) before dropping it. This can be determined by querying V$LOG:

SQL> SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

GROUP# ARC STATUS
--------- --- ----------------
1 YES ACTIVE
2 NO CURRENT
3 YES INACTIVE
4 YES UNUSED
5 YES UNUSED
6 YES UNUSED

d. Once you have checked that the group is INACTIVE and archived, drop it:

SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
e. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files. For more information
refer to Note 395062.1


How to determine the optimal size for redo logs (revisited)?

You can use the V$INSTANCE_RECOVERY view column OPTIMAL_LOGFILE_SIZE to determine the size of your online redo logs. This field shows the redo log file size in megabytes that is considered optimal based on the current setting of FAST_START_MTTR_TARGET. If this field consistently shows a value greater than the size of your smallest online log, then you should configure all your online logs to be at least this size.
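To read the current recommendation directly:

SQL> SELECT optimal_logfile_size FROM v$instance_recovery;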

Note: The redo log file size affects the MTTR. In some cases, you may be able to refine your choice of the optimal FAST_START_MTTR_TARGET value by re-running the MTTR Advisor with your suggested optimal log file size.


Note: Some of the information presented in this post is drawn from Oracle Metalink notes 1038851.6 and 602066.1, and some is based on experience. It is presented here for readers who do not have access to Oracle Metalink.