Oracle Resource Manager (ORM)
Goal of ORM:
To give Oracle Database Server more control over resource management decisions, thus preventing problems resulting from inefficient operating system management.
Problems addressed by ORM:
• Excessive overhead
• Inefficient scheduling
• Inappropriate allocation of resources
• Inability to manage database-specific resources, such as parallel execution servers and active sessions
How the above problems can be addressed:
• Guarantee specific users a certain amount of processing resources irrespective of the load on the system and the number of users
Ex: HR_BATCH_GROUP – 50% CPU will be allocated for Batch processing jobs irrespective of the load on the system
• Distribute available processing resources by allocating percentages of CPU time to different users and applications. In a data warehouse, a higher percentage may be given to ROLAP (relational on-line analytical processing) applications than to batch jobs.
Ex: CPU Resources distribution for HR:
HR_BATCH_GROUP – 50% CPU
HR_WF_REPT_AUD_METADATA_GROUP – 20% CPU
HR_ADHOC_HRLINK_WF_GROUP – 15% CPU
HR_ADHOC_BUS_USER_GROUP – 5% CPU
HR_STEAM_DBLINK_USERS_GROUP – 5% CPU
OTHER_GROUPS – 5% CPU
• Limit HR_ADHOC_BUS_USER_GROUP to 10 concurrent sessions, so that members of the group performing any operation are limited to a maximum of 10 active sessions within the group. Additional sessions beyond the maximum are queued for execution, but you can specify a timeout period after which queued jobs will terminate.
Ex: The parameter setting for the session pool is ACTIVE_SESS_POOL_P1 => 10
Ex: The parameter setting for the queue timeout is QUEUEING_P1 => 60 (seconds)
• Prevent the execution of operations that the optimizer estimates will run for a longer time than a specified limit
Ex: The parameter setting for this is MAX_EST_EXEC_TIME => 2700 (45 minutes)
• (not doing in below example) Allow automatic switching of users from one group to another group based on administrator defined criteria. If a member of a particular group of users creates a session that executes for longer than a specified amount of time, that session can be automatically switched to another group of users with different resource requirements.
• (not doing in below example) Create an undo pool. This pool consists of the amount of undo space that can be consumed by a group of users.
• (not doing in below example) Limit the amount of time that a session can be idle. This can be further defined to mean only sessions that are blocking other sessions.
• (not doing in below example) Configure an instance to use a particular method of allocating resources. You can dynamically change the method, for example, from a daytime setup to a nighttime setup, without having to shut down and restart the instance.
• (not doing in below example) Allow the cancellation of long-running SQL statements and the termination of long-running sessions.
Terminology:
Resource Plan: A resource plan is a way to group a set of resource consumer groups together and specify how resources should be divided among them
Resource consumer groups: Allow the administrator to group user sessions together by resource requirements
Resource allocation methods: determine what policy to use when allocating for any particular resource. Resource allocation methods are used by resource plans and resource consumer groups
Resource plan directives: are a means of assigning consumer groups to particular plans and partitioning resources among consumer groups by specifying parameters for each resource allocation method
Subplans: allow further subdivision of resources among different users of an application
Levels: provide a mechanism to specify distribution of unused resources among available users. Up to eight levels of resource allocation can be specified
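For illustration, here is a minimal sketch of a two-level directive (the plan and group names are hypothetical, and the plan and group are assumed to have been created earlier in the same pending area): a group given CPU only at level 2 receives just the CPU that level 1 leaves unused.
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
-- Level 1 gets first claim on CPU (hypothetical plan/group names, assumed created already)
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'DEMO_PLAN', GROUP_OR_SUBPLAN => 'CRITICAL_GROUP', COMMENT => 'level 1', CPU_P1 => 80);
-- Level 2 shares whatever CPU level 1 leaves unused
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'DEMO_PLAN', GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'level 2', CPU_P2 => 100);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/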
Steps for implementing ORM:
1. Execute the HR_PLAN.sql and check for any errors. HR_PLAN.sql is an example of how an ORM Plan can be created.
2. Do the following once the plan script has run successfully without any errors:
$ sqlplus "/ as sysdba"
a. SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'HR_PLAN';
b. Set the init parameter RESOURCE_MANAGER_PLAN = HR_PLAN in the database init parameter file
Note: Both of the above steps are required: step (a) activates the plan dynamically without a bounce, and step (b) makes the change permanent so that the plan is still active after the database is eventually bounced.
HR_PLAN:
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'HR_PLAN', COMMENT => 'HR ORM Plan');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_BATCH_GROUP', COMMENT => 'HR Batch group');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_WF_REPT_AUD_METADATA_GROUP', COMMENT => 'Reporting group');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_ADHOC_HRLINK_WF_GROUP', COMMENT => 'Ad-hoc WF HRLink group');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_ADHOC_BUS_USER_GROUP', COMMENT => 'Ad-hoc Business User group');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'HR_STEAM_DBLINK_USERS_GROUP', COMMENT => 'Steam DBLink User group');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_BATCH_GROUP', COMMENT => 'Batch Process', CPU_P1 => 50);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_WF_REPT_AUD_METADATA_GROUP', COMMENT => 'Reporting Process', CPU_P1 => 20);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_ADHOC_HRLINK_WF_GROUP', COMMENT => 'Ad-hoc WF HRLink group', CPU_P1 => 15);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_ADHOC_BUS_USER_GROUP', COMMENT => 'Ad-hoc Business User group', CPU_P1 => 5, MAX_EST_EXEC_TIME => 2700, ACTIVE_SESS_POOL_P1 => 10, QUEUEING_P1 => 60);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN', GROUP_OR_SUBPLAN => 'HR_STEAM_DBLINK_USERS_GROUP', COMMENT => 'Steam User group', CPU_P1 => 5);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN => 'HR_PLAN',GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'mandatory', CPU_P1 => 5);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('SYSADM', 'HR_BATCH_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('SYSADM', 'HR_BATCH_GROUP');
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('WFLINK', 'HR_ADHOC_HRLINK_WF_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('WFLINK', 'HR_ADHOC_HRLINK_WF_GROUP');
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('BUS_AH32366', 'HR_ADHOC_BUS_USER_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('BUS_AH32366', 'HR_ADHOC_BUS_USER_GROUP');
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('HR_LINK_APAC','HR_STEAM_DBLINK_USERS_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('HR_LINK_APAC','HR_STEAM_DBLINK_USERS_GROUP');
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('CSBOUTA1','HR_WF_REPT_AUD_METADATA_GROUP', TRUE);
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('CSBOUTA1','HR_WF_REPT_AUD_METADATA_GROUP');
END;
/
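Once the script completes, the plan, its directives, and the user mappings can be sanity-checked from the data dictionary (these are standard dictionary views, queried here only as a suggested verification step):
SQL> SELECT plan, status FROM dba_rsrc_plans WHERE plan = 'HR_PLAN';
SQL> SELECT group_or_subplan, cpu_p1, active_sess_pool_p1, queueing_p1, max_est_exec_time FROM dba_rsrc_plan_directives WHERE plan = 'HR_PLAN';
SQL> SELECT username, initial_rsrc_consumer_group FROM dba_users WHERE username IN ('SYSADM','WFLINK','BUS_AH32366','HR_LINK_APAC','CSBOUTA1');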
Steps for enabling ORM:
There are two ways to enable ORM after you create your custom ORM plan in the database. Below are the steps:
1. Enable it dynamically from a SQLPLUS Session
Ex: SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='HR_PLAN';
2. Enable it permanently in init.ora if you are using pfile
Ex: In the init file add the parameter RESOURCE_MANAGER_PLAN='HR_PLAN'
Both of the above steps are recommended: the database must be bounced for an init file change to take effect, so if that cannot be done immediately, Step 1 enables Resource Manager right away, while Step 2 ensures Resource Manager is enabled automatically whenever the database is eventually bounced.
Steps for disabling ORM:
There are two ways to disable ORM after you have created your custom ORM plan in the database. Below are the steps:
1. Disable it dynamically from a SQLPLUS Session
Ex: SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN=' '; (space in between)
2. Disable it permanently in init.ora if you are using pfile
Ex: In the init file comment out or remove the parameter RESOURCE_MANAGER_PLAN='HR_PLAN'
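Either way, you can confirm which plan is currently active (a quick check; v$rsrc_plan lists the plans currently loaded by Resource Manager):
SQL> show parameter resource_manager_plan
SQL> SELECT name FROM v$rsrc_plan;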
ORM FAQs:
Q: How does the Resource Manager work in a multi-instance server?
A: The Resource Manager cannot manage resources across instances, only within an instance, so it is not useful in a multi-instance-on-a-single-server situation. If you have more than one instance on a server, you must use OS tools, such as taking a big honking Sun E10k and setting up separate domains (so that all of a sudden you don't have more than one instance per "server").
Q: Overhead of using "Oracle Resource Manager"?
A: The implementation of a resource plan is going to take some resources itself (of course). The question does not quantify what "non-insignificant" means, and everyone has a different definition for that, so it is not really possible to comment.
Note: The above question has been taken from www.asktom.oracle.com website
Q: When does "Oracle Resource Manager" (ORM) kick in?
A: Resource Manager basically does nothing until a resource is totally maxed out (you cannot put CPU in the bank and use it later). So even if you say "100% to group A", nothing is throttled unless group A actually needs to consume 100%. Resource Manager is not a "fence", but rather an attempt to assure you that if group A makes a demand for X%, it will be given to them.
Q: Will there be any degradation in response time or execution time of jobs compared to the time they regularly take?
A: Yes. When the server is busy at or above 100% CPU, jobs will take longer to finish than they regularly would, but the response time will be almost the same.
Q. What is the overhead after implementing ORM?
A: I found no specific published answer for the overhead of ORM resource regulation, so I ran a test of my own. The results suggest minimal to no impact while the server is not 100% busy, and possibly a very small overhead when the server is at 100%, which I have not quantified precisely beyond some estimates based on those tests.
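One way to get a rough feel for the overhead and queueing behavior on your own system is to watch the cumulative statistics Resource Manager keeps per consumer group while the plan is active (a sketch; the columns are from the standard v$rsrc_consumer_group view):
SQL> SELECT name, active_sessions, queue_length, consumed_cpu_time, cpu_waits, cpu_wait_time FROM v$rsrc_consumer_group ORDER BY name;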
Reference: Based on information read from Oracle Documentation and other online websites
Wednesday, March 31, 2010
AIX commands that everyone needs
Kernel Command:
How to know if you are running a 32-bit kernel or 64-bit kernel?
To display the info at the unix prompt type:
$ bootinfo -K
Hardware Commands:
How much real memory does a unix server have?
$ bootinfo -r
$ lsattr -El sys0 -a realmem
The above commands display memory info in KB.
How to list attributes of various devices on your system?
To list current values of the attributes for tape device, rmt0 type:
$ lsattr -l rmt0 -E
To list default values of the attributes for tape device, rmt0 type:
$ lsattr -l rmt0 -D
To display system level attributes type:
$ lsattr -E -l sys0
To display processors info on a unix system type:
$ lscfg | grep proc
To display hard disks information and how many of them are in use type:
$ lspv
To list information about a specific physical volume ex: hdisk1 type:
$ lspv hdisk1
To get a detailed configuration of a unix system type:
$ lscfg
The following options provide specific information:
-p: Displays platform-specific device information. The flag is applicable to AIX 4.2.1 or later.
-v: Displays the VPD (Vital Product Database) found in the customized VPD object class.
For example, to display details about the tape drive, rmt0, type:
$ lscfg -vl rmt0
You can obtain similar information using prtconf command.
To find out the chip type, system name, node name, model number, etc., use the uname command with various options to get details about a unix system. Some examples are listed below:
uname -p : Displays the chip type of the system. For example, PowerPC.
uname -r : Displays the release number of the operating system.
uname -s : Displays the system name. For example, AIX.
uname -n : Displays the name of the node.
uname -a : Displays the system name, nodename, version, machine ID.
uname -M : Displays the system model name. For example, IBM, 9114-275.
uname -v : Displays the operating system version.
uname -m : Displays the machine ID number of the hardware running the system.
uname -u : Displays the system ID number.
Some commands related to AIX:
To display version, release and maintenance level of AIX running on your system type:
$ oslevel -r
(or)
$ lslpp -h bos.rte
To see information about what service pack is installed on your system type:
$ oslevel -s
To create a file system:
The below command will create, within volume group testvg, a jfs filesystem of 10MB with mount point /fs1:
$ crfs -v jfs -g testvg -a size=10M -m /fs1
The below command will create, within volume group testvg, a jfs2 filesystem of 10MB with mount point /fs2 having read only permissions:
$ crfs -v jfs2 -g testvg -a size=10M -p ro -m /fs2
Note: In AIX 5.3 this jfs2 file system can be shrunk as well.
To change the size of a file system: Ex: To increase /usr file system by 1000000 512-byte blocks type:
$ chfs -a size=+1000000 /usr
To mount a CD:
$ mount -V cdrfs -o ro /dev/cd0 /cdrom
To mount a file system:
$ mount /dev/fslv02 /test
To mount all default file systems (all standard file systems in the /etc/filesystems file marked by the mount=true attribute):
$ mount {-a|all}
To unmount a file system: Ex: To unmount /test file system
$ unmount /test
To display mounted file systems:
$ mount
To remove a file system: Ex: To remove /test file system
$ rmfs /test
To defrag a file system:
$ defragfs /home
To get installed filesets information:
$ lslpp -l
To determine if all required maintenance level filesets are installed:
$ instfix -i | grep ML
To determine if a particular fix is installed:
$ instfix -ik IY24043
To determine if filesets have required prerequisites and are completely installed:
$ lppchk -v
To determine the amount of paging space allocated and in use:
$ lsps -a
$ lsps -a
Volume groups and logical volumes:
To create a volume group:
$ mkvg -y name_of_volume_group -s partition_size list_of_hard_disks
To change characteristics of a volume group
$ chvg
To create a logical volume:
$ mklv -y name_of_logical_volume name_of_volume_group number_of_partitions
To increase the size of a logical volume:
$ extendlv lv05 3
To display all logical volumes that are part of a volume group: Ex: rootvg
$ lsvg -l rootvg
To list information about a logical volume:
$ lslv lv1
To remove a logical volume:
$ rmlv lv7
To show volume groups in the system, type:
$ lsvg
To show all the characteristics of rootvg, type:
$ lsvg rootvg
To show disks used by rootvg, type:
$ lsvg -p rootvg
Network Commands:
To get the IP address of a system:
$ ifconfig -a
To identify the network interfaces of a system:
$ lsdev -Cc if
(or)
$ ifconfig -a
To get information about a specific network interface ex: tr0:
$ ifconfig tr0
To activate network interface in a system: Ex: tr0
$ ifconfig tr0 up
To deactivate network interface in a system: Ex: tr0
$ ifconfig tr0 down
Process Commands:
To see what OS processes are running type:
$ ps -ef
To get information about particular OS-level processes, for example all oracle processes, type:
$ ps -ef | grep ora
File System commands:
To get information about your current working directory:
$ pwd
To create a file in your current directory for example test type:
$ touch test
To change permissions on a file, chmod is the command to use:
$ chmod 777 test
The above command gives read, write and execute permissions to all users in the system.
For more on chmod permission codes, check:
http://mistupid.com/internet/chmod.htm
To rename a file without creating a duplicate copy of file test:
$ mv test test.sh
The above command renames "test" file to "test.sh"
To copy a file, making a duplicate of the original file test:
$ cp test test.sh
The above command copies the "test" file to a new file "test.sh", leaving the original "test" in place.
To compress a file:
$ compress test
To copy and compress a file at the same time:
$ gzip < /path/to/file1 > /path/to/file1.gz
To copy and uncompress a file at the same time:
$ gunzip < /path/to/file1.gz > /path/to/file1
To copy and compress a folder at the same time:
$ tar cf - dir_name | gzip -c > /path/to/dir_name.tar.gz
To uncompress a folder: cd to the directory you want the files extracted in and run:
$ gunzip -c /path/to/dir_name.tar.gz | tar xf -
How to generate public/private key pair in OpenSSH?
Note: The link provided here is for informational purposes only and to provide a centralized location for some of the basic things that a DBA (or anyone) needs to know.
Monday, March 29, 2010
Database Conversion from Solaris to AIX: On Target conversion
Important Note: Before proceeding with a database conversion from one platform to another, it is recommended to make a copy of the original database on a file system different from the source database and run the remaining conversion steps against that copy. This avoids downtime, performance impact, and corruption issues on the source.
1. Some Pre-Requisites or assumptions of source and target directories etc:
ORCL Directory Structure for datafiles Solaris_Server1 or SOURCE server:
/u01/ORCL/oradata/
/u02/ORCL/oradata/
/u03/ORCL/oradata/
/u04/ORCL/oradata/
/u05/ORCL/oradata/
/u06/ORCL/oradata/
Directory for environment files: /dbms/oracle/local/ORCL/etc
ORCL_COPY Directory Structures for datafiles Solaris_Server1 or SOURCE server:
/aix_conv/ORCL_COPY/u01/
/aix_conv/ORCL_COPY/u02/
/aix_conv/ORCL_COPY/u03/
/aix_conv/ORCL_COPY/u04/
/aix_conv/ORCL_COPY/u05/
/aix_conv/ORCL_COPY/u06/
Directory for environment files: /dbms/oracle/local/ORCL_COPY/etc
ORCL_COPY Directory Structures for datafiles AIX_Server1 or TARGET server:
/aix_conv/ORCL_COPY/u01/
/aix_conv/ORCL_COPY/u02/
/aix_conv/ORCL_COPY/u03/
/aix_conv/ORCL_COPY/u04/
/aix_conv/ORCL_COPY/u05/
/aix_conv/ORCL_COPY/u06/
Directory for environment files: /dbms/oracle/local/ORCL_COPY/etc
ORCL_NEW Directory Structure for datafiles AIX_Server1 or TARGET server:
/u01/ORCL_NEW/oradata/
/u02/ORCL_NEW/oradata/
/u03/ORCL_NEW/oradata/
/u04/ORCL_NEW/oradata/
/u05/ORCL_NEW/oradata/
/u06/ORCL_NEW/oradata/
Directory for environment files: /dbms/oracle/local/ORCL_NEW/etc
2. For ORCL_NEW we will be using raw devices for the redo logs and the temporary tablespace.
Pre-Requisite Checks and prep work to transfer the data files to TARGET Server:
1. Some pre-conditions to be met on the newly created copy database ORCL_COPY (created from ORCL):
I. Check the list of platforms that the source database can be converted to:
SQL> SELECT * FROM v$db_transportable_platform;
II. Shut down and start the newly created ORCL_COPY database in "READ ONLY" mode.
SQL> shutdown immediate;
Comment out below parameters:
audit_trail ='DB'
audit_file_dest='/aix_conv/ORCL_COPY/oradata'
db_cache_size =4000M
pga_aggregate_target =9000M
shared_pool_size =5000M
Add the below entries to the init.ora file:
db_cache_size =1000M
pga_aggregate_target =2000M
shared_pool_size =1000M
Note: The above changes to the three memory parameters are recommended since we don't need high memory values for a copy of the source database; they also minimize the impact on the actual database.
SQL> startup mount;
SQL> alter database open read only;
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- ----------
ORCL_COPY READ ONLY
III. Use the following PL/SQL to ensure that no internal conditions exist that would prevent the transport of the database.
SQL> set serveroutput on
declare
db_ready boolean;
begin
db_ready:= dbms_tdb.check_db ('AIX-Based Systems (64-bit)', dbms_tdb.skip_none);
End;
/
Success output should be:
PL/SQL procedure successfully completed.
In case of an error output:
Note 1: If output error is similar to:
declare
*
ERROR at line 1:
ORA-25153: Temporary Tablespace is Empty
ORA-06512: at "SYS.DBMS_TDB", line 346
ORA-06512: at "SYS.DBMS_TDB", line 397
ORA-06512: at line 4
Resolution:
a. Shutdown the database and startup in READ/WRITE mode
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
b. Add a datafile to TEMP Tablespace and proceed to next step
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE 'file_location_as_of_your_other_datafiles' SIZE 500M;
c. Repeat steps 1 & 2 again and see if you get the "Success output"; if you do, proceed with the next steps
Note 2: If the output error is similar to
Some files in the database have not been converted to the 10.0 compatibility format. Open the database in READ/WRITE mode and alter tablespaces HRAPP, HRLARGE, HRWORK, ............... to READ/WRITE. Then open the database in READ-ONLY mode and retry.
PL/SQL procedure successfully completed.
Resolution:
a. Shutdown the database and startup in READ/WRITE mode
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
b. Take one of the above mentioned tablespaces offline and bring it back online again.
SQL> alter tablespace HRLARGE offline;
SQL> alter tablespace HRLARGE online;
Note: Repeat Step b similarly for all the tablespaces as mentioned in Note2 output before proceeding.
c. Repeat steps 1 & 2 again and see if you get the “Success output”, if you get it proceed with next steps
IV. Execute below PL/SQL to identify any external objects.
SQL> set serveroutput on
declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/
Success output:
PL/SQL procedure successfully completed.
In case of an error output:
Note: If output ERROR is something similar to:
The following directories exist in the database:
SYS.DATA_PUMP_DIR
Resolution:
a. Shutdown and startup the database
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
b. Drop the above mentioned directory
SQL> DROP DIRECTORY DATA_PUMP_DIR;
c. Repeat Steps 1 & 3 and see if you get the "Success output"; if you do, proceed with the next steps
V. If none of the error resolutions from Note 2 above were needed, still do the following steps for any one tablespace. This updates the SYSTEM file header and avoids an "ORA-" error about the SYSTEM file header on the target server later on.
EX: SQL> alter tablespace HRLARGE offline;
SQL> alter tablespace HRLARGE online;
2. Back up the control file to trace using the below command, then shut down the database
SQL> alter database backup controlfile to trace;
The trace file is generated in the udump directory with a name of the format xxxxxx.trc.
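If you are not sure where udump is on this instance, the location can be read from the instance itself (a standard parameter query, shown only as a convenience):
SQL> SELECT value FROM v$parameter WHERE name = 'user_dump_dest';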
3. Copy the control file trace to the temporary location where you will stage the database for conversion, AIX_Server1:/aix_conv. Remember this is the actual location where you will be placing the database and bringing it up for future production usage.
4. Copy the init.ora file of ORCL_COPY to AIX_Server1:/aix_conv/ORCL_COPY
$ scp initORCL_COPY.ora AIX_Server1:/aix_conv/ORCL_COPY
5. Get a count of data files on the source server Solaris_Server1 for database ORCL_COPY
SQL> spool ORCL_data_file_count.log
SQL> select count(*) from dba_data_files;
SQL> spool off
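It can also help to spool the full file list, not just the count, so the copied files can be diffed against the source later (the spool file name here is just a suggestion):
SQL> spool ORCL_data_file_list.log
SQL> select file_name from dba_data_files order by file_name;
SQL> spool off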
6. Shutdown the database ORCL_COPY on Solaris_Server1.
SQL> select instance_name from v$instance;
INSTANCE_NAME
-------------
ORCL_COPY
SQL> shutdown immediate;
7. Change back the init parameters and copy the init.ora file of ORCL_COPY to AIX_Server1:/backup
Delete below entries from the init.ora file:
db_cache_size =1000M
pga_aggregate_target =2000M
shared_pool_size =1000M
Uncomment below parameters:
db_cache_size =4000M
pga_aggregate_target =9000M
shared_pool_size =5000M
$ scp initORCL_COPY1.ora AIX_Server1:/backup/ORCL_COPY
8. Comment out any ORCL_COPY entries in the oratab file to avoid auto startup of the database in case the server is rebooted.
$cd /var/opt/oracle
And comment out below entry:
ORCL_COPY:/optware/oracle/10.2.0.4:Y
9. Transfer the files from Solaris_Server1 to AIX_Server1. Make subdirectories of about 50 GB each within the file systems so that the datafiles can be copied in multiple streams and to avoid file-too-large errors on the unix side while copying.
An example of the break-up of files and how to copy them:
/u01=171GB db=57.3 GB db1=57.9 GB db2=56.3 GB
$ cd /aix_conv/ORCL_COPY/u01
$ mkdir db db1 db2
Directory structure under /aix_conv/ORCL_COPY/u01
Ex: /aix_conv/ORCL_COPY/u01/db
/db1
/db2
$ mv *hrlarge.0*dbf db; mv *hrlarge.12.dbf db; mv *hrlarge.13.dbf db
$ mv *hrlarge*.dbf db1; mv *usrtblspc*dbf db1;mv srtr1.pttbl.01.dbf db1
$ mv *.dbf db2
cd /aix_conv/ORCL_COPY/u01/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u01/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u01/db2
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &
/u03=169.8 GB db=57.3 GB db1= 56.6 GB db2= 55.8 GB
$ cd /aix_conv/ORCL_COPY/u03
$ mkdir db db1 db2
$ mv *hrlarge.0*dbf db; mv *hrlarge.10.dbf db; mv *hrlarge.11.dbf db; mv *hrlarge.14.dbf db
$ mv *hrlarge*.dbf db1; mv *ewlarge.0* db1;
$ mv *.dbf db2; mv db1/srt1.hrlarge.27.dbf db2;
cd /aix_conv/ORCL_COPY/u03/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u03/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u03/db2
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &
/u02=4.6 GB
cd /aix_conv/ORCL_COPY/u02
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &
/u06=4.2 GB
cd /aix_conv/ORCL_COPY/u06
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /backup/u09 ;zcat | tar xvf -" &
Stop here; once /u02 and /u06 finish, start the next two streams. This is just to avoid any performance impact on the source database due to these file-copy sessions, since both the source and its copy are on the same server.
/u12=106.9 GB db=52.8 GB db1=54.1 GB
$ cd /aix_conv/ORCL_COPY/u02
$ mkdir db db1
$ mv *hrindex.0*dbf db; mv *hrindex.11.dbf db; mv *hrindex.13.dbf db; mv *psindex.15.dbf db; mv *hrindex.17.dbf db
$ mv *.dbf db1
cd /aix_conv/ORCL_COPY/u02/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u02/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &
/u14=92.69 GB db=48.12 GB db1=44.56 GB
$ cd /aix_conv/ORCL_COPY/u04
$ mkdir db db1
$ mv *.0* db; mv *.1* db
$ mv *.dbf db1
cd /aix_conv/ORCL_COPY/u04/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u04 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u04/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u04 ;zcat | tar xvf -" &
10. Also make sure that you have copied the generated control file trace and the init.ora files to /backup on AIX_Server1
Locations of the files on Solaris_Server1:
a. trace file location: /backup/ORCL_COPY/oradump
b. init.ora file location: /optware/oracle/ORCL_COPY
Check if all datafiles have been copied correctly:
Note: After completion of the above, confirm the transfer of files.
Ways to check that the datafiles have been copied correctly:
Compare the datafile count of the file systems of ORCL and ORCL_COPY. Run ORCL_datafile_count.ksh and ORCL_COPY_datafile_count.ksh
Ex- cat ORCL_COPY_datafile_count.ksh on SOLARIS
ls -ltr /aix_conv/ORCL_COPY/u01/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u02/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u03/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u04/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u05/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u06/*.dbf | wc -l
cat ORCL_COPY_datafile_count.ksh on AIX
ls -ltr /aix_conv/ORCL_COPY/u01/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u02/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u03/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u04/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u05/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u06/*.dbf | wc -l
Run ps -ef | grep ORCL_COPY on Solaris_Server1, which will show the copy processes, since we run them in the background.
Conversion on Target:
Bring up the database ORCL_COPY and convert the datafiles of ORCL_COPY on the target server (AIX_Server1)
Note: Create any missing directories
1. Create an entry for ORCL_COPY in /etc/oratab for the 10.2.0.3_Q108 ORACLE_HOME
ORCL_COPY:/optware/oracle/10.2.0.3_Q108:Y
2. Copy the init.ora file from /backup to /dbms/oracle/local/ORCL_COPY/etc
$ cp initORCL_COPY.ora /dbms/oracle/local/ORCL_COPY/etc
3. Go to /dbms/oracle/local/ORCL_COPY/etc and edit the following in the init.ora for ORCL_COPY to reflect the correct file systems.
AUDIT_FILE_DEST ='/aix_conv/ORCL_COPY/oradump/adump'
background_dump_dest='/aix_conv/ORCL_COPY/oradump/bdump'
core_dump_dest='/aix_conv/ORCL_COPY/oradump/cdump'
user_dump_dest='/aix_conv/ORCL_COPY/oradump/udump'
control_files='/aix_conv/ORCL_COPY/u01/ORCL_COPY.control.01.ctl',
'/aix_conv/ORCL_COPY/u02/ORCL_COPY.control.02.ctl',
'/aix_conv/ORCL_COPY/u03/ORCL_COPY.control.03.ctl'
db_name='ORCL_COPY'
log_archive_dest_1='LOCATION=/aix_conv/ORCL_COPY/archive'
log_archive_format='ORCL_COPY_%s_%t_%r.arc'
service_names='ORCL_COPY.world','ORCL_COPY'
4. Run the env file in /dbms/oracle/local/ORCL_COPY/etc to set up environment for ORCL_COPY.
oracle@AIX_Server1[ORCL_COPY] /dbms/oracle/local/ORCL_COPY/etc > . ORCL_COPY.env
5. Make sure the environment is set correctly for ORCL_COPY. Use the below:
$echo $ORACLE_SID
$echo $ORACLE_HOME
$echo $PATH
6. Create a link in $ORACLE_HOME/dbs for init.ora
$ cd $ORACLE_HOME/dbs
$ ln -s /dbms/oracle/local/ORCL_COPY/etc/initORCL_COPY.ora initORCL_COPY.ora
7. Make a copy of the trace file under /backup to /optware/oracle/work/UPGRADE_PROD and rename it from xxxxxx.trc to create_controlfile_ORCL_COPY.sql
$ cp ORCL_COPY_ora_22678.trc /optware/oracle/work/UPGRADE_PROD
$ cd /optware/oracle/work/UPGRADE_PROD
$ mv xxxxxx.trc create_controlfile_ORCL_COPY.sql
8. Modify the create_controlfile_ORCL_COPY.sql under /optware/oracle/work/UPGRADE_PROD to reflect the correct paths for the data files and the correct SID name, like below
a. Modify "REUSE" to "SET" in the control file trace
b. Update the datafile paths to the /aix_conv/ORCL_COPY/uXX locations on this server
9. Check that all the datafiles are in the correct location where the database needs to be staged. cd to /optware/oracle/work/UPGRADE_PROD, log on to sqlplus, and startup nomount the database
$ cd /optware/oracle/work/UPGRADE_PROD
$ sqlplus "/ as sysdba"
SQL> startup nomount
10. Run the control file creation SQL
SQL> @create_controlfile_ORCL_COPY.sql
Note: Check for any errors in the alert log while creating the control files and resolve them before proceeding with next step.
11. Open the database with reset logs option
SQL> alter database open resetlogs;
12. Add a temp file to the temporary tablespace.
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/aix_conv/ORCL_COPY/ORCL_COPY.temp.10.dbf' SIZE 2000M REUSE AUTOEXTEND OFF;
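As a quick optional check that the temp file was added where expected (standard dictionary view):
SQL> select file_name, bytes/1024/1024 as mb from dba_temp_files;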
13. Check the list of platforms that the source database can be converted to:
SQL> SELECT * FROM v$db_transportable_platform;
14. Open the database ORCL_COPY in READ ONLY mode
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open read only;
15. Make sure that the database is in READ ONLY mode
SQL> select NAME,OPEN_MODE from v$database;
NAME OPEN_MODE
--------- ----------
ORCL_COPY READ ONLY
SQL> exit
16. Create a directory named convertdb under /optware/oracle/10.2.0.3_Q108/dbs
$cd $ORACLE_HOME/dbs
$mkdir convertdb
17. Open another session and monitor the alert log to catch any errors during conversion
18. At this point the RMAN CONVERT DATABASE command can be used:
$ cd /optware/oracle/10.2.0.3_Q108/
$ rman
-- Connect to the target database
RMAN> connect target /
-- The below converts the datafiles under /aix_conv/ORCL_COPY/uXX to AIX format
-- and places them under /uXX/oradata/ORCL_NEW/. It also creates a transport script
-- in /optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript.
RMAN> CONVERT DATABASE NEW DATABASE 'ORCL_NEW'
transport script '/optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript'
to platform 'AIX-Based Systems (64-bit)'
db_file_name_convert
'/aix_conv/ORCL_COPY/u01' '/u01/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u02' '/u02/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u03' '/u03/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u04' '/u04/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u05' '/u05/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u06' '/u06/oradata/ORCL_NEW' ;
Note: After successful conversion of the data files, success messages are displayed and the converted data files are created under /uXX/oradata/ORCL_NEW. The conversion should take approximately 6 hours.
The conversion messages at the end of the conversion look like the sample below
************************************************************
input datafile fno=00101 name=/aix_conv/ORCL_COPY/u02/ORCL_COPY.hrtrpts.01.dbf
converted datafile=/u02/oradata/ORCL_NEW/ORCL_NEW.hrtrpts.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00102 name=/aix_conv/ORCL_COPY/u01/ORCL_COPY.hrwlarge.01.dbf
converted datafile=/u01/oradata/ORCL_NEW/ORCL_NEW.hrwlarge.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00103 name=/aix_conv/ORCL_COPY/u03/oradata/ORCL/ORCL.pcapp.01.dbf
converted datafile=/u03/oradata/ORCL_NEW/ORCL_NEW.pcapp.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00104 name=/aix_conv/ORCL_COPY/u01/SRTR1.bplarge.01.dbf
converted datafile=/u01/oradata/ORCL_NEW/ORCL_NEW.hplarge.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Run SQL script /optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript on the target platform to create database
Edit init.ora file /optware/oracle/10.2.0.3_Q108/dbs/init_00k7onuv_1_0.ora. This PFILE will be used to create the database on the target platform
To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished backup at 19-FEB-09
19. After successful completion of the RMAN conversion, exit out.
RMAN> exit
Bring up database ORCL_NEW:
1. A new control file creation script “transportscript” is created under /optware/oracle/10.2.0.3_Q108/dbs/convertdb
2. Rename the datafiles from ORCL_COPY to ORCL_NEW by executing the script rename_files_from_ORCL_COPY_to_ORCL_NEW.ksh located in /optware/oracle/work/UPGRADE_PROD
3. Point the Oracle Home of ORCL_NEW to 10.2.0.3_Q108 in /etc/oratab like below.
ORCL_NEW:/optware/oracle/10.2.0.3_Q108:Y
4. Run the env file located in /dbms/oracle/local/ORCL_NEW/etc.
$ /dbms/oracle/local/ORCL_NEW/etc > . ORCL_NEW.env
5. Make sure the environment is set correctly for ORCL_NEW. Use below
$echo $ORACLE_SID
$echo $ORACLE_HOME
$echo $PATH
6. Go to $ORACLE_HOME/dbs and create a soft link for initORCL_NEW.ora in $ORACLE_HOME/dbs to point to /dbms/oracle/local/ORCL_NEW/etc. Use below
ln -s /dbms/oracle/local/ORCL_NEW/etc/initORCL_NEW.ora initORCL_NEW.ora
7. Make changes to initORCL_NEW.ora to reflect
AUDIT_FILE_DEST='/u01/oradump/ORCL_NEW/adump'
background_dump_dest='/u01/oradump/ORCL_NEW/bdump'
core_dump_dest='/u01/oradump/ORCL_NEW/cdump'
user_dump_dest='/u01/oradump/ORCL_NEW/udump'
control_files ='/u01/oradata/ORCL_NEW/ORCL_NEW.control.01.ctl',
'/u02/oradata/ORCL_NEW/ORCL_NEW.control.02.ctl',
'/u03/oradata/ORCL_NEW/ORCL_NEW.control.03.ctl'
db_name='ORCL_NEW'
log_archive_dest_1='LOCATION=/u07/oradata/ORCL_NEW'
undo_tablespace ='HRRBS'
log_archive_format='ORCL_NEW_%s_%t_%r.arc'
service_names='ORCL_NEW.world','ORCL_NEW'
08. Make a copy of create_controlfile_ORCL_COPY.sql in /optware/oracle/work/UPGRADE_PROD as create_controlfile_ORCL_NEW.sql (RECOMMENDED)
$ /optware/oracle/work/UPGRADE_PROD > cp create_controlfile_ORCL_COPY.sql create_controlfile_ORCL_NEW.sql
OR use the transport script to create a control file…
*************************
Modify "transportscript" script to reflect the converted data files location, intiORCL_NEW.ora location and the location for utlxxx scripts as per the ORACLE_HOME. Also make other any other necessary changes.
*********************
09. Modify the create_controlfile_ORCL_NEW.sql to reflect the following:
ORCL_COPY to ORCL_NEW
Redo log locations as follows, since I am using raw devices here:
GROUP 1 (
'/dev/roraORCL_NEWr1_01',
'/dev/roraORCL_NEWr2_01'
) SIZE 495M,
GROUP 2 (
'/dev/roraORCL_NEWr1_02',
'/dev/roraORCL_NEWr2_02'
) SIZE 495M,
GROUP 3 (
'/dev/roraORCL_NEWr1_03',
'/dev/roraORCL_NEWr2_03'
) SIZE 495M,
GROUP 4 (
'/dev/roraORCL_NEWr1_04',
'/dev/roraORCL_NEWr2_04'
) SIZE 495M,
GROUP 5 (
'/dev/roraORCL_NEWr1_05',
'/dev/roraORCL_NEWr2_05'
) SIZE 495M
Modify the datafile locations as below
'/aix_conv/ORCL_COPY/u01' to '/u01/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u02' to '/u02/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u03' to '/u03/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u04' to '/u04/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u05' to '/u05/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u06' to '/u06/oradata/ORCL_NEW';
10. Open another session and monitor alert log for ORCL_NEW in /u01/oradump/ORCL_NEW/bdump
11. cd to the /optware/oracle/work/UPGRADE_PROD directory and run the create_controlfile_ORCL_NEW.sql script from your work directory
Ex: $ sqlplus "/ as sysdba"
SQL> startup nomount
SQL>@/optware/oracle/work/UPGRADE_PROD/create_controlfile_ORCL_NEW.sql
SQL> alter database open resetlogs;
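As a quick sanity check after the resetlogs open (optional; mirrors the check used earlier in this procedure):
SQL> select name, open_mode from v$database;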
12. Add a temp file to the TEMP tablespace
SQL>ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/ORCL_NEW/ORCL_NEW.temp.10.dbf' SIZE 2000M REUSE AUTOEXTEND OFF;
13. Shutdown and startup just to see if there are any errors in the alert log. Closely monitor the alert log during this.
SQL> shutdown immediate
SQL> startup
14. Run utlirp.sql after shutting down and starting up in upgrade mode.
SQL> shutdown immediate
SQL> startup upgrade;
SQL> @?/rdbms/admin/utlirp.sql
15. Run utlrp.sql after restarting the database
SQL> shutdown immediate
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql
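After utlrp.sql finishes, it is worth confirming that the recompile left nothing invalid (a standard check; rerun utlrp.sql if the count is not zero):
SQL> SELECT count(*) FROM dba_objects WHERE status = 'INVALID';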
16. Drop the old temporary tablespaces TEMP and PSTEMP
SQL> drop tablespace TEMP including contents and datafiles cascade constraints;
SQL> drop tablespace PSTEMP including contents and datafiles;
17. Create temporary tablespace NEW_TEMP
SQL> CREATE TEMPORARY TABLESPACE "NEW_TEMP" TEMPFILE '/dev/roraORCL_NEWt_001' SIZE 3990M REUSE ;
18. Go to /optware/oracle/work/UPGRADE_PROD and run add_temp_files_ORCL_NEW.sql to add the rest of the temp files as needed
SQL> spool add_temp_files_ORCL_NEW.log
SQL> @add_temp_files_ORCL_NEW.sql
19. Create a block change tracking file to enable incremental backups of the database
SQL> alter database enable block change tracking using file '/u13/oradata/ORCL_NEW/ORCL_NEW.bct.01.dbf' reuse;
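To confirm that change tracking is enabled and pointing at the right file (standard view, optional check):
SQL> SELECT status, filename FROM v$block_change_tracking;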
20. Update the PSDBOWNER table if your database is a PeopleSoft database (in my case it is).
SQL> select * from psdbowner;
DBNAME OWNERID
-------- --------
ORCL SYSADM
SQL> update psdbowner set dbname='ORCL_NEW';
SQL> commit;
SQL> select * from psdbowner;
DBNAME OWNERID
-------- --------
ORCL_NEW SYSADM
21. Configure and bring up the listener.
1. Some Pre-Requisites or assumptions of source and target directories etc:
ORCL Directory Structure for datafiles Solaris_Server1 or SOURCE server:
/u01/ORCL/oradata/
/u02/ORCL/oradata/
/u03/ORCL/oradata/
/u04/ORCL/oradata/
/u05/ORCL/oradata/
/u06/ORCL/oradata/
Directory for environment files: /dbms/oracle/local/ORCL/etc
ORCL_COPY Directory Structures for datafiles Solaris_Server1 or SOURCE server:
/aix_conv/ORCL_COPY/u01/
/aix_conv/ORCL_COPY/u02/
/aix_conv/ORCL_COPY/u03/
/aix_conv/ORCL_COPY/u04/
/aix_conv/ORCL_COPY/u05/
/aix_conv/ORCL_COPY/u06/
Directory for environment files: /dbms/oracle/local/ORCL_COPY/etc
ORCL_COPY Directory Structures for datafiles AIX_Server1 or TARGET server:
/aix_conv/ORCL_COPY/u01/
/aix_conv/ORCL_COPY/u02/
/aix_conv/ORCL_COPY/u03/
/aix_conv/ORCL_COPY/u04/
/aix_conv/ORCL_COPY/u05/
/aix_conv/ORCL_COPY/u06/
Directory for environment files: /dbms/oracle/local/ORCL_COPY/etc
ORCL_COPY Directory Structures for datafiles AIX_Server1 or TARGET server:
/u01/ORCL_NEW/oradata/
/u02/ORCL_NEW/oradata/
/u03/ORCL_NEW/oradata/
/u04/ORCL_NEW/oradata/
/u05/ORCL_NEW/oradata/
/u06/ORCL_NEW/oradata/
Directory for environment files: /dbms/oracle/local/ORCL_NEW/etc
2. For ORCL_NEW we will be using raw data files for redo logs and.
Pre-Requisite Checks and prep work to transfer the data files to TARGET Server:
1. Some Pre-Conditions to be made on source(copy) or newly created database ORCL_COPY from ORCL:
I. Check list of platforms that we can convert source database into as below.
SQL> SELECT * FROM v$db_transportable_platform;
II. Shutdown and start newly created ORCL_COPY database in "READ ONLY" mode.
SQL> shutdown immediate;
Comment out below parameters:
audit_trail ='DB'
audit_file_dest='/aix_conv/ORCL_COPY/oradata'
db_cache_size =4000M
pga_aggregate_target =9000M
shared_pool_size =5000M
Add below entries from the init.ora file:
db_cache_size =1000M
pga_aggregate_target =2000M
shared_pool_size =1000M
Note: The above change of values to three parameters are recommended since we
don't need high memory values for a copy of the source database and also
recommended in order to minimize impact on the actual database.
SQL> startup mount;
SQL> alter database open read only;
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- ----------
ORCL_COPY READ ONLY
III.Use the following PL/SQL to ensure that no Internal Conditions exist that would prevent the transport of the database.
SQL> set serveroutput on
declare
db_ready boolean;
begin
db_ready:= dbms_tdb.check_db ('AIX-Based Systems (64-bit)', dbms_tdb.skip_none);
End;
/
Success output should be:
PS/PL SQL executed successfully
In case of an error output:
Note 1: If output error is similar to:
declare
*
ERROR at line 1:
ORA-25153: Temporary Tablespace is Empty
ORA-06512: at "SYS.DBMS_TDB", line 346
ORA-06512: at "SYS.DBMS_TDB", line 397
ORA-06512: at line 4
Resolution:
a. Shutdown the database and startup is READ/WRITE mode
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
b. Add a datafile to TEMP Tablespace and proceed to next step
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE ‘file_location_as_of_your_other_datafiles’ SIZE 500MB;
c. Repeat steps 1 & 2 again and see if you get the “Success output”, if you get it then proceed with next steps
Note 2: If the output error is similar to
Some files in the database have not been converted to the 10.0 compatibility format. Open the database in READ/WRITE mode and alter tablespaces HRAPP, HRLARGE, HRWORK, ............... to READ/WRITE. Then open the database in READ-ONLY mode and retry.
PL/SQL procedure successfully completed.
Resolution:
a. Shutdown the database and startup in READ/WRITE mode
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
b. Take one of the above mentioned tablespaces offline and bring them back online again.
SQL> alter tablespace HRLARGE offline;
SQL> alter tablespace HRLARGE online;
Note: Repeat Step b similarly for all the tablespaces as mentioned in Note2 output before proceeding.
c. Repeat steps 1 & 2 again and see if you get the “Success output”, if you get it proceed with next steps
IV. Execute below PL/SQL to identify any external objects.
SQL> set serveroutput on
declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/
Success output:
PS/PL SQL executed successfully
In case of an error output:
Note: If output ERROR is something similar to:
The following directories exist in the database:
SYS.DATA_PUMP_DIR
Resolution:
a. Shutdown and startup the database
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
b. Drop the above mentioned directory
SQL> DROP DIRECTORY DATA_PUMP_DIR;
c. Repeat Steps 1 & 3 and see if you get “Success output”, if you get it the proceed with next steps
V. If no error resolutions were as in Step 2: Note 2. Please do the following steps just for any one tablespace. This is to avoid an “ORA-“error or target server regarding SYSTEM file header before proceeding further on. This will update the system file header.
EX: SQL> alter tablespace HRLARGE offline;
SQL> alter tablespace HRLARGE online;
2. Backup control file to trace using the below command and shutdown database
SQL> alter database backup controlfile to trace;
The trace file is generated in the udump directory by the format xxxxxx.trc.
3. Copy the control file trace to temporary location where you need to stage the database for conversion AIX_Server1:/aix_convp. Remember this is the actual location where you will placing the database and will bringing up for future production usage.
4. Copy the init.ora file of ORCL_COPY to AIX_Server1:/aix_conv/ORCL_COPY
$ scp initORCL.ora AIX_Server:/aix_conv/ORCL_COPY
5. Get a count of data files on the source server Solaris_Server1 for database ORCL_COPY
SQL> spool ORCL_data_file_count.log
SQL> select count(*) from dba_data_files;
SQL> spool off
6. Shutdown the database ORCL_COPY on Solaris_Server.
SQL> select instance_name from v$instance;
INSTANCE_NAME
-------------
ORCL_COPY
SQL> shutdown immediate;
7. Change back the init parameters and copy the init.ora file of ORCL_COPY to AIX_Server1:/backup
Delete below entries from the init.ora file:
db_cache_size =1000M
pga_aggregate_target =2000M
shared_pool_size =1000M
Uncomment below parameters:
db_cache_size =4000M
pga_aggregate_target =9000M
shared_pool_size =5000M
$ scp initORCL_COPY1.ora AIX_Server1:/backup/ORCL_COPY
8. Comment out any ORCL_COPY entries in oratab file to avoid future auto startup of the database incase server is rebooted.
$cd /var/opt/oracle
And comment out below entry:
ORCL_COPY:/optware/oracle/10.2.0.4:Y
9. Transfer the files from Solaris_Server1 to AIX_Server1. Make directories with 50Gb each in size within the filesystems to have the datafiles to be copied in multiple streams and to avoid file too large errors on the unix size while copying. :
An Example of break-up of files and how to copy:
/u01=171GB db=57.3 GB db1=57.9 GB db2=56.3 GB
$ cd /aix_conv/ORCL_COPY/u01
$ mkdir db db1 db2
Directory structure under /aix_conv/ORCL_COPY/u01
Ex: /aix_conv/ORCL_COPY/u01/db
/db1
/db2
$ mv *hrlarge.0*dbf db; mv *hrlarge.12.dbf db; mv *hrlarge.13.dbf db
$ mv *hrlarge*.dbf db1; mv *usrtblspc*dbf db1;mv srtr1.pttbl.01.dbf db1
$ mv *.dbf db2
cd /aix_conv/ORCL_COPY/u01/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u01/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u01/db2
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u01 ;zcat | tar xvf -" &
/u03=169.8 GB db=57.3 GB db1= 56.6 GB db2= 55.8 GB
$ cd /aix_conv/ORCL_COPY/u03
$ mkdir db db1 db2
$ mv *hrlarge.0*dbf db; mv *hrlarge.10.dbf db; mv *hrlarge.11.dbf db; mv *hrlarge.14.dbf db
$ mv *hrlarge*.dbf db1; mv *ewlarge.0* db1;
$ mv *.dbf db2; mv db1/srt1.hrlarge.27.dbf db2;
cd /backup/ORCL_COPY/u03/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &
cd /backup/ORCL_COPY/u03/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &
cd /backup/ORCL_COPY/u03/db2
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u03 ;zcat | tar xvf -" &
/u02=4.6 GB
cd /aix_conv/ORCL_COPY/u02
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &
/u06=4.2 GB
cd /aix_conv/ORCL_COPY/u06
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /backup/u09 ;zcat | tar xvf -" &
Stop here, once /u02 and /u06 finish, start /02 and /04 just to avoid any
performance impact on source database due these copy file sessions when you have
both source and copy of the source on the same server
/u12=106.9 GB db=52.8 GB db1=54.1 GB
$ cd /aix_conv/ORCL_COPY/u02
$ mkdir db db1
$ mv *hrindex.0*dbf db; mv *hrindex.11.dbf db; mv *hrindex.13.dbf db; mv *psindex.15.dbf db; mv *hrindex.17.dbf db
$ mv *.dbf db1
cd /aix_conv/ORCL_COPY/u02/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u02/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u02 ;zcat | tar xvf -" &
/u14=92.69 GB db=48.12 GB db1=44.56 GB
$ cd /aix_conv/ORCL_COPY/u04
$ mkdir db db1
$ mv *.0* db; mv *.1* db
$ mv *.dbf db1
cd /aix_conv/ORCL_COPY/u04/db
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u04 ;zcat | tar xvf -" &
cd /aix_conv/ORCL_COPY/u04/db1
tar -cvf - *.dbf | compress | ssh AIX_Server1 "cd /aix_conv/ORCL_COPY/u04 ;zcat | tar xvf -" &
10. Also make sure that you have copied the correct control file trace generated and init.ora files to /backup on AIX_Server1
Locations of the files on Solaris_Server1:
a. trace file location: /backup/ORCL_COPY/oradump
b. init.ora file location: /optware/oracle/ORCL_COPY
Check if all datafiles have been copied correctly
Note: After completion of above confirm the transfer of files.
Ways to check the datafiles have been copied correctly
Compare the datafile count of the file systems of ORCL and ORCL_COPY. Run ORCL_datafile_count.ksh and ORCL_COPY_datafile_count.ksh
Ex- cat ORCL_COPY_datafile_count.ksh on SOLARIS
ls -ltr /aix_conv/ORCL_COPY/u01/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u02/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u03/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u04/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u05/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u06/*.dbf | wc –l
cat ORCL_COPY_datafile_count.ksh on AIX
ls -ltr /aix_conv/ORCL_COPY/u01/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u02/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u03/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u04/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u05/*.dbf | wc -l
ls -ltr /aix_conv/ORCL_COPY/u06/*.dbf | wc –l
Run ps –ef | grep ORCL_COPY on Solaris_Server which will show copy processes since we run them in background.
Conversion on Target:
Bring up the database ORCL_COPY and Convert the datafiles of ORCL_COPY on the target server (AIX_Server1)
Note: Create any missing directories
1. Create and entry for ORCL_COPY in /etc/oratab for the 10.2.0.3_Q108 ORACLE_HOME
ORCL_COPY:/optware/oracle/10.2.0.3_Q108:Y
2. Copy the init.ora file from /backup to/ dbms/oracle/local/ORCL_COPY/etc
$ cp initORCL_COPY.ora /dbms/oracle/local/ORCL_COPY/etc
3. Go to /dbms/oracle/local/ORCL_COPY/etc and
Edit the following in init.ora for ORCL_COPY located in /dbms/oracle/local/ORCL_COPY/etc to reflect the correct file systems.
AUDIT_FILE_DEST ='/aix_conv/ORCL_COPY/oradump/adump'
background_dump_dest='/aix_conv/ORCL_COPY/oradump/bdump'
core_dump_dest='/aix_conv/ORCL_COPY/oradump/cdump'
user_dump_dest='/aix_conv/ORCL_COPY/oradump/udump'
control_files='/aix_conv/ORCL_COPY/u01/ORCL_COPY.control.01.ctl',
'/aix_conv/ORCL_COPY/u02/ORCL_COPY.control.02.ctl',
'/aix_conv/ORCL_COPY/u03/ORCL_COPY.control.03.ctl'
db_name='ORCL_COPY'
log_archive_dest_1='LOCATION=/aix_conv/ORCL_COPY/archive'
log_archive_format='ORCL_COPY_%s_%t_%r.arc'
service_names='ORCL_COPY.world','ORCL_COPY'
4. Run the env file in /dbms/oracle/local/ORCL_COPY/etc to set up the environment for ORCL_COPY.
oracle@AIX_Server1[ORCL_COPY] /dbms/oracle/local/ORCL_COPY/etc > . ORCL_COPY.env
5. Make sure the environment is set correctly for ORCL_COPY. Use the below:
$echo $ORACLE_SID
$echo $ORACLE_HOME
$echo $PATH
6. Create a link in $ORACLE_HOME/dbs for init.ora
$ cd $ORACLE_HOME/dbs
$ ln -s /dbms/oracle/local/ORCL_COPY/etc/initORCL_COPY.ora initORCL_COPY.ora
7. Make a copy of the trace file under /backup to /optware/oracle/work/UPGRADE_PROD and rename it from xxxx.trc to create_controlfile_ORCL_COPY.sql
$ cp ORCL_COPY_ora_22678.trc /optware/oracle/work/UPGRADE_PROD
$ cd /optware/oracle/work/UPGRADE_PROD
$ mv xxxxxx.trc create_controlfile_ORCL_COPY.sql
8. Modify the create_controlfile_ORCL_COPY.sql under /optware/oracle/work/UPGRADE_PROD to reflect the correct paths for the datafiles and the correct SID name, like below (a sed sketch of these edits follows):
a. Change "REUSE" to "SET" in the control file trace (SET is required because the database is being renamed) and change the database name to ORCL_COPY.
b. Repoint the /u04 datafiles at /aix_conv/ORCL_COPY/u04/*.dbf.
c. Repoint the /u06 datafiles at /aix_conv/ORCL_COPY/u06/*.dbf.
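These edits can also be scripted rather than made by hand. A minimal sed sketch, assuming the trace still carries the source datafile paths under /uXX/oradata/ORCL (that source layout is an assumption; adjust the pattern to your trace):
$ cd /optware/oracle/work/UPGRADE_PROD
# REUSE -> SET (required because the database is being renamed), rename the
# database, and repoint /uXX/oradata/ORCL paths at the staging area
$ sed -e 's/REUSE DATABASE "ORCL"/SET DATABASE "ORCL_COPY"/' \
      -e 's#/\(u0[0-9]\)/oradata/ORCL/#/aix_conv/ORCL_COPY/\1/#g' \
      create_controlfile_ORCL_COPY.sql > tmp.sql && mv tmp.sql create_controlfile_ORCL_COPY.sql
Review the result before running it; any paths outside the /uXX/oradata/ORCL pattern must still be fixed manually.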
9. Check that all the datafiles are in the correct location where the database needs to be staged. cd to /optware/oracle/work/UPGRADE_PROD, log on to SQL*Plus, and start the database in nomount:
$ cd /optware/oracle/work/UPGRADE_PROD
$ sqlplus
SQL> startup nomount
10. Run the control file create sql
SQL> @create_controlfile_ORCL_COPY.sql
Note: Check for any errors in the alert log while creating the control files and resolve them before proceeding with next step.
11. Open the database with reset logs option
SQL> alter database open resetlogs;
12. Add a temp file to the temporary tablespace.
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/aix_conv/ORCL_COPY/ORCL_COPY.temp.10.dbf' SIZE 2000M REUSE AUTOEXTEND OFF;
13. Check the list of platforms the source database can be converted to, as below:
SQL> SELECT * FROM v$db_transportable_platform;
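To narrow the output to the platform name that the CONVERT command expects, the view can be filtered, for example:
SQL> select platform_name from v$db_transportable_platform where platform_name like 'AIX%';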
14. Open the database ORCL_COPY in READ ONLY mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open read only;
15. Make sure that the database is in READ ONLY mode:
SQL> select NAME,OPEN_MODE from v$database;
NAME OPEN_MODE
--------- ----------
ORCL_COPY READ ONLY
SQL> exit
16. Create a directory named convertdb under /optware/oracle/10.2.0.3_Q108/dbs
$cd $ORACLE_HOME/dbs
$mkdir convertdb
17. Open another session and monitor the alert log for any errors during conversion.
18. At this point the RMAN CONVERT DATABASE can be run:
$ cd /optware/oracle/10.2.0.3_Q108/
$ rman
--Connect to the target
RMAN> connect target;
--The below converts the datafiles under /aix_conv/ORCL_COPY/uXX to AIX format
--and places them under /uXX/oradata/ORCL_NEW/. It also creates a transport script
--in /optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript.
RMAN> CONVERT DATABASE NEW DATABASE 'ORCL_NEW'
transport script '/optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript'
to platform 'AIX-Based Systems (64-bit)'
db_file_name_convert
'/aix_conv/ORCL_COPY/u01' '/u01/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u02' '/u02/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u03' '/u03/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u04' '/u04/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u05' '/u05/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u06' '/u06/oradata/ORCL_NEW' ;
Note: After successful conversion of the datafiles, success messages are displayed and the converted datafiles are created under /uXX/oradata/ORCL_NEW. The conversion should take approximately 6 hours.
The success messages at the end of the conversion look like the sample below:
************************************************************
input datafile fno=00101 name=/aix_conv/ORCL_COPY/u02/ORCL_COPY.hrtrpts.01.dbf
converted datafile=/u02/oradata/ORCL_NEW/ORCL_NEW.hrtrpts.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00102 name=/aix_conv/ORCL_COPY/u01/ORCL_COPY.hrwlarge.01.dbf
converted datafile=/u01/oradata/ORCL_NEW/ORCL_NEW.hrwlarge.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00103 name=/aix_conv/ORCL_COPY/u03/ORCL_COPY.pcapp.01.dbf
converted datafile=/u03/oradata/ORCL_NEW/ORCL_NEW.pcapp.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00104 name=/aix_conv/ORCL_COPY/u01/ORCL_COPY.hplarge.01.dbf
converted datafile=/u01/oradata/ORCL_NEW/ORCL_NEW.hplarge.01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Run SQL script /optware/oracle/10.2.0.3_Q108/dbs/convertdb/transportscript on the target platform to create database
Edit init.ora file /optware/oracle/10.2.0.3_Q108/dbs/init_00k7onuv_1_0.ora. This PFILE will be used to create the database on the target platform
To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished backup at 19-FEB-09
19. After successful completion of the RMAN conversion, exit out.
RMAN> exit
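The RMAN output above notes that the converted database keeps the source DBID and that DBNEWID can assign a new one. A minimal sketch of that step (the utility prompts for confirmation; run it only if a new DBID is actually wanted):
SQL> shutdown immediate
SQL> startup mount
SQL> exit
$ nid TARGET=SYS
Note: nid shuts the database down when it finishes, so mount again and open with RESETLOGS:
SQL> startup mount
SQL> alter database open resetlogs;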
Bring up database ORCL_NEW:
1. A new control file creation script “transportscript” is created under /optware/oracle/10.2.0.3_Q108/dbs/convertdb
2. Rename the datafiles from ORCL to ORCL_NEW by executing the script rename_files_from_ORCL_COPY_to_ORCL_NEW.ksh located in /optware/oracle/work/UPGRADE_PROD
3. Point the Oracle Home of ORCL_NEW to 10.2.0.3_Q108 in /etc/oratab like below.
ORCL_NEW:/optware/oracle/10.2.0.3_Q108:Y
4. Run the env file located in /dbms/oracle/local/ORCL_NEW/etc.
$ /dbms/oracle/local/ORCL_NEW/etc > . ORCL_NEW.env
5. Make sure the environment is set correctly for ORCL_NEW. Use below
$echo $ORACLE_SID
$echo $ORACLE_HOME
$echo $PATH
6. Go to $ORACLE_HOME/dbs and create a soft link for initORCL_NEW.ora pointing to /dbms/oracle/local/ORCL_NEW/etc. Use the below:
ln -s /dbms/oracle/local/ORCL_NEW/etc/initORCL_NEW.ora initORCL_NEW.ora
7. Make changes to initORCL_NEW.ora to reflect
AUDIT_FILE_DEST='/u01/oradump/ORCL_NEW/adump'
background_dump_dest='/u01/oradump/ORCL_NEW/bdump'
core_dump_dest='/u01/oradump/ORCL_NEW/cdump'
user_dump_dest='/u01/oradump/ORCL_NEW/udump'
control_files='/u01/oradata/ORCL_NEW/ORCL_NEW.control.01.ctl',
'/u02/oradata/ORCL_NEW/ORCL_NEW.control.02.ctl',
'/u03/oradata/ORCL_NEW/ORCL_NEW.control.03.ctl'
db_name='ORCL_NEW'
log_archive_dest_1='LOCATION=/u07/oradata/ORCL_NEW'
undo_tablespace ='HRRBS'
log_archive_format='ORCL_NEW_%s_%t_%r.arc'
service_names='ORCL_NEW.world','ORCL_NEW'
8. Make a copy of create_controlfile_ORCL_COPY.sql in /optware/oracle/work/UPGRADE_PROD as create_controlfile_ORCL_NEW.sql (RECOMMENDED)
$ /optware/oracle/work/UPGRADE_PROD> cp create_controlfile_ORCL_COPY.sql create_controlfile_ORCL_NEW.sql
OR use the transport script to create a control file…
*************************
Modify "transportscript" script to reflect the converted data files location, intiORCL_NEW.ora location and the location for utlxxx scripts as per the ORACLE_HOME. Also make other any other necessary changes.
*********************
9. Modify the create_controlfile_ORCL_NEW.sql to reflect the following:
ORCL_COPY to ORCL_NEW
Redo log locations as follows, since I am using raw devices here:
GROUP 1 (
'/dev/roraORCL_NEWr1_01',
'/dev/roraORCL_NEWr2_01'
) SIZE 495M,
GROUP 2 (
'/dev/roraORCL_NEWr1_02',
'/dev/roraORCL_NEWr2_02'
) SIZE 495M,
GROUP 3 (
'/dev/roraORCL_NEWr1_03',
'/dev/roraORCL_NEWr2_03'
) SIZE 495M,
GROUP 4 (
'/dev/roraORCL_NEWr1_04',
'/dev/roraORCL_NEWr2_04'
) SIZE 495M,
GROUP 5 (
'/dev/roraORCL_NEWr1_05',
'/dev/roraORCL_NEWr2_05'
) SIZE 495M
Modify the datafile locations as below
'/aix_conv/ORCL_COPY/u01' to '/u01/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u02' to '/u02/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u03' to '/u03/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u04' to '/u04/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u05' to '/u05/oradata/ORCL_NEW',
'/aix_conv/ORCL_COPY/u06' to '/u06/oradata/ORCL_NEW';
10. Open another session and monitor alert log for ORCL_NEW in /u01/oradump/ORCL_NEW/bdump
11. cd to the /optware/oracle/work/UPGRADE_PROD directory and run the create_controlfile_ORCL_NEW.sql script from your work directory.
Ex: $ sqlplus
SQL> startup nomount
SQL>@/optware/oracle/work/UPGRADE_PROD/create_controlfile_ORCL_NEW.sql
SQL> alter database open resetlogs;
12. Add a temp file to the TEMP tablespace
SQL>ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/ORCL_NEW/ORCL_NEW.temp.10.dbf' SIZE 2000M REUSE AUTOEXTEND OFF;
13. Shut down and start up just to see if there are any errors in the alert log. Closely monitor the alert log while doing this.
SQL> shutdown immediate
SQL> startup
14. Run utlirp.sql after shutting down and starting up in upgrade mode.
SQL> shutdown immediate
SQL> startup upgrade;
SQL> @?/rdbms/admin/utlirp.sql
15. Run utlrp.sql after restarting the database.
SQL> shutdown immediate
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql
16. Drop the old temporary tablespaces TEMP and PSTEMP
SQL> drop tablespace TEMP including contents and datafiles cascade constraints;
SQL> drop tablespace pstemp including contents and datafiles;
17. Create temporary tablespace NEW_TEMP
SQL> CREATE TEMPORARY TABLESPACE "NEW_TEMP" TEMPFILE '/dev/roraORCL_NEWt_001' SIZE 3990M REUSE ;
18. Go to /optware/oracle/work/UPGRADE_PROD and run add_temp_files_ORCL_NEW.sql to add the rest of the temp files as needed
SQL> spool add_temp_files_ORCL_NEW.log
SQL> @add_temp_files_ORCL_NEW.sql
19. Create a block change tracking file to enable fast incremental backups of the database
SQL> alter database enable block change tracking using file '/u13/oradata/ORCL_NEW/ORCL_NEW.bct.01.dbf' reuse;
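To confirm tracking is active, query V$BLOCK_CHANGE_TRACKING:
SQL> select status, filename from v$block_change_tracking;
STATUS should show ENABLED with the file path given above.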
20. Update the PSDBOWNER table if your database is a PeopleSoft database; in my case it is.
SQL> select * from psdbowner;
DBNAME OWNERID
-------- --------
ORCL SYSADM
SQL> update psdbowner set dbname='ORCL_NEW';
SQL> commit;
SQL> select * from psdbowner;
DBNAME OWNERID
-------- --------
ORCL_NEW SYSADM
21. Configure and bring up the listener.
What are Redo Logs, Size estimation and How to Resize and/or Add Redo logs
What are Redo Logs?
Each Oracle database has at least one redo log. The redo log records all changes made to datafiles and allows Oracle to replay those changes. Changes are written to the redo logs before they are written to the datafiles, so if a datafile must be restored to a point in time before a change, the redo information stored in the redo logs can be used to roll it forward.
How to estimate the correct size of redo logs?
The best way to estimate the size of the redo logs is to check the alert log and see how frequently log switches occur; the optimal switch interval is between 20 and 30 minutes for better database performance. Oracle recommends a minimum redo log size of 100MB for any production database running release 8.1.7 or higher.
Check the alert log for messages showing how frequently the log sequence# changes, checkpointing activity, and any incomplete redo log switch messages. When you examine the alert.log for log switches, do it during the heavier load periods on the database. Below is a sample snapshot of redo log switches in an alert log:
Fri Jan 27 09:57:40 2004
ARC0: Evaluating archive log 1 thread 1 sequence 174
ARC0: Beginning to archive log 1 thread 1 sequence 174
Creating archive destination LOG_ARCHIVE_DEST_1:
'/u02/app/oracle/product/9.2.0/dbs/arch/arc_1_174.src'
ARC0: Completed archiving log 1 thread 1 sequence 174
Fri Jan 27 09:57:52 2004
Thread 1 advanced to log sequence 176
Current log# 3 seq# 176 mem# 0: /u02/oradata/redo/redo03.log
Fri Jan 27 09:57:52 2004
ARC1: Evaluating archive log 2 thread 1 sequence 175
ARC1: Beginning to archive log 2 thread 1 sequence 175
Creating archive destination LOG_ARCHIVE_DEST_1:
'/u02/app/oracle/product/9.2.0/dbs/arch/arc_1_175.src'
ARC1: Completed archiving log 2 thread 1 sequence 175
Fri Jan 27 09:58:07 2004
The above snapshot from the alert log shows that the current redo logs are not big enough: the switches are only seconds apart.
Since the Oracle-recommended optimal log switch interval is 20-30 minutes, if the log switch is occurring more frequently, check the v$log view for the current redo log sizes. Based on how frequently the logs switch on average, create redo log groups of a larger size, switch the redo log a few times with "ALTER SYSTEM SWITCH LOGFILE;" so that the CURRENT redo log is one of the bigger files, and then drop the smaller redo log groups with the "ALTER DATABASE DROP LOGFILE ..." command.
The default redo log file size on newer versions of Oracle is 100MB, and Oracle recommends spreading the redo logs over multiple physical devices for optimal redo log switch performance.
Keep in mind that too-frequent checkpoints and log file switch completion problems will affect database performance. If there are waits on "log file switch completion", you will also need to add one or more redo log groups to resolve this wait event in the Statspack report. A query for measuring switch frequency directly is shown below.
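Log switch frequency can also be measured from the data dictionary instead of eyeballing the alert log. A minimal sketch using v$log_history:
-- count of log switches per hour; sustained values well above 2-3 per hour
-- suggest the online redo logs are undersized
SQL> select to_char(first_time,'YYYY-MM-DD HH24') hr, count(*) switches
from v$log_history
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;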
How To Resize and/or Add Redo Logs?
Note: The steps below apply to versions 9.2.0.1 and up.
1. Review information on existing redo logs.
SQL> SELECT a.group#, b.member, a.status, a.bytes
FROM v$log a, v$logfile b
WHERE a.group#=b.group#;
2. Add new groups
SQL> ALTER DATABASE ADD LOGFILE group 4 ('/log01A.dbf', '/log01B.dbf') SIZE 512M;
SQL> ALTER DATABASE ADD LOGFILE group 5 ('/log02A.dbf', '/log02B.dbf') SIZE 512M;
SQL> ALTER DATABASE ADD LOGFILE group 6 ('/log03A.dbf', '/log03B.dbf') SIZE 512M;
3. Check the status on all redo logs again.
SQL> SELECT a.group#, b.member, a.status, a.bytes FROM v$log a, v$logfile b
WHERE a.group#=b.group#;
4. Drop the online redo log groups that are not needed. You must have the ALTER DATABASE system privilege.
Note: Before dropping an online redo log group, consider the following restrictions and precautions:
a. An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
b. You can drop an online redo log group only if it is INACTIVE. If you need to drop the current group, first force a log switch to occur (see the example after this list).
c. Make sure an online redo log group is archived (if archiving is enabled) before dropping it.
d. Check that the group is inactive and archived before dropping it, then drop it:
SQL> SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
--------- --- ----------------
1 YES ACTIVE
2 NO CURRENT
3 YES INACTIVE
4 YES UNUSED
5 YES UNUSED
6 YES UNUSED
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
e. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files. For more information refer to Note 395062.1.
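For restriction b above, forcing the switch (and a checkpoint, so the old group goes INACTIVE sooner) looks like this:
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;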
How to determine the optimal size for redo logs (revisited)?
You can use the V$INSTANCE_RECOVERY view column OPTIMAL_LOGFILE_SIZE to determine the size of your online redo logs. This field shows the redo log file size in megabytes that is considered optimal based on the current setting of FAST_START_MTTR_TARGET. If this field consistently shows a value greater than the size of your smallest online log, then you should configure all your online logs to be at least this size.
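For example (10g and later; the column is populated only when FAST_START_MTTR_TARGET is set):
SQL> select optimal_logfile_size from v$instance_recovery;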
Note: The redo log file size affects the MTTR. In some cases, you may be able to refine your choice of the optimal FAST_START_MTTR_TARGET value by re-running the MTTR Advisor with your suggested optimal log file size.
Note: Some of the information presented in this post references Oracle Metalink notes 1038851.6 and 602066.1, and some is based on experience. It is presented here for readers who do not have access to Oracle Metalink.
Sunday, March 28, 2010
Block Checking SQL
Below is a quick SQL to check for blocking and waiting sessions in the database:
set pagesize 100
set linesize 250
col module for a20
col event for a26
col last_call for a18
col objectname for a30
col username for a10
select /*+RULE */s.username username , s.sid,s.serial#, p.spid,o.OBJECT_NAME objectname, s.process,substr(s.module,1,20) module,
to_char(sysdate - s.last_call_et/60/60/24,'DD-MON-YYYY HH24:MI') last_call,
substr(w.event,1,24) event,
decode(block ,0,'NO','YES') BLOCKER,
decode(request,0,'NO','YES') WAITER
from v$lock l, v$session s, v$process p, v$session_wait w,v$locked_object lb, dba_objects o where
(l.request > 0 or l.block > 0 ) and s.sid=l.sid and p.addr=s.paddr and s.last_call_et > 600 and w.sid=s.sid and s.sid = lb.SESSION_ID and lb.OBJECT_ID = o.OBJECT_ID order by last_call_et desc;
Note: The above SQL is used during some of my daily tasks; it can be modified for custom information retrieval. It is intended for quick use and informational purposes.
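Once a blocker is identified, it can be terminated if appropriate. The sid and serial# below are placeholder values taken from the query output:
SQL> alter system kill session '123,4567';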
Disaster Recovery (DR) Startup and Shutdown Commands
Steps to shutdown a DR Database:
Note: Before shutting down a DR Database it is good to make sure that both Production and DR Databases are in sync.
SQL to check if both PROD and DR are in sync; execute it on PROD. It compares the highest sequence generated with the highest sequence applied:
SQL> select (select max(sequence#) from v$archived_log) "Last Seq",
(select max(sequence#) from v$archived_log where applied='YES') "Applied Log"
from dual;
Example Output:
Last Seq Applied Log
---------- -----------
32412 32412
DR Shutdown commands:
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate;
DR Startup command:
SQL> startup nomount;
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;
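After the startup commands, it is worth confirming that managed recovery (MRP) is actually running. A quick check, assuming a physical standby:
SQL> select process, status, sequence# from v$managed_standby;
An MRP0 row with status APPLYING_LOG or WAIT_FOR_LOG indicates recovery is active.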
Thursday, March 25, 2010
SQL to get tablespace information
Here is a quick SQL to get information regarding tablespaces in a database:
select df.tablespace_name "Tablespace",
totalusedspace "Used MB",
(df.totalspace - tu.totalusedspace) "Free MB",
df.totalspace "Total MB",
round(100 * ( (df.totalspace - tu.totalusedspace)/ df.totalspace)) "Pct. Free"
from
(select tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from dba_data_files
group by tablespace_name) df,
(select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name
from dba_segments
group by tablespace_name) tu
where df.tablespace_name = tu.tablespace_name ;
Note: The above SQL is informational only; I use it on a daily basis.
To Get Active Sessions and SQL Info
Here is a quick SQL to get session and SQL information:
select /*+RULE */ SID,SPID,ses.SERIAL#,ses.USERNAME,ses.module,SQL_TEXT,pro.TERMINAL
from v$sqlarea sql, v$session ses,v$process pro where HASH_VALUE=SQL_HASH_VALUE and status = 'ACTIVE' and
ses.username is not null and
ses.PADDR = pro.ADDR
order by sql_text;
The above SQL statement can be customized based on your requirements if you need more information.
Note: The above SQL is informational only; I use it on a daily basis.
How to get constraint information
Below is a quick SQL to get all constraint information on a particular table:
SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner, cons.constraint_type FROM all_constraints cons, all_cons_columns cols
WHERE cols.table_name = 'DEPT'
AND cons.constraint_name = cols.constraint_name
AND cons.owner = cols.owner
ORDER BY cols.table_name, cols.position;
i. For cols.table_name, give the table for which you want to retrieve the constraint information.
ii. For information on only a particular type of constraint on a table, i.e., if you need information regarding only
"Primary Key" constraints, use the SQL below:
SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner
FROM all_constraints cons, all_cons_columns cols
WHERE cols.table_name = 'DEPT'
AND cons.constraint_type = 'P'
AND cons.constraint_name = cols.constraint_name
AND cons.owner = cols.owner
ORDER BY cols.table_name, cols.position;
Below are the different values you can give for cons.constraint_type to get information regarding a particular type of constraint on a table. Constraint type definitions:
•C (check constraint on a table)
•P (primary key)
•U (unique key)
•R (referential integrity)
•V (with check option, on a view)
•O (with read only, on a view)
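For referential (R) constraints it is often useful to also see which table the foreign key points to. A small sketch joining all_constraints back on r_constraint_name (DEPT is just the example table used above):
SELECT cons.constraint_name, cons.table_name, r.table_name referenced_table
FROM all_constraints cons, all_constraints r
WHERE cons.table_name = 'DEPT'
AND cons.constraint_type = 'R'
AND cons.r_owner = r.owner
AND cons.r_constraint_name = r.constraint_name;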
Note: The above SQLs are informational only; I use them on a daily basis.