Sukant

Sunday, April 11, 2010

Windows to Linux Migration

--------------------------------------------------------------------------------------------------------------------------------------------

Cross-Platform Migration on Destination Host Using Rman Convert Database [ID 414878.1]

--------------------------------------------------------------------------------------------------------------------------------------------

Migration from Windows 32-bit to Linux 32-bit:
-------------------------------------------------

Note the following restrictions exist with Cross-Platform Transportable Database:
----------------------------------------------------------------------------------

1) The principal restriction on cross-platform transportable database is that the source and destination platform must share the same endian format.
2) Redo log files and control files from the source database are not transported.
New control files and redo log files are created for the new database during the transport process,
and an OPEN RESETLOGS is performed once the new database is created. Similarly,
tempfiles belonging to locally managed temporary tablespaces are not transported.
The temporary tablespace will be re-created on the target platform when the transport script is run.
3) BFILEs, external tables and directories, and password files are not transported.
4) The source and target database versions must be 10.2.0 or higher.


--------------------------------------------------------------------------------------------------------------------------------------------


1) Check that the source and destination platforms belong to the same endian format in the view V$TRANSPORTABLE_PLATFORM.
We will try to transport a database from Windows (32-bit) to Linux (32-bit). As seen below, both belong to the Little endian format:


SQL> select PLATFORM_NAME, ENDIAN_FORMAT from V$TRANSPORTABLE_PLATFORM;

PLATFORM_NAME ENDIAN_FORMAT
---------------------------------------- --------------
Solaris[tm] OE (32-bit) Big
Solaris[tm] OE (64-bit) Big
Microsoft Windows IA (32-bit) Little
Linux IA (32-bit) Little
AIX-Based Systems (64-bit) Big
HP-UX (64-bit) Big
HP Tru64 UNIX Little
HP-UX IA (64-bit) Big
Linux IA (64-bit) Little
HP Open VMS Little
Microsoft Windows IA (64-bit) Little
IBM zSeries Based Linux Big
Linux 64-bit for AMD Little
Apple Mac OS Big
Microsoft Windows 64-bit for AMD Little
Solaris Operating System (x86) Little
IBM Power Based Linux Big

17 rows selected.
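
To compare just the two platforms involved, the same view can be filtered directly (a quick sketch):

SQL> select platform_name, endian_format
from v$transportable_platform
where platform_name in ('Microsoft Windows IA (32-bit)', 'Linux IA (32-bit)');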

SQL> SELECT USERENV('language') FROM DUAL;


USERENV('LANGUAGE')
----------------------------------------------------
AMERICAN_AMERICA.WE8ISO8859P1


SQL> select * from nls_database_parameters;

PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET WE8ISO8859P1
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM

PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 10.2.0.1.0

20 rows selected.

2) Open the database in read only mode

SQL> alter database open read only;

Database altered.


3) Use DBMS_TDB.CHECK_DB to check whether the database can be transported to a desired destination platform,
and whether the current state of the database (such as incorrect compatibility settings, or in-doubt or active transactions) permits transport.
Make sure your database is open in READ ONLY mode before you call DBMS_TDB.CHECK_DB.
Since we need to transport to Linux, we will call the procedure with the following arguments:


SQL> set serveroutput on
SQL> declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('Linux IA (32-bit)');
end;
/

PL/SQL procedure successfully completed.


If you call DBMS_TDB.CHECK_DB and no messages indicating conditions preventing transport are displayed
before the "PL/SQL procedure successfully completed" message, then your database is ready for transport.
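
DBMS_TDB.CHECK_DB also accepts an optional skip option as a second argument. The variation below is a sketch (not part of the original note) that skips datafiles in read-only tablespaces and prints the outcome, using the documented DBMS_TDB constants:

SQL> set serveroutput on
SQL> declare
db_ready boolean;
begin
-- skip_readonly ignores datafiles belonging to read-only tablespaces
db_ready := dbms_tdb.check_db('Linux IA (32-bit)', dbms_tdb.skip_readonly);
if db_ready then
dbms_output.put_line('Database is ready for transport.');
else
dbms_output.put_line('See messages above for conditions preventing transport.');
end if;
end;
/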

4) Use DBMS_TDB.CHECK_EXTERNAL to identify any external tables, directories or BFILEs. RMAN cannot automate the transport of such files as mentioned above.

SQL> set serveroutput on
SQL> declare
external boolean;
begin
external := dbms_tdb.check_external;
end;
/

The following directories exist in the database:
SYS.DATA_PUMP_DIR

PL/SQL procedure successfully completed.


SQL> alter database backup controlfile to trace;

(The trace file produced in udump will be used later to create a dummy control file on the destination host.)


5) Once the database is ready for transport, decide where you are going to convert the datafiles.
You may choose to convert the datafiles of the database being transported on the destination platform instead of the source platform.
Reasons for doing this include:


+ Avoiding any performance overhead on the source host due to the conversion process.
+ Distributing a database from one source system to multiple recipients on several different platforms.


6) Run the RMAN CONVERT DATABASE command on the source platform specifying the ON TARGET PLATFORM argument.
When used in this manner, the command syntax is as follows:


Create a folder at the OS level temporarily to hold the generated scripts. EXAMPLE: 'E:\mig\'

C:\> rman target sys/prod@prod nocatalog


RMAN>CONVERT DATABASE ON TARGET PLATFORM
CONVERT SCRIPT 'E:\mig\convertscript.rman'
TRANSPORT SCRIPT 'E:\mig\transportscript.sql'
new database 'orcllnx'
FORMAT 'E:\mig\%U';


Starting convert at 12-FEB-07
using channel ORA_DISK_1

Directory SYS.DATA_PUMP_DIR found in the database

User SYS with SYSDBA and SYSOPER privilege found in password file
channel ORA_DISK_1: starting to check datafiles
input datafile fno=00001 name=D:\ORACLE\ORADATA\ORA10G\DATA\SYSTEM.DBF
channel ORA_DISK_1: datafile checking complete, elapsed time: 00:00:00
channel ORA_DISK_1: starting to check datafiles
input datafile fno=00002 name=D:\ORACLE\ORADATA\ORA10G\DATA\UNDOTBS1.DBF
channel ORA_DISK_1: datafile checking complete, elapsed time: 00:00:00
channel ORA_DISK_1: starting to check datafiles
input datafile fno=00003 name=D:\ORACLE\ORADATA\ORA10G\DATA\SYSAUX.DBF
channel ORA_DISK_1: datafile checking complete, elapsed time: 00:00:00
channel ORA_DISK_1: starting to check datafiles
input datafile fno=00004 name=D:\ORACLE\ORADATA\ORA10G\DATA\USERS.DBF
channel ORA_DISK_1: datafile checking complete, elapsed time: 00:00:00
Run SQL script C:\TRANSPORTSCRIPT.SQL on the target platform to create database
Edit init.ora file C:\INIT_00I9SC77_1_0.ORA. This PFILE will be used to create the database on the target platform
Run RMAN script C:\CONVERTSCRIPT.RMAN on target platform to convert datafiles
To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished backup at 12-FEB-07



The CONVERT DATABASE ON TARGET PLATFORM command produces a transport script containing SQL*Plus commands
to create a new database on the destination platform, and a PFILE for the new database containing
the same settings as the source database. The PFILE is generated in the location specified by the FORMAT clause.

Note that CONVERT DATABASE ON TARGET PLATFORM does not produce converted datafile copies.


7) Now copy the following files to the destination machine manually:


+ Datafiles (here they were moved to the '/ora10gdata/orcllnx/mig' location)
+ Transportscript.sql
+ Convertscript.rman
+ PFILE generated by the CONVERT DATABASE command
+ Password file
+ udump trace file (from 'alter database backup controlfile to trace')
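
Whichever transfer tool you use, the files must be copied in binary mode (with FTP, issue 'binary' before transferring). From the Linux side, for example, with a hypothetical Windows host name and an SSH service assumed on it:

$ scp 'oracle@winhost:/mig/*' /ora10gdata/orcllnx/mig/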

8) Edit the PFILE copied to the destination host to change the environment-specific parameters. Rename the password file to match the new database name.
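
On Linux the password file conventionally lives in $ORACLE_HOME/dbs and is named orapw<SID>, so instead of renaming the copied Windows file you can simply re-create it with the orapwd utility (a sketch; the password is a placeholder):

$ orapwd file=$ORACLE_HOME/dbs/orapworcllnx password=oracle entries=5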

9) Create a dummy Controlfile (from udump trace file)


sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Feb 12 20:10:52 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 201326592 bytes
Fixed Size 1218484 bytes
Variable Size 67110988 bytes
Database Buffers 125829120 bytes
Redo Buffers 7168000 bytes

SQL>CREATE CONTROLFILE REUSE SET DATABASE 'orcllnx' RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/ora10gdata/orcllnx/mig/REDO01.LOG' SIZE 50M,
GROUP 2 '/ora10gdata/orcllnx/mig/REDO02.LOG' SIZE 50M,
GROUP 3 '/ora10gdata/orcllnx/mig/REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE
-- GROUP 4 '/ora10gdata/orcllnx/mig/REDO04.LOG' SIZE 50M,
-- GROUP 5 '/ora10gdata/orcllnx/mig/REDO05.LOG' SIZE 50M,
-- GROUP 6 '/ora10gdata/orcllnx/mig/REDO06.LOG' SIZE 50M
DATAFILE
'/ora10gdata/orcllnx/mig/SYSTEM01.DBF',
'/ora10gdata/orcllnx/mig/UNDOTBS01.DBF',
'/ora10gdata/orcllnx/mig/SYSAUX01.DBF',
'/ora10gdata/orcllnx/mig/USERS01.DBF'
CHARACTER SET WE8MSWIN1252
;


Note: 1) The datafile locations above are where you copied the datafiles from Windows to Linux temporarily.
----- EX: '/ora10gdata/orcllnx/mig' --> temporary location

2) Use the 'REUSE' and 'SET' keywords in the first line of the CREATE CONTROLFILE script.


10) Now edit the file Convertscript.rman and make the necessary changes with respect to the filesystem and the file names.
Once the changes are done, run the script from the RMAN prompt:


[oracle@localhost ~]$ rman target / nocatalog @/ora10gdata/orcllnx/mig/CONVERTSCRIPT.RMAN

Recovery Manager: Release 10.2.0.1.0 - Production on Sun Apr 11 18:43:36 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ORCLLNX (DBID=127554213, not open)
using target database control file instead of recovery catalog

RMAN> RUN {
2>
3> CONVERT DATAFILE '/ora10gdata/orcllnx/mig/SYSTEM01.DBF'
4> FROM PLATFORM 'Microsoft Windows IA (32-bit)'
5> FORMAT '/ora10gdata/orcllnx/SYSTEM01.DBF' ;
6>
7>
8> CONVERT DATAFILE '/ora10gdata/orcllnx/mig/SYSAUX01.DBF'
9> FROM PLATFORM 'Microsoft Windows IA (32-bit)'
10> FORMAT '/ora10gdata/orcllnx/SYSAUX01.DBF' ;
11>
12>
13> CONVERT DATAFILE '/ora10gdata/orcllnx/mig/UNDOTBS01.DBF'
14> FROM PLATFORM 'Microsoft Windows IA (32-bit)'
15> FORMAT '/ora10gdata/orcllnx/UNDOTBS01.DBF' ;
16>
17>
18> CONVERT DATAFILE '/ora10gdata/orcllnx/mig/USERS01.DBF'
19> FROM PLATFORM 'Microsoft Windows IA (32-bit)'
20> FORMAT '/ora10gdata/orcllnx/USERS01.DBF' ;
21>
22>
23> }
24>
25>
Starting backup at 11-APR-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=155 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input filename=/ora10gdata/orcllnx/mig/SYSTEM01.DBF
converted datafile=/ora10gdata/orcllnx/SYSTEM01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:46
Finished backup at 11-APR-10

Starting backup at 11-APR-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input filename=/ora10gdata/orcllnx/mig/SYSAUX01.DBF
converted datafile=/ora10gdata/orcllnx/SYSAUX01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:26
Finished backup at 11-APR-10

Starting backup at 11-APR-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input filename=/ora10gdata/orcllnx/mig/UNDOTBS01.DBF
converted datafile=/ora10gdata/orcllnx/UNDOTBS01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03
Finished backup at 11-APR-10

Starting backup at 11-APR-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input filename=/ora10gdata/orcllnx/mig/USERS01.DBF
converted datafile=/ora10gdata/orcllnx/USERS01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished backup at 11-APR-10



11) Now shut down the database and delete the dummy control file.

12) Now edit the transport SQL script to reflect the new paths for the datafiles and redo log files in the CREATE CONTROLFILE section of the script.
Also change all references to the INIT.ORA in the script to the new path and name of the INIT.ORA modified above.


13) Once the PFILE and transport SQL scripts are suitably modified, invoke SQL*Plus on the destination host
after setting the Oracle environment variables, and then run the transport script as follows:


[oracle@test-br ora10g]$ export ORACLE_HOME=/ora10gsoft/10.2.0/
[oracle@test-br ora10g]$ export ORACLE_SID=orcllnx
[oracle@test-br ora10g]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@test-br LinDB10g]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Sat Feb 3 01:55:46 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> @TRANSPORTSCRIPT.SQL
ORACLE instance started.

Total System Global Area 201326592 bytes
Fixed Size 1218484 bytes
Variable Size 67110988 bytes
Database Buffers 125829120 bytes
Redo Buffers 7168000 bytes

Control file created.


Database altered.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Your database has been created successfully!
* There are many things to think about for the new database. Here
* is a checklist to help you stay on track:
* 1. You may want to redefine the location of the directory objects.
* 2. You may want to change the internal database identifier (DBID)
* or the global database name for this database. Use the
* NEWDBID Utility (nid).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Database closed.
Database dismounted.
ORACLE instance shut down.
ORACLE instance started.

Total System Global Area 201326592 bytes
Fixed Size 1218484 bytes
Variable Size 67110988 bytes
Database Buffers 125829120 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
...
...


--------------------------------------------------------------------------------------------------------------------------------------------


SAMPLE MODIFIED 'TRANSPORT.SQL' FILE

--------------------------------------------------------------------------------------------------------------------------------------------



-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY

STARTUP NOMOUNT PFILE='/ora10gsoft/10.2.0/dbs/initorcllnx.ora'
CREATE CONTROLFILE REUSE SET DATABASE 'orcllnx' RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/ora10gdata/orcllnx/REDO01.LOG' SIZE 50M,
GROUP 2 '/ora10gdata/orcllnx/REDO02.LOG' SIZE 50M,
GROUP 3 '/ora10gdata/orcllnx/REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE
-- GROUP 4 '/ora10gdata/orcllnx/REDO04.LOG' SIZE 50M,
-- GROUP 5 '/ora10gdata/orcllnx/REDO05.LOG' SIZE 50M,
-- GROUP 6 '/ora10gdata/orcllnx/REDO06.LOG' SIZE 50M
DATAFILE
'/ora10gdata/orcllnx/SYSTEM01.DBF',
'/ora10gdata/orcllnx/UNDOTBS01.DBF',
'/ora10gdata/orcllnx/SYSAUX01.DBF',
'/ora10gdata/orcllnx/USERS01.DBF'
CHARACTER SET WE8MSWIN1252
;

-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP01 ADD TEMPFILE '/ora10gdata/orcllnx/TEMP01.DBF'
SIZE 20971520 AUTOEXTEND OFF;
-- End of tempfile additions.
--

create spfile from pfile;

set echo off
prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
prompt * Your database has been created successfully!
prompt * There are many things to think about for the new database. Here
prompt * is a checklist to help you stay on track:
prompt * 1. You may want to redefine the location of the directory objects.
prompt * 2. You may want to change the internal database identifier (DBID)
prompt * or the global database name for this database. Use the
prompt * NEWDBID Utility (nid).
prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SHUTDOWN IMMEDIATE
STARTUP UPGRADE
@@ ?/rdbms/admin/utlirp.sql
SHUTDOWN IMMEDIATE
STARTUP
-- The following step will recompile all PL/SQL modules.
-- It may take several hours to complete.
@@ ?/rdbms/admin/utlrp.sql
set feedback 6;



Note: If the temporary tablespace is locally managed, then drop and re-create the temporary tablespace:
-----


SQL> drop tablespace temp including contents and datafiles;
drop tablespace temp including contents and datafiles
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace


SQL> Create temporary TABLESPACE TEMP01 TEMPFILE '/ora10gdata/orcllnx/mig/TEMP01.DBF'
SIZE 20971520 AUTOEXTEND OFF;

Tablespace created.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;

Database altered.

SQL> DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

--------------------------------------------------------------------------------------------------------------------------------------------

Friday, April 9, 2010

Data Guard broker config

REFERENCE:
-----------

DATA GUARD CREATION:
-------------------
http://www.oracledba.org/10g/dr/10gR2_dataguard_on_Linux.html

DATA GUARD BROKER CREATION
--------------------------
http://www.orafaq.com/wiki/Data_Guard_implementation_for_Oracle_10gR2
http://www.oracle.com/technology/pub/articles/smiley-fsfo.html
http://www.oraclecommunity.net/profiles/blogs/configuration-of-10g-data
http://www.ritzyblogs.com/OraTalk/EntryId/122/How-to-setup-DGMGRL-broker-with-example


********************************************************************************************************************************

PREREQUISITES:
-----------------

1. Database should be in FORCE LOGGING mode.
2. Database should be in ARCHIVELOG mode.
3. Init parameter dg_broker_start=true.
4. GLOBAL_DBNAME entries with the _DGMGRL suffix in the listener.ora file.
5. Standby logfiles on both sides.
6. Init parameter log_archive_config='DG_CONFIG=(primary,standby)'.
7. Verify your Data Guard setup before configuring the broker; resolve any errors first. (A quick verification sketch follows.)
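
A minimal sketch to verify and set items 1, 2, and 6 above (standard SQL; substitute your own db_unique_name values in DG_CONFIG):

SQL> select force_logging, log_mode from v$database;

SQL> alter database force logging;

SQL> alter system set log_archive_config='DG_CONFIG=(primary,standby)' scope=both;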


Now on the PRIMARY site check the status of Standby Archiving destination.

SQL> Select Status, Error from v$Archive_dest where dest_id=2;

The STATUS should return VALID. If it returns an error, then check the connectivity between the Primary and Standby machines.


On the PRIMARY Database: (Open stage)

Query V$DATABASE to check the role of Primary Database.

SQL> Select Database_role from v$Database;
It will return "PRIMARY",

Now check the Switchover Status of the Primary Database.

SQL> Select switchover_status from v$Database;
It will return "SESSIONS ACTIVE".

On the PHYSICAL STANDBY Database: (Mount stage) Query V$DATABASE to check the role of Standby Database.

SQL> Select Database_role from v$Database;
It will return "PHYSICAL STANDBY",

Now check the Switchover Status of the Standby Database.

SQL> Select switchover_status from v$Database;
It will return "SESSIONS ACTIVE".


********************************************************************************************************************************


IF YOU USE FSFO (FAST-START FAILOVER) ONLY
---------------------------------------

1. Flashback Database should be enabled.
2. Fast-start failover should be enabled in DGMGRL.
3. Start the observer on a third machine.
4. Oracle version should be 10.2 or higher.


********************************************************************************************************************************

1. Set up init parameters on primary to enable broker
--------------------------------------------------------

NOTE: If you are planning to use automatic failover (FSFO), please refer to the FSFO prerequisites above before configuring the following.

SQL> alter system set dg_broker_start=false sid='*';
System altered.

SQL> Alter database add standby logfile
('C:\oracle\product\10.2.0\oradata\ORCL\StandbyRedo.log') size 50m; (It should be the same size as the online redo log files.)

SQL> alter system set dg_broker_config_file1='/n01/dg_broker_config_files/dr1PRIMARY.dat' sid='*';
System altered.

SQL> alter system set dg_broker_config_file2='/n01/dg_broker_config_files/dr2PRIMARY.dat' sid='*';
System altered.

SQL> alter system set dg_broker_start=true sid='*';
System altered.




2. Verify if DMON process has started on all the instances of primary.
-----------------------------------------------------------------------
Example:

$ ps -ef|grep dmon|grep -v grep
oracle 16190 1 0 08:53 ? 00:00:00 ora_dmon_PRIMARY

$ ps -ef|grep dmon|grep -v grep
oracle 29723 1 0 08:53 ? 00:00:00 ora_dmon_PRIMARYR2



3. Set up init parameters on standby
-------------------------------------

SQL> alter system set dg_broker_start=false sid='*';
System altered.

SQL> alter system set dg_broker_config_file1='/export/crawlspace/dg_broker_config_files/dr1STANDBY.dat' sid='*';
System altered.

SQL> alter system set dg_broker_config_file2='/export/crawlspace/dg_broker_config_files/dr2STANDBY.dat' sid='*';
System altered.

SQL> alter system set dg_broker_start=true sid='*';
System altered.



4. Setting up the listener
-----------------------------

GLOBAL_DBNAME should be set to <db_unique_name>_DGMGRL.<db_domain> in listener.ora on all instances of both primary and standby.
This is important; otherwise you'll get a TNS-12154 error during switchover operations.

Example:

SID_LIST_LISTENER_PRIMARY =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PRIMARY)
      (GLOBAL_DBNAME = PRIMARY_DGMGRL)
      (ORACLE_HOME = /apps/oracle/product/10g/db)
    )
    (SID_DESC =
      (SID_NAME = STANDBY)
      (GLOBAL_DBNAME = STANDBY_DGMGRL)
      (ORACLE_HOME = /apps/oracle/product/10g/db)
    )
  )
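
After editing listener.ora on each host, reload and check the listener with the standard lsnrctl commands, passing the listener name if it is not the default:

$ lsnrctl reload LISTENER_PRIMARY
$ lsnrctl status LISTENER_PRIMARY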




5. DGMGRL Configuration
------------------------

5.1 Connect
-------------

DGMGRL> CONNECT sys/sys@primary
Connected.


5.2 Create Configuration
------------------------

DGMGRL> CREATE CONFIGURATION 'DG_TEST' AS PRIMARY DATABASE IS 'PRIMARY' CONNECT IDENTIFIER IS PRIMARY;
Configuration "DG_TEST" created with primary database "PRIMARY".


DG_TEST --> any name.


5.3 Add standby database to the configuration
-----------------------------------------------

DGMGRL> ADD DATABASE 'STANDBY' AS CONNECT IDENTIFIER IS STANDBY MAINTAINED AS PHYSICAL;
Database "STANDBY" added.


5.4 Verifying everything
---------------------------
DGMGRL> SHOW CONFIGURATION;
Configuration
  Name:            DG_TEST
  Enabled:         YES
  Protection Mode: MaxPerformance
  Databases:
    PRIMARY - Primary database
    STANDBY - Physical standby database

Current status for "DG_TEST":
SUCCESS

Note: If you don't get "SUCCESS", or there is any other issue, kindly refer to the Troubleshooting chapter --> No. 6.
-----

DGMGRL> SHOW DATABASE VERBOSE 'PRIMARY';

DGMGRL> SHOW DATABASE VERBOSE 'STANDBY';


Note: It should give the result SUCCESS. Also check the LogXptMode, StandbyArchiveLocation, and NetTimeout properties.
------

DGMGRL> edit database 'PRIMARY' set property 'LogXptMode'='SYNC';

DGMGRL> edit database 'STANDBY' set property 'LogXptMode'='SYNC';


DGMGRL> edit database 'PRIMARY' set property 'StandbyArchiveLocation'='C:\oracle\product\oradata\arch_stand';

DGMGRL> edit database 'STANDBY' set property 'StandbyArchiveLocation'='C:\oracle\product\10.2.0\oradata\arch_prod';


DGMGRL> edit database 'PRIMARY' set property 'NetTimeout'='10';

DGMGRL> edit database 'STANDBY' set property 'NetTimeout'='10';



5.5 Verify database; if RAC, verify that all instances are validated
-----------------------------------------------------------------

DGMGRL> show database 'PRIMARY';
Database
  Name:           PRIMARY
  Role:           PRIMARY
  Enabled:        NO
  Intended State: ONLINE
  Instance(s):
    PRIMARY
    PRIMARYR2

Current status for "PRIMARY":
DISABLED



5.6 Enable the broker
----------------------
DGMGRL> ENABLE CONFIGURATION;
Enabled.


********************************************************************************************************************************

Switchover:
--------------

Now we are ready to switch over the PRIMARY database role to the STANDBY database role.

DGMGRL> switchover to 'Standby';
…..
…..
The primary database is successfully converted to a physical standby. You can switch the standby database back to the primary role by executing the following command:

DGMGRL> switchover to 'Primary';
…..
…..
The standby database is successfully converted back to the primary database.
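
You can confirm the new roles from SQL*Plus on either side, reusing the V$DATABASE columns queried earlier:

SQL> select database_role, switchover_status from v$database;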


Failover: (Manual)
------------------

Failover can be done through the same configuration without any alteration. You simply need to execute the following command:

DGMGRL> failover to 'Standby';
….
Failover to standby succeeded.
And also…

DGMGRL> failover to 'Primary';
….
Failover to primary succeeded.
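
Note: after a failover, the broker disables the old primary in the configuration. If Flashback Database was enabled on it, the broker can reinstate it as a standby instead of requiring a full rebuild (10.2 DGMGRL command; use the database name from your configuration):

DGMGRL> reinstate database 'Primary';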



********************************************************************************************************************************



--------------------------
AUTOMATIC FAILOVER (FSFO)
--------------------------



DATA GUARD OBSERVER:
-----------------------

The observer is a utility that comes with Data Guard. Its basic purpose is to keep the database available to the
clients at all times. It is started at a separate location, other than the Primary and Standby locations. After it starts,
the observer continuously watches both sites. Whenever the PRIMARY database fails for any reason and the connection between
the PRIMARY site and the observer breaks, the observer waits for a certain time, which is set up while creating the broker configuration.
When this time passes, the observer automatically starts the failover process. As a result, the physical standby database is converted
into the PRIMARY database. After this failover, the users connect to the database again, and this connection redirects them automatically to the
new PRIMARY database.
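
The "certain time" mentioned above is the broker's FastStartFailoverThreshold property, in seconds. It can be adjusted from DGMGRL; a minimal sketch, using the 30-second value shown later in this post:

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;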


OBSERVER CONFIGURATION:
-------------------------

There are some important configuration settings that need to be set before starting the Data Guard (DG) observer.
The main thing is to set the standby database to the Maximum Availability protection mode. The DG broker configuration
must also be set to the "MaxAvailability" protection mode. After this, we need to enable the "fast_start failover" option of the broker configuration.
Without setting these options, the broker will not allow the observer to proceed with a remote failover. Following are the steps to enable the observer.


OBSERVER SITE: (Information)
------------------------------------

OBSERVER Machine IP: 192.168.1.65
PRIMARY Machine IP: 192.168.1.46
STANDBY Machine IP: 192.168.1.96

TNS Service Name for PRIMARY site: to_primary (Through this service, the observer machine will be connected to PRIMARY machine)

TNS Service Name for STANDBY site: to_standby (Through this service, the observer machine will be connected to STANDBY machine)

(NOTE: Fresh DG broker configuration will be created for implementation of Observer. Therefore, remove any previously created configuration.)




Step 1:
--------

Create TNS services on the observer machine for the PRIMARY and STANDBY sites, through which the observer machine
will be connected to the primary and standby databases.

The TNS service name for the PRIMARY site is 'primary'. The TNS service name for the STANDBY site is 'to_standby'.

Step 2:
---------
Before starting the broker CLI (DGMGRL) we have to enable FLASHBACK on both the Primary and Standby databases.

At mount stage execute:

SQL> Alter database Flashback on;
Database altered.

Step 3:
--------
On the STANDBY site, edit the log_archive_dest_2 parameter and add three attributes, i.e. LGWR AFFIRM SYNC. These attributes are needed
to bring the broker protection mode to MaxAvailability.

SQL> Alter System set Log_archive_dest_2='Service=primary
LGWR AFFIRM SYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
db_unique_name=primary';
System altered.

Step 4:
---------
Now execute the following command on the primary database to change the protection mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY.

SQL> Alter database set standby database to maximize availability;

Shut down the PRIMARY database and then restart it. Check the protection mode on both the PRIMARY and STANDBY databases.

SQL> select protection_mode from v$database;

It should return MAXIMUM AVAILABILITY.

Step 5:
--------
Now start the DG CLI (DGMGRL) to start the broker.

C:\> dgmgrl
DGMGRL for 32-bit Windows: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL>

DGMGRL> connect sys/oracle@primary
Connected.
Check any previously created configuration and remove it.

DGMGRL> show configuration;
If a configuration is displayed then remove it.

DGMGRL> remove configuration;
Removed.
Now create a new configuration.

DGMGRL> create configuration 'broker1' as
primary database is 'primary'
connect identifier is primary;
Configuration "broker1" created with primary database "primary"



DGMGRL> add database 'standby' as
connect identifier is STANDBY
maintained as physical;
Database "standby" added



DGMGRL> show configuration;
Configuration
  Name:                broker1
  Enabled:             NO
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    primary - Primary database
    standby - Physical standby database

Current status for "broker1":
DISABLED



DGMGRL> enable configuration;
Enabled.



DGMGRL> show configuration;
Configuration
  Name:                broker1
  Enabled:             YES
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    primary - Primary database
    standby - Physical standby database

Current status for "broker1":
SUCCESS

DGMGRL> edit database PRIMARY set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database STANDBY set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> edit database 'primary' set property 'StandbyArchiveLocation'='C:\oracle\product\oradata\arch_stand';

DGMGRL> edit database 'standby' set property 'StandbyArchiveLocation'='C:\oracle\product\10.2.0\oradata\arch_prod';


DGMGRL> edit database 'primary' set property 'NetTimeout'='10';

DGMGRL> edit database 'standby' set property 'NetTimeout'='10';


DGMGRL> EDIT DATABASE 'primary' SET PROPERTY FastStartFailoverTarget='standby';

DGMGRL> EDIT DATABASE 'standby' SET PROPERTY FastStartFailoverTarget='primary';



DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show configuration;
Configuration
  Name:                broker1
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    primary - Primary database
    standby - Physical standby database
      - Fast-Start Failover target

Current status for "broker1":
Warning: ORA-16608: one or more databases have warnings

If you check this warning by querying the StatusReport property of the PRIMARY database,
you will find that the fast-start failover observer is not started. Simply start the observer.

DGMGRL> show database 'primary' 'StatusReport';

Now start the observer.

DGMGRL> start observer    (on a third machine, apart from primary and standby)
Observer Started.

It may appear to hang, but that is not a problem.

The observer has started its work. The current 'cmd' session will not be returned to you because the observer runs as a foreground process.
It will continuously observe the PRIMARY site. To check the configuration, open another 'cmd' window and check the configuration.

DGMGRL> show configuration;
Configuration
  Name:                broker1
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    primary - Primary database
    standby - Physical standby database
      - Fast-Start Failover target

Fast-Start Failover
  Threshold: 30 seconds
  Observer:  orat

Current status for "broker1":
SUCCESS
___________________________________________

At this stage everything is running normally. An identical copy of the PRIMARY database is being maintained at the STANDBY site.
If, for any reason, the PRIMARY database crashes, the observer will start its work.

DGMGRL> start observer
Observer started.
15:21:21.69 Wednesday, March 07, 2007
Initiating fast-start failover to database "standby"…
Performing failover NOW, please wait…
Operation requires shutdown of instance "orcl" on database "standby"
Shutting down instance "orcl"…
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "standby" Starting instance "orcl"…

********************************************************************************************************************************


6. Troubleshooting
-------------------

REF : http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10822/cli.htm

Let us see some sample issues and their fixes.

Issue

DGMGRL> CONNECT sys/sys
ORA-16525: the Data Guard broker is not yet available

Fix
Set dg_broker_start=true

Issue
After enabling the configuration, on issuing SHOW CONFIGURATION, this error comes
Warning: ORA-16608: one or more sites have warnings

Fix
To know the details of the error, you may check the log generated in the bdump directory, named drc<DB_NAME>.log, or query the various monitorable properties to check the database status and assist in further troubleshooting.

A few monitorable properties to troubleshoot:
DGMGRL> SHOW DATABASE 'PRIMARY' 'StatusReport';
DGMGRL> SHOW DATABASE 'PRIMARY' 'LogXptStatus';
DGMGRL> SHOW DATABASE 'PRIMARY' 'InconsistentProperties';
DGMGRL> SHOW DATABASE 'PRIMARY' 'InconsistentLogXptProps';
DGMGRL> SHOW DATABASE 'STANDBY' 'StatusReport';
DGMGRL> SHOW DATABASE 'STANDBY' 'LogXptStatus';
DGMGRL> SHOW DATABASE 'STANDBY' 'InconsistentProperties';
DGMGRL> SHOW DATABASE 'STANDBY' 'InconsistentLogXptProps';

Issue
DGMGRL> SHOW DATABASE 'PRIMARY' 'StatusReport';
STATUS REPORT
INSTANCE_NAME SEVERITY ERROR_TEXT
PRIMARYR2 WARNING ORA-16714: The value of property ArchiveLagTarget is inconsistent with the database setting.
PRIMARYR2 WARNING ORA-16714: The value of property LogArchiveMaxProcesses is inconsistent with the database setting.

Issue
DGMGRL> SHOW DATABASE 'PRIMARY' 'InconsistentProperties';
INCONSISTENT PROPERTIES
INSTANCE_NAME PROPERTY_NAME MEMORY_VALUE SPFILE_VALUE BROKER_VALUE
PRIMARYR2 ArchiveLagTarget 0 0
PRIMARYR2 LogArchiveMaxProcesses 4 2 4

Example
DGMGRL> SHOW DATABASE 'PRIMARY' 'LogArchiveMaxProcesses';
LogArchiveMaxProcesses = '4'

Fix
DGMGRL> EDIT DATABASE 'PRIMARY' SET PROPERTY 'LogArchiveMaxProcesses'=2;

or

SQL> alter system set log_archive_max_processes=4 scope=spfile sid='*';
System altered.

DGMGRL> SHOW DATABASE 'PRIMARY' 'LogArchiveMaxProcesses';
LogArchiveMaxProcesses = '4'

More commands
DGMGRL> SHOW DATABASE VERBOSE 'dbname';
This will show all property values in detail

DGMGRL> HELP;
List of all broker commands with usage help

Equivalent Broker Commands to 'ALTER SYSTEM'
SQL> alter database recover managed standby database cancel;
DGMGRL> edit database 'stby_dbname' set state='LOG-APPLY-OFF';

SQL> alter database recover managed standby database disconnect;
DGMGRL> edit database 'stby_dbname' set state='ONLINE';

SQL> alter system set log_archive_max_processes=4;
DGMGRL> edit database 'dbname' set property 'LogArchiveMaxProcesses'=4;

SQL> alter system set log_archive_dest_state_2='enable' scope=both;
DGMGRL> edit database 'stby_dbname' set property 'LogShipping'='ON';

SQL> alter system set log_archive_dest_state_2='defer' scope=both;
DGMGRL> edit database 'stby_dbname' set property 'LogShipping'='OFF';

DGMGRL> edit database 'pri_dbname' set state='LOG-TRANSPORT-OFF';
This will defer all standby databases


That's it my friends. You are all set to use broker now for managing your dataguard configuration.

Friday, March 26, 2010

performance tuning tips

Oracle
1. Please avoid SQL built-in functions such as TO_CHAR, UPPER, etc. in SQL statements
2. Please try to put joins between indexed columns only
3. Arrange the order of the WHERE conditions in such a way as to fetch fewer rows.
4. Use BULK techniques for handling huge amounts of data (see the sketch after this list)
5. Always use cursors in PL/SQL programs, even if they fetch one row.
6. Use FOR loops instead of OPEN cursors
7. Use 'commit' immediately after 'UPDATE' statements.
8. Avoid infinite loops in SQL statements.
9. Please put the minimum number of constraints in database tables.
10. Avoid database triggers
11. Use 'surrogate keys' (self-join keys) for hierarchy representations
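
As an illustration of tip 4, here is a minimal BULK COLLECT/FORALL sketch; the emp table and its emp_id and salary columns are hypothetical:

DECLARE
  TYPE t_emp_ids IS TABLE OF emp.emp_id%TYPE;
  v_ids t_emp_ids;
BEGIN
  -- Fetch all ids in one round trip instead of row by row
  SELECT emp_id BULK COLLECT INTO v_ids FROM emp;
  -- Apply the update as a single bulk-bound statement
  FORALL i IN 1 .. v_ids.COUNT
    UPDATE emp SET salary = salary * 1.05 WHERE emp_id = v_ids(i);
  COMMIT;
END;
/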

Performance Tuning Tips

Tuning Methodology:

The Tuning methodology involves five steps as shown below.

1. Analyze the system. Decide if there is a problem; if there is a problem, what is it?
2. Determine the problem. What do you think is causing the problem? Why?
3. Determine a solution and set goals. Decide what you want to try and what you think will result from your changes. What is it you want to accomplish? Do you want more throughput? Faster response times? what?
4. Test the solution. Try it. See what happens.
5. Analyze the results. Did the solution meet the goals? If not, you may have to go back to step 1, 2, or 3.

The following sections describe each of these steps individually and explain why each step is important to the final outcome.

1. Examine the Problem:

The first phase in tuning the system should always be the determination phase. First, see whether you even have a problem. You may just be at the limits of the configuration. To make sure, examine the system, look for problems, and make this determination logically.
The determination phase consists of examining the system as it currently is, looking for problems or bottlenecks, and identifying “hot spots.” Keep in mind that performance problems may not always be obvious.






Here is a partial list of some areas you should examine:

• Application code. The problem may consist of an application that executes excessive code or performs table scans when they are not necessary.

• Oracle database engine. The problem may be an improperly sized System Global Area (SGA) or some other tuning parameters.
• Operating System parameters. The OS may not be configured properly and may be
starving Oracle of needed resources.
• Hardware configuration. The layout of the database may not be efficient and may be causing a disk bottleneck.
• Network. The network may be overloaded and causing excessive collisions and delays.

To categorize performance issues into the following three classes:

• It’s broken. Performance is severely handicapped because of a configuration error or an incorrect parameter in the OS or RDBMS. Problems in this category are usually oversights during the system build (such as incorrectly building an index or forgetting to enable Asynchronous I/O).
• It’s not optimized. Performance is slightly degraded because of a small miscalculation in parameters or because system capacity is slightly exceeded. These types of problems are usually easily solved by fine tuning the configurations.
• Not a problem. Don’t forget that sometimes there isn’t a problem—you have just reached the capacity of the system. This situation can easily be solved by upgrading or adding more capacity. Not all problems can be solved with tuning.
For the first class of problem, you may have to take drastic action. Tuning may solve the second class of problem. For the third class of problem, you need take no action.

Here are some questions to ask yourself when examining the system and searching for problems:

• Are you getting complaints from the user community?
• Are some operations taking much longer than the same operations did in the past?
• Is CPU usage low but I/O usage high?
• Are you seeing excessive response times?
• Does the system seem sluggish?

If you answer yes to any of these questions, you may be seeing some performance problems that can be fixed. In some cases, you have reached the limitations of your hardware and should add an additional CPU or perhaps some more disks.

If you answer no to all these questions, your system may be running very well. However, you should still periodically monitor the performance of the system to avoid future problems.

2. Determine the Problem

Once you have decided that there is some sort of problem in the system, make a careful analysis to determine the cause of the problem.

At first glance, it is not always obvious what the problem is. You may see excessive disk I/O on a certain table that may lead you to believe you have an I/O problem when, in fact, you have a cache-hit problem or an index problem that is causing excessive I/Os.

Take a few minutes to think about the problem. Decide what area you believe the problem to be in. Thinking through the problem can help you avoid a long trial-and-error period in which you randomly try different things without positive results.

At this stage, note-taking is a must. By writing down your ideas and thoughts, you can better organize them. If you have to go back and rethink the problem, your notes will be very useful.


Response Time Example:

The user response time is measured from when the data is input and submitted until the results are displayed on the user’s display device. To analyze a response time problem, start by considering the components.

The problem can be broken into three major areas:

• Client. The problem could be with the client application code.
• Server. The problem could be with the database engine.
• Network. Some problem in the network could be delaying processing.

You can use analysis tools to help determine the area to which you can isolate the response time problem.

By putting debug statements into the application code, you can determine whether the delays are part of the client component. By using the EXPLAIN PLAN command, you may see that you are not taking advantage of an index and are causing excessive table scans.

We have heard of several examples in client/server systems in which the end user complains about the performance of the server: it is causing 10-second or greater response times. After analysis, it was determined that the database was giving sub-second response times but was spending over 9 seconds in the GUI (Graphical User Interface).

Be open minded and look at every component of the system as a potential for bottlenecks. You may be surprised to find that the problem is something simple and in an area you would not normally think would cause problems.

Here are some questions to ask yourself when trying to determine the cause of the performance problem:

• What are the effects of the problem: response times? Throughput? What?
• Does every user see the same problem or does the problem affect just a few users? If so,
what are those few users doing differently?
• What does the system monitor tell you? Are you 100-percent CPU bound? What about
I/Os: are you exceeding I/O limits?
• Is only a certain type of transaction a problem? What about other transaction types? Are
they okay?
• Is it a client problem or a server problem?

By asking yourself these questions and making up your implementation-specific questions, you will get more insight into how the application operates and where the potential bottlenecks are.


3. Determine the Solution and Set Goals:

Regardless of the type of problem you have, you must determine a course of action. In many cases, determining the problem has already pointed you to a solution. In other cases, more analysis is necessary.

Once the problem has been determined, you must set some goals about what you want the solution to accomplish. When you set specific goals, not only can you measure success, but specific goals may sometimes point you to specific solutions.

The solution may consist of changing a parameter in the OS or in Oracle; the solution may consist of a rewrite of a section of application code. With packaged applications, you may have to tune the OS and database to overcome a problem in the application.

The determination of a solution should include some expectation of what results you expect to see. Changing a parameter without at least knowing what it is supposed to do may provide the desired results or may instead mask the true problem.


4. Goal Setting:

Once you determine the problem (or at least have a pretty good idea about the area causing the problem), you should set some goals. Your ultimate goal is always optimal performance. Your goals should be specific and achievable.


Here are some examples of realistic and achievable goals:

• Increase the cache-hit ratio in the buffer cache.
• Reduce the number of I/Os.
• Increase the number of connections supported.

Goals such as these may or may not be achievable in your system but at least they give you an area on which to focus your tuning efforts. Unachievable goals always end in disappointment. Although we all have the goal to have the best-performing system possible, smaller, more specific goals can help you achieve the larger goal more effectively.

Here are some questions to ask yourself when you are determining the solution and setting result goals:

• What will this solution do? What do you expect the result of your changes to be?
• How will your changes affect performance? For example, does a better cache-hit rate mean fewer I/Os?
• Will this solution directly or indirectly affect the problem?
• How will your idea of the problem change if this solution doesn’t fix the problem?

These are just a few examples of the kind of things to think about when determining the solution and setting goals.

5. Test the Solution:

Once you put the solution in place, you must test the solution to see whether you have achieved the desired results. Although you may be able to test the results in a test configuration by setting up an emulated load, you may have no choice but to put the system into the production environment for analysis.

During the testing stage, it is also very important to take notes. These notes can be extremely valuable in the future when similar problems arise and you are looking for solutions.
Note: It is not recommended to put test configurations into production unless you completely understand the changes that have been made and have assessed the risks.








6. Analyze the Results:

The final step is to analyze the results of any changes you have made. If the desired results have not been achieved, go back to the analysis stage to determine whether more tuning is necessary.

You should always ask the question why? It is possible that the problem has been incorrectly identified or that you have not chosen the correct solution for that problem.

Here are some questions to ask yourself during the analysis phase:

• Did the change have the desired results?
• Did the overall system performance change?
• Do you now think that the problem is different from what you first thought?
• Do you think more changes are necessary?

These are some examples of the types of questions you should ask yourself to determine whether the problem has been solved—or even affected—by the changes. You may have to go back and reanalyze the problem and try different things. Maybe you did not understand the problem correctly and have to add some additional debug code to the application to track down the problem.

Tuning SQL Statements:

SQL Code Tuning

What is SQL code?

SQL code is the code directly accessing the database, embedded either in
applications or in stored procedures. Sometimes generic SQL code is used,
which is SQL code generated by an application on an ad hoc basis. Generic
SQL code can cause serious performance issues.

What causes SQL code performance problems, and what is SQL code
tuning?

Performance is served most effectively when developers and database administrators work together to a certain extent. Poorly written SQL code is often the biggest culprit of performance problems. It is expensive to rectify, but still cheaper than changing
the data model. SQL code tends to be contained inside independent blocks within applications or stored procedures. This containment is commonly known as embedded SQL code.



Tuning SQL code is in general a two-step process, as follows:

1. Isolation and recoding of the worst-performing SQL statements,
perhaps the slowest-performing 10% of SQL code.

2. General tuning of SQL code involving changes to SQL statements throughout applications and the database, plus adjustments to alternate (secondary) indexing. Alternate indexing is not specifically part of the steps of normalization and denormalization but can be designated as data modeling or SQL code tuning.
It is important that database administrators have some involvement with respect to alternate indexing, at the very least in an advisory capacity. Too many or inappropriately constructed alternate indexes can completely destroy performance.

What are the benefits of SQL code tuning?

SQL code tuning can increase performance between 25% and 100%, sometimes much more. In rare situations, we have seen enormous performance improvements when tuning SQL code. SQL code is often embedded in applications in such a way that changing the SQL code does not affect application functionality.

What we need to examine in order to tune Oracle installations:

• A tuning environment
• Good software tools
• Skilled personnel
• Staging (testing) environments
• A realistic duplication of the production environment:
  - Actual and expected production environments. These can often be different if growth is rapid or requirements change.
  - If possible, databases should be of the same size and content. If this is impractical, then at least development and testing databases should be proportionate to production.
  - Are the statistics the same? Statistics can be copied or executed using the same time intervals as production.

Note: The DBMS_STATS package can be used to copy statistics between
databases.

• When to tune

It is best to tune during development, particularly data models and SQL code. When tuning after development in a post development requirements change phase, or in production, tuning should be done carefully, especially when it comes to changing application code.
If tuning cannot be done fully in the development cycle, which it probably cannot, take the following approach when tuning in production:

 Set performance targets.
 Use test environments.
 Use test environments that match production as closely as possible.
 Tune with care.

• What to tune

There are five general stages in tuning an Oracle production database:

1. Resolve obvious bottlenecks.
2. Examine basic configuration. Configuration is often inappropriate
as a result of initial Oracle installation and database creation.
3. Physical space can be wasted. Often a database can be better organized and become much smaller, even as much as one-tenth of current size, and sometimes more.
4. Poorly written SQL code in both applications and in the database
can only be counteracted partially by physical and configuration
tuning. SQL code tuning can help performance immensely but should be performed in development and testing
environments first.
5. Data model tuning may be required if SQL code tuning does not
resolve production performance problems. As with SQL code
tuning, data model tuning should be done in development and
testing environments first.

The easiest approach to tuning during production is Oracle Database
physical and configuration tuning. In general, tuning the data model is the
most expensive because it will require changes to SQL code, application
code and the production database.

• When to stop tuning

When to stop tuning depends on your situation and the skills you have.
If the company and your database size and activity grow, you will have performance problems anyway, but you can be better prepared. You could stop tuning when performance targets are met, depending on what needs to be tuned.

Note: Stop tuning when the bottleneck is removed.







• Tuning from development through to production

Tuning an Oracle database is not just tuning the database. As we can see
from the stages of tuning already discussed, tuning the database also
includes tuning the data model, SQL code, and thus applications. Tuning
Oracle Database is a combination of tuning both the Oracle database server and the applications accessing that database. There is a process of tuning an Oracle Database environment, involving a large number of separate steps or phases. These phases are repetitive but should be performed as closely as possible in sequential order.

The Steps in Tuning

Tuning is a set of distinct steps, which can be repeated in any order but are preferably completed in the order just described. The order and existence of these steps could vary depending on the application type, skills resources, time for development, and capacity of available hardware.


The steps in tuning an Oracle installation should more or less follow the
same path as in the development cycle of software development, namely
analyze, design, build, test, implement, and verify.

SQL Code Tuning

SQL coding requirements should fit the specifications of the data model
based on entities, relationships, and alternate indexing. SQL code can be in
both database-based PL/SQL coding and applications-embedded SQL
code.

What are the steps in tuning SQL code?

 Identify the worst-performing SQL code and tune only those SQL
statements.

 When the worst-performing code is tuned, SQL code can be tuned in
general if there is still a problem.

 Create, remove, and tune alternate indexing to suit SQL code performance
requirements without jeopardizing the data model. When matching SQL code to indexing, it is best to attempt to map SQL code filtering, sorting, and joining to primary and foreign index structures (referential integrity). Using already existing referential integrity keys will reduce the number of alternate indexes. More
indexes on tables will speed up data selection but will slow down data
updates, sometimes drastically. The fact is that if SQL code is not utilizing
referential integrity indexing, there may be a mismatch
between the data model and application requirements, or the data
model is simply inappropriate.
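
For example, a query that filters and sorts on a column not covered by the referential-integrity keys can be supported by a single alternate index. The employees table and column names below are hypothetical:

-- A query filtering and sorting on surname
SELECT employee_id, surname
FROM employees
WHERE surname LIKE 'S%'
ORDER BY surname;

-- One alternate (secondary) index covering both the filter and the sort
CREATE INDEX ix_employees_surname ON employees (surname);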

How are individual SQL code statements tuned in Oracle Database?

• Often the most effective method is to examine the SQL code by hand and make sure that the SQL statements conform to indexing in the database, namely in selecting, filtering, sorting, and grouping clauses. The more complex SQL code is, the more difficult it will be to tune in this manner. In fact, complex SQL code is sometimes beyond the capabilities of the optimizer.
• Use the Oracle Database EXPLAIN PLAN command to examine the optimizer's best execution path for SQL code. EXPLAIN PLAN will show where potential improvement can be made. A simple query plan is shown in the examples that follow.




1.1 The SELECT Statement

It is always faster to SELECT exact column names. Thus, using the
Employees schema:

SELECT division_id, name, city, state, country FROM division;
is faster than:

SELECT * FROM division;

Also, since there is a primary key index on the Division table:
SELECT division_id FROM division;

This query will only read the index file and should completely ignore
the table. Because the index contains only a single column and the table
contains five columns, reading the index is faster because there is less physical
space to traverse.

In order to prove these points, we need to use the EXPLAIN PLAN
command. Oracle Database’s EXPLAIN PLAN command allows a quick
peek into how the Oracle Database optimizer will execute a SQL statement,
displaying a query plan devised by the optimizer.

The EXPLAIN PLAN command creates entries in the PLAN_TABLE
for a SELECT statement. The resulting query plan for the following SELECT statement
is shown after it. Various versions of a query can be used to
retrieve rows from the PLAN_TABLE. In order to use the EXPLAIN PLAN command, statistics must be generated. Both the EXPLAIN PLAN command and statistics will
be covered in detail later:

EXPLAIN PLAN SET statement_id= 'TEST' FOR SELECT * FROM
division;



Query Cost Rows Bytes
SELECT STATEMENT on 1 10 460
TABLE ACCESS FULL on DIVISION 1 10 460
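
One possible form of the PLAN_TABLE query, producing output like the plans shown here, is sketched below. The column names are from the standard PLAN_TABLE; the documented DBMS_XPLAN.DISPLAY function is a simpler alternative:

SELECT LPAD(' ', depth) || operation || ' ' || options
       || ' on ' || object_name AS "Query",
       position AS "Pos", cost AS "Cost",
       cardinality AS "Rows", bytes AS "Bytes"
FROM plan_table
WHERE statement_id = 'TEST'
ORDER BY id;

-- Or simply:
-- SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY());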


One thing that is important to remember about the EXPLAIN PLAN
command is that it produces a listed sequence of events, a query plan.
Examine the following query and its query plan. The Pos or positional column
gives a rough guide to the sequence of events that the optimizer will
follow. In general, events will occur listed in the query plan from bottom to
top, where additional indenting denotes containment. In other words,
examining the following example output, the step SELECT STATEMENT
on occurs last:

EXPLAIN PLAN SET statement_id= 'TEST' FOR
SELECT di.name, de.name, prj.name, SUM(prj.budget-prj.cost)
FROM division di JOIN department de USING(division_id)
JOIN project prj USING(department_id)
GROUP BY di.name, de.name, prj.name
HAVING SUM(prj.budget-prj.cost) > 0;

Query                                  Pos  Cost   Rows   Bytes
SELECT STATEMENT on                     97    97    250   17500
 FILTER on                               1
  SORT GROUP BY on                       1    97    250   17500
   HASH JOIN on                          1    24  10000  700000
    TABLE ACCESS FULL on DIVISION        1     1     10     170
    HASH JOIN on                         2     3    100    3600
     TABLE ACCESS FULL on DEPARTMENT     1     1    100    1900
     TABLE ACCESS FULL on PROJECT        2    13  10000  340000

Now let’s use the Accounts schema, which has some very large tables.
Large tables show differences between the costs of data retrievals more easily.
The GeneralLedger table contains more than 700,000 rows at this point
in time.

In the next example, we explicitly retrieve all columns from the table
using column names, similar to using SELECT * FROM GeneralLedger.
Using the asterisk probably involves a small overhead in reinterpretation
into a list of all column names, but this is internal to Oracle Database, and
unless there are a huge number of these types of queries, this is probably
negligible:

EXPLAIN PLAN SET statement_id='TEST' FOR
SELECT generalledger_id,coa#,dr,cr,dte FROM generalledger;


The cost of retrieving 752,740 rows is 493, and the GeneralLedger table
is read in its entirety, indicated by TABLE ACCESS FULL:

Query Cost Rows Bytes
SELECT STATEMENT on 493 752740 19571240
TABLE ACCESS FULL on GENERALLEDGER 493 752740 19571240

Now we will retrieve only the primary key column from the GeneralLedger
table:

EXPLAIN PLAN SET statement_id='TEST' FOR
SELECT generalledger_id FROM generalledger;

For the same number of rows, the cost is reduced by more than half, to 217,
because the byte count is reduced: a form of full index scan reads
only the primary key index, never touching the table:

Query Cost Rows Bytes
SELECT STATEMENT on 217 752740 4516440
INDEX FAST FULL SCAN on XPKGENERALLEDGER 217 752740 4516440



1. What Is a Well-Tuned SQL Statement?

The key to an optimized application lies at its heart: the SQL statements themselves. Optimizing your SQL statements and reducing unnecessary overhead will result in an optimized application.

An optimized application together with a tuned and optimized RDBMS server provides a well-balanced, highly tuned system. Because users are mainly interested in response times, having both a well-tuned application and an optimized server is essential. To get an optimized application, you must start with optimized SQL statements.

So, what is a well-tuned, optimized SQL statement?

Here’s a list of some of the characteristics of a well-tuned SQL statement:

• Makes efficient use of RDBMS features. The well-tuned SQL statement uses indexes or hashing as available. If possible, the application should also take advantage of features such as array processing and discrete transactions.
• Uses PL/SQL to improve performance. PL/SQL allows blocks of statements to be sent to the Oracle server at one time. If you don’t use PL/SQL, you must send each statement individually.
• Uses stored procedures. By using stored procedures, you reduce the amount of data that must be sent across the network and increase the chance that the statement may already be parsed in the shared SQL area.
• Uses packages. Packages increase performance because the entire package is loaded when the package is called for the first time.
• Uses cached sequences to generate primary key values. This improves the performance of key generation and makes it unnecessary to generate the key in the application (see the sketch after this list).
• Makes efficient use of space. The SQL statement uses the VARCHAR2 data type instead of CHAR, when possible, to avoid unnecessary blank padding.
• Uses hints where necessary. A well-tuned SQL statement uses hints where appropriate to allow the programmer’s understanding of the SQL statement and the database design to override Oracle’s choice of optimization method.
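
As a sketch of the cached-sequence point above (the sequence and table names here are illustrative, not part of the schemas used elsewhere in this document):

CREATE SEQUENCE order_id_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 100;  -- pre-allocate 100 sequence values in memory

INSERT INTO orders (order_id, customer_name)
VALUES (order_id_seq.NEXTVAL, 'ACME');

Because values are handed out from the cache, most NEXTVAL calls avoid an update to the data dictionary.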

2. How to Identify Badly Formed SQL Statements

Badly tuned SQL statements tend to access the database in a very inefficient way, causing unnecessary amounts of data to be scanned and transferred across the network. Badly tuned statements can cause a well-tuned server to expend large amounts of unnecessary processing power and I/O resources.

You can identify badly tuned SQL statements with the Oracle EXPLAIN PLAN command and the SQL Trace facility (see “Using EXPLAIN PLAN and SQL Trace”).

Some of the attributes of a badly tuned SQL statement are listed here:

• Indexes are not used. If a query is not properly formed, you may bypass an index that could be used to reduce I/O and CPU processing.
• Hashing is bypassed. If a hashed cluster is improperly accessed, performance could be severely degraded.
• Unnecessary table scans are performed. If the SQL statement is improperly formed, you may be doing unnecessary table scans.
• Unnecessary amounts of data are returned. This is an undue burden not only on the network but on the application as well.

These attributes should alert you to the fact that the SQL statements are not optimally tuned for the task being done. If your SQL statement exhibits any of these characteristics, you should correct the statement.



3. SQL Statement Processing

By understanding how Oracle processes SQL statements, you can have a better understanding of how to optimize these statements. The following sections look at the SQL statement parsing process and how an execution plan is formed.

For each SQL statement that is executed, several steps occur (illustrated in the sketch after this list):

1. A cursor is created.
2. The statement is parsed, if it is not already in the shared pool.
3. Any query in the statement is processed.
4. Variables are bound.
5. The statement is executed.
6. If possible, the statement is parallelized.
7. Rows to be returned are fetched.
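
These steps can be made visible with the DBMS_SQL package, which exposes each phase as an explicit call. Here is a minimal sketch, assuming the division table from the Employees schema used earlier:

DECLARE
  c      INTEGER;
  status INTEGER;
  v_name VARCHAR2(30);
BEGIN
  c := DBMS_SQL.OPEN_CURSOR;                          -- 1. a cursor is created
  DBMS_SQL.PARSE(c,                                   -- 2./3. the statement is parsed
    'SELECT name FROM division WHERE division_id = :id',
    DBMS_SQL.NATIVE);
  DBMS_SQL.DEFINE_COLUMN(c, 1, v_name, 30);
  DBMS_SQL.BIND_VARIABLE(c, ':id', 1);                -- 4. variables are bound
  status := DBMS_SQL.EXECUTE(c);                      -- 5. the statement is executed
  WHILE DBMS_SQL.FETCH_ROWS(c) > 0 LOOP               -- 7. rows are fetched
    DBMS_SQL.COLUMN_VALUE(c, 1, v_name);
    DBMS_OUTPUT.PUT_LINE(v_name);                     -- requires SET SERVEROUTPUT ON
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/

Ordinary SQL and PL/SQL perform the same steps implicitly; reusing an already parsed cursor with bind variables is what keeps step 2 cheap.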

Use EXPLAIN PLAN To Analyze Statement Execution

By running EXPLAIN PLAN as part of the SQL Trace report, you can get a better idea of how the SQL statement is actually being executed by Oracle. This information (and the information supplied by SQL Trace) helps you judge the efficiency of the SQL statement.

Here is a list of some of the things to look for:

• Are the table’s indexes being used when they should be? If not, the statement may not be supplying the correct parameters in the WHERE clause.
• Are indexes being used when they should not be? In cases when you are selecting too much data, you may want to use the FULL hint to bypass the index (see the sketch after this list).
• What is the cost of the SQL statement? This value is given in the position column of the first row of the table returned by EXPLAIN PLAN.
• What is the amount of overhead incurred from SELECT or UPDATE operations?
• Is the statement being parallelized? You may have to provide a hint to effectively take advantage of the Parallel Query option.
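
As a sketch of the FULL hint, using the Accounts schema’s GeneralLedger table from earlier (the WHERE clause is illustrative):

SELECT /*+ FULL(gl) */ generalledger_id, coa#, dr, cr, dte
FROM generalledger gl
WHERE dte > SYSDATE - 365;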

SQL Tuning by using Statspack Report

This article provides guidance on how to identify the most resource intensive SQL statements in a database for tuning purposes.

The techniques described here can be used when initial diagnosis of a database performance problem suggests that further investigation needs to concentrate on finding and tuning the most resource intensive SQL statements according to specific criteria, e.g.


• using the most CPU
• performing the most disk I/O operations
• having the most executions
• taking the longest time to execute (elapsed time)

When to Look For Resource Intensive SQL Statements

In this section we discuss briefly the reasons that would lead us to look for the most resource intensive SQL statements while investigating a database performance problem.

• Response Time Analysis shows that the heaviest time consumer is CPU-related (e.g. CPU Other or CPU Parse time) or an I/O-related Wait Event (e.g. db file sequential read or db file scattered read):

This is the method followed by tools such as Statspack.
Extensive information is gathered from the database which shows in detail in what type of activities time is being spent.
It is based on the following equation:
Response Time = Service Time + Wait Time
where Service Time is time spent on the CPU
and Wait Time is the sum of time spent on Wait Events i.e. non-idle time spent waiting for an event to complete or for a resource to become available.

Service Time is comprised of time spent on the CPU for Parsing, Recursive CPU usage (for PLSQL and recursive SQL) and CPU used for execution of SQL statements (CPU Other).
Service Time = CPU Parse + CPU Recursive + CPU Other
The above components of Service Time can be found from the following statistics:
• Service Time from CPU used by this session
• CPU Parse from parse time cpu
• CPU Recursive from recursive cpu usage
From these, CPU Other can be calculated as follows:
CPU other = CPU used by this session - parse time cpu - recursive cpu usage
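
These statistics can be read directly from V$SYSSTAT; a minimal sketch:

SELECT name, value
FROM v$sysstat
WHERE name IN ('CPU used by this session',
               'parse time cpu',
               'recursive cpu usage');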
When CPU Other is a significant component of total Response Time the next step is to find the SQL statements that access the most blocks. Block accesses are also known as Buffer Gets and Logical I/Os.

Statspack lists such SQL statements in section SQL ordered by Gets.

When CPU Parse is a significant component of total Response Time the next step is to find the SQL statements that have the most parses.

Statspack lists such SQL statements in section SQL ordered by Parse Calls.

Note: this is available in Oracle9i and above.

Wait Time is the sum of time waited for non-idle Wait Events. These include I/O waits for reading blocks from disk as measured by the Wait Events db file sequential read for single-block reads and db file scattered read for multi-block reads.

When such Wait Events are found to be significant components of Response Time the next step is to find the SQL statements that read the most blocks from disk.

Statspack lists such SQL statements in section SQL ordered by Reads.

Example from Statspack: (pre-Oracle9i Release 2)

Here is an example where CPU Other was found to be a significant component of total Response Time:

Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
direct path read 4,232 10,827 52.01
db file scattered read 6,105 6,264 30.09
direct path write 1,992 3,268 15.70
control file parallel write 893 198 .95
db file parallel write 40 131 .63
-------------------------------------------------------------

Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session 358,806 130.5 12,372.6
parse time cpu 38 0.0 1.3
recursive cpu usage 186,636 67.9 6,435.7


From these figures we can obtain:
• Wait Time = 10,827 x 100% / 52.01% = 20,817 cs
• Service Time = 358,806 cs
• Response Time = 358,806 + 20,817 = 379,623 cs
• CPU Other = 358,806 - 38 - 186,636 = 172,132 cs
If we now calculate percentages for the top Response Time components:
• CPU Other = 45.34%
• CPU Recursive = 49.16%
• direct path read = 2.85%
• etc. etc.
CPU Other is a significant component of Response Time, so a possible next step is to look at the SQL ordered by Gets section.

Example from Statspack: (Oracle9i Release 2 & above)

Starting with Oracle9i Release 2, Statspack presents Service Time (obtained from the statistic CPU used by this session) together with the top Wait Events in a section called Top 5 Timed Events, which replaces the Top 5 Wait Events section of previous releases.

Here is an example:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
library cache lock 141 424 76.52
db file scattered read 3,367 96 17.40
CPU time 32 5.79
db file sequential read 161 1 .18
control file parallel write 245 0 .05
-------------------------------------------------------------

Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session 3,211 4.3 1,605.5
parse time cpu 59 0.1 29.5
recursive cpu usage 232 0.3 116.0

These figures give us directly the percentages of the Wait Events against the total Response Time so no further calculations are necessary to assess the impact of Wait Events. Service Time is presented as CPU time in this section and corresponds to the total CPU utilisation. We can drill down to the various components of Service Time as follows:
• CPU Other = 3,211 - 59 - 232 = 2,920 cs
• CPU Other = 2,920 / 3,211 x 5.79% = 5.26%
• CPU Parse = 59 / 3,211 x 5.79% = 0.11%
• CPU Recursive = 232 / 3,211 x 5.79% = 0.42%
In this example, the main performance problem was an issue related to the Library Cache.
The second most important time consumer was waiting for physical I/O due to multiblock reads (db file scattered read).
In this case a possible approach would be to look at the SQL ordered by Reads section of Statspack.
• Operating System resource analysis shows that excessive CPU or Disk I/O usage is caused by one or more Oracle processes:

If Operating System utilities show that most CPU usage is due to a small number of Oracle processes then typically they will be SQL Traced and have TKPROF reports generated from their tracefiles. Analysis of the TKPROF reports will lead to the most time consuming SQL statements which will be the subject of subsequent tuning.

If CPU usage is excessive but spread out fairly evenly among all the Oracle processes on the system then typically a tool such as Statspack will be used to perform Response Time analysis. The components of Service Time will be evaluated and if this shows CPU Other as being significant the next step will be to look at the SQL performing most block accesses in the SQL by Gets section of the Statspack report.

If Disk I/O is highly utilized on the system then a Response Time analysis using Statspack can be done to confirm that the Disk I/O usage is due to Oracle processes and I/O-related Wait Events can be expected to be significant components of overall Response Time. SQL statements performing the most physical reads can then be found in the section SQL ordered by Reads .

• Investigation of the performance of a batch job or other session which performs a number of SQL statements and has slow performance:

This will be done with SQL Trace and TKPROF as described above, and the most important SQL statements in terms of time spent will be identified for further tuning.

• Examination of statistics stored for SQL statements in V$ dynamic views:

Part of the information stored with a SQL statement (Shared Cursor) in the Library Cache portion of the Shared Pool is a set of statistics related to its execution. These are available through the V$SQLAREA dynamic view and can be queried to monitor the most expensive SQL statements in each category.

Top SQL Statements in Statspack Reports

Statspack reports generated from snapshots at level 5 (default) and above can contain SQL reporting sections for the following types of resource intensive SQL statements:
• SQL ordered by Gets
containing the SQL statements with most buffer accesses.

The default threshold value is 10,000 buffer gets.
Columns displayed are:
Buffer Gets, Executions, Gets per Exec, % Total, Hash Value (8.1.7)
Buffer Gets, Executions, Gets per Exec, %Total, CPU Time (s), Elapsed Time (s), Hash Value (9.0.1 & 9.2)

• SQL ordered by Reads

containing the SQL statements with most read I/Os from disk.

The default threshold value is 1,000 disk reads.
Columns displayed are:
Physical Reads, Executions, Reads per Exec, % Total, Hash Value (8.1.7)
Physical Reads, Executions, Reads per Exec, %Total, CPU Time (s), Elapsed Time (s), Hash Value (9.0.1 & 9.2)

• SQL ordered by Executions

containing the SQL statements executed the most times.

The default threshold value is 100 executions.
Columns displayed are:
Executions, Rows Processed, Rows per Exec, Hash Value (8.1.7)
Executions, Rows Processed, Rows per Exec, CPU per Exec (s), Elapsed per Exec (s), Hash Value (9.0.1 & 9.2)

• SQL ordered by Parse Calls (Oracle9i and above)

containing the SQL statements with most soft parse calls.

The default threshold value is 1,000 parse calls.
Columns displayed are:
Parse Calls, Executions, % Total Parses, Hash Value

• SQL ordered by Sharable Memory

containing the SQL statements occupying the most Library Cache memory.

The default threshold value is 1 MB (1,048,576 bytes).
Columns displayed are:
Sharable Mem (b), Executions, % Total, Hash Value (8.1.7, 9.0.1 & 9.2)
• SQL ordered by Version Count
containing the SQL statements with most versions (children cursors).

The default threshold value is 20 versions.
Columns displayed are:
Version Count, Executions, Hash Value (8.1.7, 9.0.1 & 9.2)

In the next few sections we look at examples of each type of Statspack SQL reporting section.

Finding SQL with High CPU Other Time in Statspack

Here are a few examples of SQL ordered by Gets sections from Statspack.

SQL ordered by Gets for DB: PROD Instance: prod Snaps: 8 -9
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100



Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
91,938,671 4,249 21,637.7 24.1 3503723562
SELECT "TEKSTI_IK", "U_VERSION", "ASIAKIR_IK", "KONTAKT_IK", "L
OMAKE_IK", "PVM", "TIEDON_LKA", "TEKSTI_VER", "TEKST_TILA", "VA
LK_AUKKO", "SUOR_PA_IK", "SUOR_PA_ID", "RESURSS_IK", "RESURSS_I

39,196,483 4,257 9,207.5 10.3 576408779
SELECT "KONTAKT_IK", "ASIAKAS_IK", "KAYNTIPVM", "KLO_JNRO", "KT
_PIKASEL", "RESURSS_ID", "SUOR_PA_IK", "SUOR_PA_ID", "TEKSTI_IK
", "KT_TEKSTI", "KONT_LAJI" FROM "TEI1000_VIEW" WHERE (kontakt_i

31,870,113 4,262 7,477.7 8.3 3583640853
SELECT "LAAKE_T_IK", "U_VERSION", "TAPAHTU_IK", "ASIAKAS_IK", "
TOT_DATIM", "LAAKE_IK", "KAUPPANIMI", "VALM_MUOTO", "VAHVUUS",
"PAKK_KOKO", "ANNOS", "INDIKAATIO", "PYSYVAIS", "VOIM_OLEVA", "S

30,567,449 4,259 7,177.1 8.0 1901268379
SELECT "LAB_TUL_IK", "U_VERSION", "TAPAHTU_IK", "ASIAKAS_IK", "
TOT_DATE", "TOT_TIME", "PALVELU_IK", "PALV_LYHEN", "SL_KOODI",
"OSAVAS_NRO", "OSAVAS_HOP", "VAST_TYYPP", "VAST_ARVO", "VAST_SIJ

Here the first SQL statement (with hash value 3503723562) alone accounts for 24.1% of all buffer gets in the instance.
The next 3 statements account for 10.3%, 8.3%, and 8.0%.
All 4 statements are executed approximately the same number of times (around 4,250 executions).
The first statement has more total Gets because it fetches more buffers each time it runs (Gets/Exec is 21,637.7 compared to 9,207.5, 7,477.7, and 7,177.1).
So it is the first candidate for tuning, as it has a greater impact on CPU Other time than the other 3 statements.
A better execution plan for this statement, resulting in fewer Gets/Exec, will reduce its CPU consumption.

Here is another example:

Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,200,593 1 3,200,593.0 52.2 397365298
select c.contract_no||'/'||c.contract_version, c.owner_ag
ency_id, a.agency_name, TERRITORYPACK.getsalescont
act(a.agency_id,'SALES','N'), c.start_date, LEAST(

404,024 88,481 4.6 6.6 985453413
select cv_dist_flag from applicant
where applicant_id = :p1

178,600 3,028 59.0 2.9 3013728279
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=


The first statement (hash value 397365298) generated 52.2% of the buffer gets in the instance with just 1 execution.
It has a high number of Gets/Exec 3,200,593.0 when compared to the others.
If this statement is not just a one-time query then it is a good candidate for tuning before it starts getting used more often.

The second statement (hash value 985453413) fetches on average fewer than 5 buffers per execution but appears high in the list because it is executed very frequently.
Initially it is not significant enough to warrant further investigation.
If, after tuning the first statement, CPU Other is still a significant component of overall Response Time and a new Statspack report shows the second statement still high on the list, then it can be looked at more closely.

Here is a similar case, this time it is from Oracle9i and we can see the new CPU & Elapsed Time columns:
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
16,177,286 1 16,177,286.0 12.1 209.00 297.91 342588312
select sf.satz_id, f1.feld, f2.feld feld20, substr(sf.fehler
,instr(sf.fehler,'geschrieben:')+13) feld30 --merkwürdigerweise
wird ab 31 Byte in eine neue Zeile geschrieben from d98_ditr_s

8,177,886 375,622 21.8 6.1 214.09 302.26 3544921891
SELECT /*+ RULE */ * from d98_schemaeintraege_view where pro
f_id = :b1 order by sortierung

The Statspack report does not always show the full text of the SQL statement. The Hash Value can be used to get this using the following query, provided the SQL statement is still in the Library Cache at the time the query is run:

SELECT sql_text
FROM v$sqltext
WHERE hash_value = &hash_value_of_SQL
ORDER BY piece;





Finding SQL Statements with High CPU Parse Time in Statspack

If CPU Parse time is a significant component of Response Time, it can be because cursors are repeatedly opened and closed every time they are executed, instead of being opened once, kept open for multiple executions, and closed only when they are no longer required.

The SQL ordered by Parse Calls section can help find such cursors; here is an example:

SQL ordered by Parse Calls for DB: DWI1 Instance: DWI1 Snaps: 1 -4
-> End Parse Calls Threshold: 1000

% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
13,632,745 13,632,745 98.90 3980186470
SELECT distinct TS002.C_JOB_DEP, TS002.C_JOB FROM TS002_JO
B_DEP TS002, TS001_JOB TS001 WHERE TS001.C_JOB = TS002.C_JO
B_DEP AND TS002.C_JOB = :b1 AND TS001.C_TIP_JOB !=

11,701 27,255,840 0.08 3615375148
COMMIT

8,192 8,192 0.06 238087931
select t.schema, t.name, t.flags, q.name from system.aq$_queue_t
ables t, sys.aq$_queue_table_affinities aft, system.aq$_que
ues q where aft.table_objno = t.objno and aft.owner_instance = :
1 and q.table_objno = t.objno and q.usage = 0 and b
itand(t.flags, 4+16+32+64+128+256) = 0 for update of t.name, aft

8,192 8,192 0.06 2780709284
select q_name, state, delay, expiration, rowid, msgid, dequeue
_msgid, chain_no, local_order_no, enq_time, enq_tid, step_no,
priority, exception_qschema, exception_queue, retry_count, corri
d, time_manager_info, sender_name, sender_address, sender_prot
ocol from SYS.AQ_SRVNTFN_TABLE where time_manager_info <= :1

The first SQL statement (hash value 3980186470) has had the most parses issued against it (98.90% of all parses in the instance). It is parsed every time it is executed (Parse Calls = Executions). Due to its frequency it is a prime candidate for reducing parse calls as described above.

Note: in excessive parsing situations, it is likely that there will be symptoms such as latch free waits on the Library Cache latches and possibly the Shared Pool latch, in addition to CPU Parse time.

Finding SQL Statements with High Disk I/O Waits in Statspack

Identifying the SQL statements responsible for most physical reads from the Statspack section SQL ordered by Reads follows the same approach as for SQL ordered by Gets.
% Total can be used to evaluate the impact of each statement.
Reads per Exec together with Executions can be used as a hint of whether the statement has a suboptimal execution plan causing many physical reads, or whether it appears in the list simply because it is executed often.

Possible reasons for a high Reads per Exec are unselective indexes that require large numbers of blocks to be fetched when those blocks are not cached well in the buffer cache, index fragmentation, a large Clustering Factor in the index, etc.
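
The Clustering Factor of an index can be checked in the data dictionary; a minimal sketch (the table name is illustrative):

SELECT index_name, clustering_factor, leaf_blocks
FROM user_indexes
WHERE table_name = 'GENERALLEDGER';

The closer the clustering factor is to the number of blocks in the table, rather than to the number of rows, the better the index clusters the data.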

Here is an example of the SQL ordered by Reads section:

SQL ordered by Reads for DB: PROD Instance: prod Snaps: 14 -16
-> End Disk Reads Threshold: 1000

Physical Reads Executions Reads per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
3,633,017 48 75,687.9 28.0 3954888792
SELECT "VAR_RES_IK", "RESURSS_IK", "PVM", "ALKAEN_KLO", "PAATT_
KLO", "AJ_STATUS", "ILMOITT", "HETU", "AS_NIMI", "KASITELTY", "T
OIMIPI_IK", "SUOR_PA_IK", "SUOR_PA_ID", "AIKATYY_IK", "AIKATYY_I

1,511,867 26 58,148.7 11.6 394819206
SELECT "VAR_RES_IK", "RESURSS_IK", "PVM", "ALKAEN_KLO", "PAATT_
KLO", "AJ_STATUS", "ILMOITT", "HETU", "AS_NIMI", "KASITELTY", "T
OIMIPI_IK", "SUOR_PA_IK", "SUOR_PA_ID", "AIKATYY_IK", "AIKATYY_I

762,101 6 127,016.8 5.9 4274178025
SELECT "LAB_TUL_IK", "PALV_LYHEN", "PALV_TILA", "OSAVAS_HOP", "
VAST_ARVO", "VAST_SIJ", "MITTAYKS", "POIKKEAVA", "PALVELU_IK", "
ASIAKAS_IK", "VERIRYHMA", "VV_MAARPVM", "TOT_DATE", "TOT_TIME",

512,142 3 170,714.0 3.9 1591034069
SELECT "LAB_TUL_IK", "PALV_LYHEN", "PALV_TILA", "OSAVAS_HOP", "
VAST_ARVO", "VAST_SIJ", "MITTAYKS", "POIKKEAVA", "PALVELU_IK", "
ASIAKAS_IK", "VERIRYHMA", "VV_MAARPVM", "TOT_DATE", "TOT_TIME",

The first two SQL statements are both executed more often than the others and cause more blocks to be read in from disk each time.
Together they account for almost 40% of read I/O. They are both prime candidates for further SQL tuning.


Evaluating SQL Statements with Most Executions in Statspack

Identifying those SQL statements that execute most often in a database and tuning them can improve performance even when such statements do not consume many resources in each execution.

This is because of two reasons:

1. The overall resource consumption of these statements across all their executions may be significant.
2. Such frequently executed statements are often part of OLTP-style short transactions. Tuning them can improve the performance of the database as experienced by users entering such transactions into the system.

Here is an example of Statspack SQL ordered by Executions:

SQL ordered by Executions for DB: DIN Instance: DIN Snaps: 263 -264
-> End Executions Threshold: 100

CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
404,871 133,781 0.3 0.00 0.00 3592950473
SELECT nvl(NVL(EIGTWX.WERT,EIGTW.WERT),eig.wert) wert,
1 sortier FROM D8_EIGENTYPWERTE_TEXT EIGTWX ,D98_
EIGENTYPWERTE EIGTW ,D98_EIGENSCHAFTEN EIG WHERE EIG.ANH

324,014 324,014 1.0 0.00 0.00 293077222
SELECT /*+ INDEX (D98_SCHEMAFORMATE SCHFMT_FORM_FKT_UNTER) */
upper(funktionsname), unterformat FROM
D98_SCHEMAFORMATE WHERE formatierung = :b1

183,276 183,276 1.0 0.00 0.00 66213032
INSERT INTO D98_Batch_Variablenwerte (ausf_id, b
atch_id, var_nr, wert) VALUES (:b4,
:b3, :b2, :b1)

114,224 8,936 0.1 0.00 0.00 1604001664
SELECT termin_ist FROM d98_termine WHERE a
nhang_id=:b2 AND terminart = :b1 AND aktiv = 'J'
order by termin_ist desc


It will frequently be the case that the timing columns in this section will show 0.00 for CPU and Elapsed time, as the most frequently executing SQL statements are likely to be quite fast.

Finding SQL Statements with High Shared Pool Consumption in Statspack

This can help with Shared Pool and Library Cache/Shared Pool latch tuning.

Statements with many versions (multiple child cursors under the same parent cursor, i.e. identical SQL text but different properties such as the owning schema of the objects, optimizer session settings, types and lengths of bind variables, etc.) are unsharable.

This means they can consume excessive memory resources in the Shared Pool and cause performance problems related to parsing (e.g. Library Cache and Shared Pool latch contention) or lookup time (e.g. Library Cache latch contention).

Statspack has 2 sections to help find such unsharable statements, SQL ordered by Sharable Memory and SQL ordered by Version Count:

SQL ordered by Sharable Memory for DB: DIN Instance: DIN Snaps: 263 -264
-> End Sharable Memory Threshold: 1048576

Sharable Mem (b) Executions % Total Hash Value
---------------- ------------ ------- ------------
3,445,680 1 0.1 2317124142
select /*+rule*/ decode(UPPER(:P_TITEL_SORT), 'NAME', fin_sort_ansp_name,
'FIRMA', fin_sort_ans_firma, 'KURZBEZ. FIRMA', fin_sort_ans_name,

3,386,429 76 0.1 3470710979
SELECT ID,ANHANGSTYP,OBJEKT_ID,PROF_ID,ANHANGSART,SPRACHE,AKTIV,
ANFANG,ENDE,SEITEN,ARCH_BEMERKUNG,DESKRIPTOR,EIGTW_ID,EIG_WERT,T

2,836,746 447 0.1 2274525714
SELECT ID,ANHANGSTYP,OBJEKT_ID,PROF_ID,ANHANGSART,SPRACHE,AKTIV,
ANFANG,ENDE,SEITEN,ARCH_BEMERKUNG,DESKRIPTOR,EIGTW_ID,EIG_WERT,T


SQL ordered by Version Count for DB: P97 Instance: P97 Snaps: 177 -180
-> End Version Count Threshold: 20

Version
Count Executions Hash Value
-------- ------------ ------------
26 36,228 3957083415
SELECT 1 FROM NK245_RECHTE A WHERE (NK245_NK211_ID BETWEEN :b
1 AND :b2 ) AND NK245_GEHEIMSTUFE >= :b3 AND NK245_NK209_ID = :
b4 AND NK245_NKXX_ID = :b5 AND NK245_ANZAHL > 0 AND (:b6 BETW

25 606 2916558383
UPDATE KVS.NK21_DOKUMENTE SET NK21_NK41_PDA_KUERZEL=:b1,NK21_NK4
1_PDA_NAME=:b2,NK21_STR_LEVEL=:b3,NK21_STR_NR=:b4,NK21_STR_TYP=:
b5,NK21_STRL1_NR=:b6,NK21_STRL1_TYP=:b7,NK21_STRL2_NR=:b8,NK21_S

24 1,602 1157590177
INSERT INTO NK297_NACHRICHTEN ( NK297_ID,NK297_TIMESTAMP,NK297_T
IMESTAMP_E,NK297_NK210_ABSENDER,NK297_NK270_USERNAME_ABS,NK297_T
ITEL,NK297_NTEXT,NK297_NK248_ID,NK297_NK244_ID,NK297_NK213_ID,NK
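
Outside of Statspack, unsharable statements can also be found directly in the Library Cache; a minimal sketch:

SELECT substr(sql_text, 1, 40) sql, version_count, sharable_mem
FROM v$sqlarea
WHERE version_count > 20
ORDER BY version_count DESC;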

Top SQL Statements in TKPROF Reports

TKPROF is a tool for producing formatted reports from SQL Trace files (also known as event 10046 tracefiles).

Each SQL (and PL/SQL) statement appearing in the tracefile has its information summarized and collected in one place in the TKPROF report.
This information includes: the number of parse, execute, and fetch operations; total CPU and elapsed times; buffers read in consistent (query) and current mode; blocks read from disk; row counts; Row Source operations; execution plans; library cache misses; parsing user id and optimizer mode; and, with TKPROF version 9.0.1 and above, a summary of wait events for tracefiles generated with event 10046 at levels 8 or 12.

A powerful feature available in TKPROF is the sort option, which allows ordering of the SQL statements in the report according to a number of criteria.

This enables the easy identification of the most resource-intensive SQL statements and helps target the SQL tuning process efficiently (see the sketch below).
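
For example, an invocation that sorts statements by descending fetch, execute, and parse elapsed time might look like this (the tracefile name is illustrative):

tkprof ora_12345.trc report.txt sort=fchela,exeela,prsela sys=no

The sys=no option suppresses recursive SYS statements from the report.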


Top SQL Statements in V$SQLAREA and V$SQL

The Oracle Server provides 3 dynamic views for querying execution statistics of all SQL statements currently cached in the Library Cache of the Shared Pool.

They are V$SQL, V$SQLAREA and V$SQLXS.

V$SQL has 1 row for each different version of a SQL statement.
This means that each child cursor has its own execution statistics.
It queries X$KGLCURSOR without a GROUP BY.

V$SQLAREA has 1 row for each different SQL string, i.e. each parent cursor.
This means that the statistics for all child cursors, i.e. different versions of this cursor, are grouped together. It is not based on V$SQL; instead it also queries X$KGLCURSOR, with a GROUP BY.

V$SQLXS is a simpler version of V$SQLAREA.
It is used mainly by Statspack for generating the SQL reporting sections.
It queries V$SQL with a GROUP BY.
It is defined in ORACLE_HOME/rdbms/admin/catsnmp.sql.

Most often, V$SQLAREA or V$SQLXS is used to find the top few SQL statements for a specific category of statistic.
Once these are identified, V$SQL can be used to drill down to see whether different versions of each statement exhibit similar statistics or whether some particular versions stand out.

V$SQL is less resource intensive than V$SQLAREA as it avoids the GROUP BY operation and causes less Library Cache latch contention.
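
A drill-down query on a statement found earlier might look like this (a minimal sketch; &hash_value is the hash value identified from V$SQLAREA):

SELECT child_number, buffer_gets, executions, disk_reads, parse_calls
FROM v$sql
WHERE hash_value = &hash_value
ORDER BY child_number;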

Here is a general form of a query on any of these views:

SELECT * FROM
(SELECT hash_value,address,substr(sql_text,1,40) sql,
[list of columns], [list of derived values]
FROM [V$SQL or V$SQLXS or V$SQLAREA]
WHERE [list of threshold conditions for columns]
ORDER BY [list of ordering columns] DESC)
WHERE rownum <= [number of top SQL statements];

Here is an example:

SELECT * FROM
(SELECT hash_value, address, substr (sql_text, 1,40) sql,
buffer_gets, executions, buffer_gets/executions "Gets/Exec"
FROM V$SQLAREA
WHERE buffer_gets > 100000 AND executions > 10
ORDER BY buffer_gets DESC)
WHERE rownum <= 10;





• [list of columns] = buffer_gets, executions
• [list of derived values] = buffer_gets/executions
• [list of threshold conditions for columns] = buffer_gets > 100000, executions > 10
• [list of ordering columns] = buffer_gets
• [number of top SQL statements] = 10

A list of columns on which to base similar queries is:
• buffer_gets
• disk_reads
• executions
• rows_processed
• sorts
• parse_calls
• sharable_mem
• version_count
• invalidations

All of these are available in all three views V$SQL, V$SQLAREA & V$SQLXS.
There are a number of other columns for queries of this kind, not so frequently used, which can be found by inspecting V$SQL and V$SQLAREA.


Top SQL Statements in Enterprise Manager SQL Analyze

SQL Analyze is a GUI tool available with the Oracle Enterprise Manager Tuning Pack that can be used to identify resource-intensive SQL statements and help tune them.

The queries for the heaviest SQL statements are done from the Top SQL menu, and a variety of search criteria are available:
• Buffer Cache Hit Ratio
• Buffer Gets
• Buffer Gets per Execution
• Buffer Gets per Row
• Disk Reads
• Disk Reads per Execution
• Executions
• Parse Calls
• Parse Calls per Execution
• Rows Processed
• Sharable Memory
• Sorts