Identifying High-Impact SQL in the Library Cache

Oracle Tips by Burleson Consulting

We begin our investigation into Oracle SQL tuning by viewing the SQL that currently resides in the library cache. Many people ask where they should start when tuning Oracle SQL. Tuning Oracle SQL is like a fishing expedition: you must first “fish” in the Oracle library cache to extract the SQL statements, and then rank the statements by their level of activity.

Oracle makes it quite easy to locate frequently executed SQL statements. The SQL statements in the v$sqlarea view can be rank ordered by any of several columns:

  • rows_processed: Queries that process a large number of rows will have high I/O and may also have an impact on the TEMP tablespace.

  • buffer_gets: High buffer gets may indicate a resource-intensive query.

  • disk_reads: High disk reads indicate a query that is causing excessive I/O.

  • sorts: Sorts can be a huge slowdown, especially if the sorts are being done on disk in the TEMP tablespace.

  • executions: The more frequently executed SQL statements should be tuned first, since they will have the greatest impact on overall performance.

You can get lists of SQL statements from the v$sqlarea view or stats$sql_summary table in descending order on any of these variables.
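As a quick illustration, the following is a minimal sketch of such a query, run directly against v$sqlarea. The choice of disk_reads as the ranking column and the top-10 cutoff are assumptions for the example; substitute buffer_gets, executions, rows_processed, or sorts as the sort column to rank by a different measure.

```sql
-- Hypothetical sketch: rank library-cache SQL by disk reads and keep the
-- top 10 rows. The inline view is ordered first, then rownum limits it.
select *
from
   (select
       sql_text,
       executions,
       disk_reads,
       buffer_gets,
       rows_processed,
       sorts
    from
       v$sqlarea
    order by
       disk_reads desc)
where
   rownum < 11;
```

Note that the order by must sit inside the inline view; applying rownum before the sort would return ten arbitrary rows rather than the top ten.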

The executions column of the v$sqlarea view and the stats$sql_summary table can be used to locate the most frequently used SQL. When fishing for SQL, you can use a tool to display the SQL in the library cache. The next section will cover two ways to extract high-impact SQL:

  • Extract SQL from stats$sql_summary with a STATSPACK SQL top-10 report.

  • Extract and analyze SQL from the library cache using access.sql.

Please note that both techniques can be used with either the historical STATSPACK stats$sql_summary table or the v$sqlarea view, since the two share identical column names.

A SQL Top-10 Report

What follows is an easy-to-use Korn shell script that can be run against your STATSPACK tables to identify high-use SQL statements. If you are not using STATSPACK, you can query the v$sql view to get the same information since instance start time.

rpt_sql.ksh

#!/bin/ksh
 
# First, set the Oracle environment for this instance.
ORACLE_SID=readtest
export ORACLE_SID
ORACLE_HOME=`cat /var/opt/oracle/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

echo "How many days back to search?"
read days_back

echo executions
echo loads
echo parse_calls
echo disk_reads
echo buffer_gets
echo rows_processed
echo sorts
echo
echo "Enter sort key:"
read sortkey
 
$ORACLE_HOME/bin/sqlplus perfstat/perfstat<<!

 
set array 1;
set lines 80;
set wrap on;
set pages 999;
set echo off;
set feedback off;

column mydate      format a8
column exec        format 9,999,999
column loads       format 999,999
column parse       format 999,999
column reads       format 9,999,999
column gets        format 9,999,999
column rows_proc   format 9,999,999
column sorts       format 999,999

drop table temp1;
create table temp1 as
   select min(snap_id) min_snap
   from stats\$snapshot where snap_time > sysdate-$days_back;

drop table temp2;

create table temp2 as
select
   to_char(snap_time,'dd Mon HH24:mi:ss') mydate,
   executions                             exec,
   loads                                  loads,
   parse_calls                            parse,
   disk_reads                             reads,
   buffer_gets                            gets,
   rows_processed                         rows_proc,
   sorts                                  sorts,
   sql_text
from
   perfstat.stats\$sql_summary sql,
   perfstat.stats\$snapshot     sn
where
   sql.snap_id >
   (select min_snap from temp1)
and
   sql.snap_id = sn.snap_id
order by $sortkey desc
;

select * from temp2 where rownum < 11;

exit
!

Here is the listing from running this valuable script. Note that the DBA is prompted for the number of days back to search and for the sort key used to rank the SQL.

rpt_sql.ksh

How many days back to search?
7
executions
loads
parse_calls
disk_reads
buffer_gets
rows_processed
sorts

Enter sort key:
disk_reads

SQL*Plus: Release 8.1.6.0.0 - Production on Thu Dec 14 09:14:46 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.6.1.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.6.1.0 - 64bit Production

MYDATE               EXEC    LOADS    PARSE      READS       GETS  ROWS_PROC    SORTS
SQL_TEXT
--------------- ---------- -------- -------- ---------- ---------- ---------- --------
11 Dec 14:00:09        866        1      866    246,877  2,795,211        865        0
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken
BOOLEAN := FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken
THEN :b := 1; ELSE :b := 0; END IF; END;

11 Dec 11:00:29        863        1      863    245,768  2,784,834        862        0
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken
BOOLEAN := FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken
THEN :b := 1; ELSE :b := 0; END IF; END;

11 Dec 14:00:09        866        1      866    245,325    597,647    129,993      866
INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID,DBID,INSTANCE_NUMBER,SQL_TEXT,
SHARABLE_MEM,SORTS,MODULE,LOADED_VERSIONS,EXECUTIONS,LOADS,INVALIDATIONS,
PARSE_CALLS,DISK_READS,BUFFER_GETS,ROWS_PROCESSED,ADDRESS,HASH_VALUE,
VERSION_COUNT )  SELECT MIN(:b1),MIN(:b2),MIN(:b3),MIN(SQL_TEXT),
SUM(SHARABLE_MEM),SUM(SORTS),MIN(MODULE),SUM(LOADED_VERSIONS),
SUM(EXECUTIONS),SUM(LOADS),SUM(INVALIDATIONS),SUM(PARSE_CALLS),
SUM(DISK_READS),SUM(BUFFER_GETS),SUM(ROWS_PROCESSED),ADDRESS,HASH_VALUE,
COUNT(1)  FROM V$SQL  GROUP BY ADDRESS,HASH_VALUE  HAVING
(SUM(BUFFER_GETS) > :b4  OR SUM(DISK_READS) > :b5  OR
SUM(PARSE_CALLS) > :b6  OR SUM(EXECUTIONS) > :b7 )

11 Dec 09:00:24        861        1      861    245,029  2,778,052        860        0
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken
BOOLEAN := FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken
THEN :b := 1; ELSE :b := 0; END IF; END;

11 Dec 12:00:02        864        1      864    244,587    595,861    129,605      864
INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID,DBID,INSTANCE_NUMBER,SQL_TEXT,
SHARABLE_MEM,SORTS,MODULE,LOADED_VERSIONS,EXECUTIONS,LOADS,INVALIDATIONS,
PARSE_CALLS,DISK_READS,BUFFER_GETS,ROWS_PROCESSED,ADDRESS,HASH_VALUE,
VERSION_COUNT )  SELECT MIN(:b1),MIN(:b2),MIN(:b3),MIN(SQL_TEXT),
SUM(SHARABLE_MEM),SUM(SORTS),MIN(MODULE),SUM(LOADED_VERSIONS),
SUM(EXECUTIONS),SUM(LOADS),SUM(INVALIDATIONS),SUM(PARSE_CALLS),
SUM(DISK_READS),SUM(BUFFER_GETS),SUM(ROWS_PROCESSED),ADDRESS,HASH_VALUE,
COUNT(1)  FROM V$SQL  GROUP BY ADDRESS,HASH_VALUE  HAVING
(SUM(BUFFER_GETS) > :b4  OR SUM(DISK_READS) > :b5  OR
SUM(PARSE_CALLS) > :b6  OR SUM(EXECUTIONS) > :b7 )

It is interesting to note in the preceding output that we see STATSPACK's own insert statement into the stats$sql_summary table.
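Since STATSPACK's own recursive SQL ranks this high, you may want to exclude it so the report surfaces only application SQL. The following is a hedged sketch of one way to do so with sql_text filters; the two LIKE patterns are assumptions based on the statements seen above, so adjust them to whatever instrumentation SQL appears in your own listings.

```sql
-- Hypothetical sketch: exclude STATSPACK's own statements from the ranking.
select *
from
   (select
       sql_text,
       executions,
       disk_reads,
       buffer_gets
    from
       v$sqlarea
    where
       sql_text not like '%STATS$SQL_SUMMARY%'   -- assumed pattern
    and
       sql_text not like '%statspack.snap%'      -- assumed pattern
    order by
       disk_reads desc)
where
   rownum < 11;
```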

Next, let’s look at a technique that is probably the most valuable script in this book.


This is an excerpt from "Oracle High-Performance SQL Tuning" by Donald K. Burleson, published by Oracle Press.


Copyright © 1996 -  2013 by Burleson. All rights reserved.

Oracle® is the registered trademark of Oracle Corporation.


