SQL Snapshot Thresholds

Oracle Tips by Burleson Consulting

The snapshot thresholds apply only to the SQL statements that are captured in the stats$sql_summary table. The stats$sql_summary table can easily become the largest table in the STATSPACK schema because each snapshot might collect several hundred rows, one for each SQL statement that was in the library cache at the time of the snapshot.

The thresholds are stored in the stats$statspack_parameter table. Let’s take a look at each threshold:

  • executions_th: The number of executions of the SQL statement (default 100).

  • disk_reads_th: The number of disk reads performed by the SQL statement (default 1,000).

  • parse_calls_th: The number of parse calls performed by the SQL statement (default 1,000).

  • buffer_gets_th: The number of buffer gets performed by the SQL statement (default 10,000).

It is important to understand that each SQL statement is evaluated against all of these thresholds, and the statement is included in the stats$sql_summary table if any one of the thresholds is exceeded. In other words, the thresholds are not ANDed together as we might expect; they are ORed together, so exceeding any single threshold causes a row to be captured.
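The OR'ed capture test can be sketched as a small shell function. This is a hypothetical illustration using the default threshold values, not STATSPACK's own code; the real test is applied in SQL by the snapshot's INSERT ... HAVING clause against v$sql, which appears in the report output later in this section.

```shell
# Hypothetical sketch of the OR'ed capture test, using the default thresholds.
# A statement is captured if ANY single threshold is exceeded.
capture_sql() {
  execs=$1; reads=$2; parses=$3; gets=$4
  if [ "$execs" -gt 100 ] || [ "$reads" -gt 1000 ] || \
     [ "$parses" -gt 1000 ] || [ "$gets" -gt 10000 ]; then
    echo captured
  else
    echo skipped
  fi
}

capture_sql 5 2000 1 1    # exceeds only disk_reads_th; prints "captured"
```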

The main purpose of these thresholds is to control the rapid growth of the stats$sql_summary table that occurs when a highly active database has hundreds of SQL statements in the library cache. In the next chapter, we will discuss clean-up strategies for removing unwanted snapshots from the database.
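To put the growth in perspective, here is a quick back-of-the-envelope estimate. The figures are illustrative assumptions, not from the text: at a few hundred captured statements per hourly snapshot, the table gains thousands of rows per day.

```shell
# Illustrative growth estimate for stats$sql_summary (assumed figures).
rows_per_snap=400    # SQL rows captured per snapshot (assumed)
snaps_per_day=24     # hourly snapshots (assumed)
echo "$(( rows_per_snap * snaps_per_day )) rows added per day"   # prints "9600 rows added per day"
```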

You can change the threshold defaults by calling the statspack.modify_statspack_parameter procedure. In the example that follows, we raise the default thresholds for buffer gets and disk reads to 100,000. All subsequent snapshots will then capture only SQL that exceeds 100,000 buffer gets or 100,000 disk reads.

     SQL> execute statspack.modify_statspack_parameter -
            (i_buffer_gets_th=>100000, i_disk_reads_th=>100000);

STATSPACK SQL Top-10 Report

What follows is an easy-to-use Korn shell script that can be run against the STATSPACK tables to identify high-use SQL statements.

rpt_sql.ksh

#!/bin/ksh

# First, we must set the environment . . . .
ORACLE_SID=readtest
export ORACLE_SID
ORACLE_HOME=`cat /var/opt/oracle/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

echo "How many days back to search?"
read days_back

echo executions
echo loads
echo parse_calls
echo disk_reads
echo buffer_gets
echo rows_processed
echo sorts
echo
echo "Enter sort key:"
read sortkey

$ORACLE_HOME/bin/sqlplus perfstat/perfstat<<!


set array 1;
set lines 80;
set wrap on;
set pages 999;
set echo off;
set feedback off;

column mydate      format a8
column exec        format 9,999,999
column loads       format 999,999
column parse       format 999,999
column reads       format 9,999,999
column gets        format 9,999,999
column rows_proc   format 9,999,999
column inval       format 9,999
column sorts       format 999,999

drop table temp1;
create table temp1 as
   select min(snap_id) min_snap
   from stats\$snapshot where snap_time > sysdate-$days_back;

drop table temp2;

create table temp2 as
select
   to_char(snap_time,'dd Mon HH24:mi:ss') mydate,
   executions                             exec,
   loads                                  loads,
   parse_calls                            parse,
   disk_reads                             reads,
   buffer_gets                            gets,
   rows_processed                         rows_proc,
   sorts                                  sorts,
   sql_text
from
   perfstat.stats\$sql_summary sql,
   perfstat.stats\$snapshot     sn
where
   sql.snap_id >
   (select min_snap from temp1)
and
   sql.snap_id = sn.snap_id
order by $sortkey desc
;

select * from temp2 where rownum < 11;

exit
!

Here is the output from running this script. Note that the Remote DBA is prompted for how many days back to search and for the sort key used to rank the SQL.

rpt_sql.ksh

How many days back to search?
7
executions
loads
parse_calls
disk_reads
buffer_gets
rows_processed
sorts

Enter sort key:
disk_reads

SQL*Plus: Release 8.1.6.0.0 - Production on Thu Dec 14 09:14:46 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.6.1.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.6.1.0 - 64bit Production

MYDATE         EXEC    LOADS    PARSE      READS       GETS  ROWS_PROC    SORTS
-------- ---------- -------- -------- ---------- ---------- ---------- --------
SQL_TEXT
--------------------------------------------------------------------------------
11 Dec 1        866        1      866    246,877  2,795,211        865        0
4:00:09
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;  broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;

11 Dec 1        863        1      863    245,768  2,784,834        862        0
1:00:29
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;  broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;

11 Dec 1        866        1      866    245,325    597,647    129,993      866
4:00:09
INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID,DBID,INSTANCE_NUMBER,SQL_TEXT,SHARABLE_M
EM,SORTS,MODULE,LOADED_VERSIONS,EXECUTIONS,LOADS,INVALIDATIONS,PARSE_CALLS,DISK_
READS,BUFFER_GETS,ROWS_PROCESSED,ADDRESS,HASH_VALUE,VERSION_COUNT )  SELECT MIN(
:b1),MIN(:b2),MIN(:b3),MIN(SQL_TEXT),SUM(SHARABLE_MEM),SUM(SORTS),MIN(MODULE),SU
M(LOADED_VERSIONS),SUM(EXECUTIONS),SUM(LOADS),SUM(INVALIDATIONS),SUM(PARSE_CALLS
),SUM(DISK_READS),SUM(BUFFER_GETS),SUM(ROWS_PROCESSED),ADDRESS,HASH_VALUE,COUNT(
1)   FROM V$SQL  GROUP BY ADDRESS,HASH_VALUE  HAVING (SUM(BUFFER_GETS) > :b4  OR
 SUM(DISK_READS) > :b5  OR SUM(PARSE_CALLS) > :b6  OR SUM(EXECUTIONS) > :b7 )

11 Dec 0        861        1      861    245,029  2,778,052        860        0
9:00:24
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;  broken BOOLEAN :
= FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELS
E :b := 0; END IF; END;

11 Dec 1        864        1      864    244,587    595,861    129,605      864
2:00:02
INSERT INTO STATS$SQL_SUMMARY ( SNAP_ID,DBID,INSTANCE_NUMBER,SQL_TEXT,SHARABLE_M
EM,SORTS,MODULE,LOADED_VERSIONS,EXECUTIONS,LOADS,INVALIDATIONS,PARSE_CALLS,DISK_
READS,BUFFER_GETS,ROWS_PROCESSED,ADDRESS,HASH_VALUE,VERSION_COUNT )  SELECT MIN(
:b1),MIN(:b2),MIN(:b3),MIN(SQL_TEXT),SUM(SHARABLE_MEM),SUM(SORTS),MIN(MODULE),SU
M(LOADED_VERSIONS),SUM(EXECUTIONS),SUM(LOADS),SUM(INVALIDATIONS),SUM(PARSE_CALLS
),SUM(DISK_READS),SUM(BUFFER_GETS),SUM(ROWS_PROCESSED),ADDRESS,HASH_VALUE,COUNT(
1)   FROM V$SQL  GROUP BY ADDRESS,HASH_VALUE  HAVING (SUM(BUFFER_GETS) > :b4  OR
 SUM(DISK_READS) > :b5  OR SUM(PARSE_CALLS) > :b6  OR SUM(EXECUTIONS) > :b7 )

It is interesting to note that the STATSPACK insert statement for the stats$sql_summary table itself appears in the preceding output.

Conclusion

The STATSPACK utility can be very useful for extracting historical SQL statements and tuning the SQL. Because you may not have access to the original source of the SQL statement, optimizer plan stability can be used to tune the SQL without changing the original statement. The main points in this chapter include these:

  • The stats$sql_summary table collects all SQL statements that meet any one of the STATSPACK threshold values.

  • The access.sql script can be easily modified to extract historical SQL by specifying the stats$sql_summary table. You can then run access_reports.sql to get a picture of all full-table scans and table sizes.

  • Once a table that is experiencing large-table full-table scans is located, you can extract the SQL statements from the stats$sql_summary table.

Next, let's conclude this text by taking a look at how to tune SQL statements that utilize built-in functions.


This is an excerpt from "Oracle High-Performance SQL Tuning" by Donald K. Burleson, published by Oracle Press.


Copyright © 1996 -  2013 by Burleson. All rights reserved.

Oracle® is the registered trademark of Oracle Corporation.


