Compare 9i Statspack & 10g AWR Views

Posted by Sagar Patil
Both repositories store their historical data as snapshots. The 9i Statspack (STATS$) tables are listed below alongside the 10g AWR workload-repository tables: the WRH$ snapshot-history tables and the WRI$ internal/advisor tables.

| Statspack (STATS$) | AWR (WRH$) | AWR (WRI$) |
|---|---|---|
| STATS$DATABASE_INSTANCE | | |
| STATS$IDLE_EVENT | | |
| STATS$LEVEL_DESCRIPTION | | |
| STATS$ROLLSTAT | | |
| STATS$SESSION_EVENT | | |
| STATS$SNAPSHOT | | |
| STATS$SQL_PLAN_USAGE | | |
| STATS$SQL_SUMMARY | WRH$_ACTIVE_SESSION_HISTORY | WRI$_ADV_ACTIONS |
| STATS$STATSPACK_PARAMETER | WRH$_ACTIVE_SESSION_HISTORY_BL | WRI$_ADV_ASA_RECO_DATA |
| STATS$BG_EVENT_SUMMARY | WRH$_BG_EVENT_SUMMARY | WRI$_ADV_DEFINITIONS |
| | WRH$_BUFFERED_QUEUES | WRI$_ADV_DEF_PARAMETERS |
| | WRH$_BUFFERED_SUBSCRIBERS | WRI$_ADV_DIRECTIVES |
| STATS$BUFFER_POOL_STATISTICS | WRH$_BUFFER_POOL_STATISTICS | WRI$_ADV_FINDINGS |
| | WRH$_COMP_IOSTAT | WRI$_ADV_JOURNAL |
| | WRH$_CR_BLOCK_SERVER | WRI$_ADV_MESSAGE_GROUPS |
| | WRH$_CURRENT_BLOCK_SERVER | WRI$_ADV_OBJECTS |
| | WRH$_DATAFILE | WRI$_ADV_PARAMETERS |
| STATS$DB_CACHE_ADVICE | WRH$_DB_CACHE_ADVICE | WRI$_ADV_RATIONALE |
| | WRH$_DB_CACHE_ADVICE_BL | WRI$_ADV_RECOMMENDATIONS |
| STATS$DLM_MISC | WRH$_DLM_MISC | WRI$_ADV_REC_ACTIONS |
| | WRH$_DLM_MISC_BL | WRI$_ADV_SQLA_FAKE_REG |
| STATS$ENQUEUE_STAT | WRH$_ENQUEUE_STAT | WRI$_ADV_SQLA_MAP |
| | WRH$_EVENT_NAME | WRI$_ADV_SQLA_STMTS |
| | WRH$_FILEMETRIC_HISTORY | WRI$_ADV_SQLA_TMP |
| STATS$FILESTATXS | WRH$_FILESTATXS | WRI$_ADV_SQLT_BINDS |
| | WRH$_FILESTATXS_BL | WRI$_ADV_SQLT_PLANS |
| STATS$INSTANCE_RECOVERY | WRH$_INSTANCE_RECOVERY | WRI$_ADV_SQLT_RTN_PLAN |
| | WRH$_INST_CACHE_TRANSFER | WRI$_ADV_SQLT_STATISTICS |
| | WRH$_INST_CACHE_TRANSFER_BL | WRI$_ADV_SQLW_COLVOL |
| | WRH$_JAVA_POOL_ADVICE | WRI$_ADV_SQLW_STMTS |
| STATS$LATCH | WRH$_LATCH | WRI$_ADV_SQLW_SUM |
| | WRH$_LATCH_BL | WRI$_ADV_SQLW_TABLES |
| STATS$LATCH_CHILDREN | WRH$_LATCH_CHILDREN | WRI$_ADV_SQLW_TABVOL |
| | WRH$_LATCH_CHILDREN_BL | WRI$_ADV_TASKS |
| STATS$LATCH_MISSES_SUMMARY | WRH$_LATCH_MISSES_SUMMARY | WRI$_ADV_USAGE |
| | WRH$_LATCH_MISSES_SUMMARY_BL | WRI$_AGGREGATION_ENABLED |
| | WRH$_LATCH_NAME | WRI$_ALERT_HISTORY |
| STATS$LATCH_PARENT | WRH$_LATCH_PARENT | WRI$_ALERT_OUTSTANDING |
| | WRH$_LATCH_PARENT_BL | WRI$_ALERT_THRESHOLD |
| STATS$LIBRARYCACHE | WRH$_LIBRARYCACHE | WRI$_ALERT_THRESHOLD_LOG |
| | WRH$_LOG | WRI$_DBU_CPU_USAGE |
| | WRH$_METRIC_NAME | WRI$_DBU_CPU_USAGE_SAMPLE |
| | WRH$_MTTR_TARGET_ADVICE | WRI$_DBU_FEATURE_METADATA |
| | WRH$_OPTIMIZER_ENV | WRI$_DBU_FEATURE_USAGE |
| | WRH$_OSSTAT | WRI$_DBU_HIGH_WATER_MARK |
| | WRH$_OSSTAT_BL | WRI$_DBU_HWM_METADATA |
| | WRH$_OSSTAT_NAME | WRI$_DBU_USAGE_SAMPLE |
| STATS$PARAMETER | WRH$_PARAMETER | WRI$_OPTSTAT_AUX_HISTORY |
| | WRH$_PARAMETER_BL | WRI$_OPTSTAT_HISTGRM_HISTORY |
| | WRH$_PARAMETER_NAME | WRI$_OPTSTAT_HISTHEAD_HISTORY |
| STATS$PGASTAT | WRH$_PGASTAT | WRI$_OPTSTAT_IND_HISTORY |
| STATS$PGA_TARGET_ADVICE | WRH$_PGA_TARGET_ADVICE | WRI$_OPTSTAT_OPR |
| | WRH$_PROCESS_MEMORY_SUMMARY | WRI$_OPTSTAT_TAB_HISTORY |
| STATS$RESOURCE_LIMIT | WRH$_RESOURCE_LIMIT | WRI$_SCH_CONTROL |
| STATS$ROWCACHE_SUMMARY | WRH$_ROWCACHE_SUMMARY | WRI$_SCH_VOTES |
| | WRH$_ROWCACHE_SUMMARY_BL | WRI$_SEGADV_CNTRLTAB |
| | WRH$_RULE_SET | WRI$_SEGADV_OBJLIST |
| | WRH$_SEG_STAT | WRI$_SQLSET_BINDS |
| | WRH$_SEG_STAT_BL | WRI$_SQLSET_DEFINITIONS |
| STATS$SEG_STAT_OBJ | WRH$_SEG_STAT_OBJ | WRI$_SQLSET_MASK |
| | WRH$_SERVICE_NAME | WRI$_SQLSET_PLANS |
| | WRH$_SERVICE_STAT | WRI$_SQLSET_PLANS_TOCAP |
| | WRH$_SERVICE_STAT_BL | WRI$_SQLSET_PLAN_LINES |
| | WRH$_SERVICE_WAIT_CLASS | WRI$_SQLSET_REFERENCES |
| | WRH$_SERVICE_WAIT_CLASS_BL | WRI$_SQLSET_STATEMENTS |
| | WRH$_SESSMETRIC_HISTORY | WRI$_SQLSET_STATISTICS |
| STATS$SESSTAT | WRH$_SESS_TIME_STATS | WRI$_SQLSET_WORKSPACE |
| STATS$SGA | WRH$_SGA | WRI$_TRACING_ENABLED |
| STATS$SEG_STAT, STATS$SGASTAT | WRH$_SGASTAT | |
| | WRH$_SGASTAT_BL | |
| | WRH$_SGA_TARGET_ADVICE | |
| STATS$SHARED_POOL_ADVICE | WRH$_SHARED_POOL_ADVICE | |
| STATS$SQL_STATISTICS | WRH$_SQLSTAT | |
| | WRH$_SQLSTAT_BL | |
| STATS$SQLTEXT | WRH$_SQLTEXT | |
| | WRH$_SQL_BIND_METADATA | |
| STATS$SQL_PLAN | WRH$_SQL_PLAN | |
| | WRH$_SQL_SUMMARY | |
| STATS$SQL_WORKAREA_HISTOGRAM | WRH$_SQL_WORKAREA_HISTOGRAM | |
| | WRH$_STAT_NAME | |
| | WRH$_STREAMS_APPLY_SUM | |
| | WRH$_STREAMS_CAPTURE | |
| | WRH$_STREAMS_POOL_ADVICE | |
| | WRH$_SYSMETRIC_HISTORY | |
| | WRH$_SYSMETRIC_SUMMARY | |
| STATS$SYSSTAT | WRH$_SYSSTAT | |
| | WRH$_SYSSTAT_BL | |
| STATS$SYSTEM_EVENT | WRH$_SYSTEM_EVENT | |
| | WRH$_SYSTEM_EVENT_BL | |
| | WRH$_SYS_TIME_MODEL | |
| | WRH$_SYS_TIME_MODEL_BL | |
| | WRH$_TABLESPACE_SPACE_USAGE | |
| | WRH$_TABLESPACE_STAT | |
| | WRH$_TABLESPACE_STAT_BL | |
| | WRH$_TEMPFILE | |
| STATS$TEMPSTATXS | WRH$_TEMPSTATXS | |
| | WRH$_THREAD | |
| STATS$UNDOSTAT | WRH$_UNDOSTAT | |
| | WRH$_WAITCLASSMETRIC_HISTORY | |
| STATS$WAITSTAT | WRH$_WAITSTAT | |
| | WRH$_WAITSTAT_BL | |
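
In 10g you rarely query the WRH$ base tables directly; Oracle exposes them (together with the WRM$ snapshot metadata) through the DBA_HIST_* views, much as the Statspack report reads the STATS$ tables. As a quick illustration, the snapshot list kept in STATS$SNAPSHOT has its AWR counterpart in DBA_HIST_SNAPSHOT:

-- 9i Statspack: list recent snapshots
SELECT snap_id, snap_time, snap_level
  FROM perfstat.stats$snapshot
 ORDER BY snap_id;

-- 10g AWR: the same information through the DBA_HIST layer
SELECT snap_id, begin_interval_time, end_interval_time
  FROM dba_hist_snapshot
 ORDER BY snap_id;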

Statspack Scripts

Posted by Sagar Patil

Installation
The following scripts must be run as a user with SYSDBA privilege (a sample installation session follows these lists):
* SPCREATE.SQL: Creates the entire Statspack environment (calls SPCUSR.SQL, SPCTAB.SQL, SPCPKG.SQL)
* SPDROP.SQL: Drops the entire Statspack environment (calls SPDTAB.SQL, SPDUSR.SQL)

The following scripts are run as a user with SYSDBA privilege by the preceding calling scripts:
* SPDTAB.SQL: Drops Statspack tables
* SPDUSR.SQL: Drops the Statspack user (PERFSTAT)

The following scripts are run as PERFSTAT by the preceding calling scripts:
* SPCUSR.SQL: Creates the Statspack user (PERFSTAT)
* SPCTAB.SQL: Creates Statspack tables
* SPCPKG.SQL: Creates the Statspack package
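
For example, a typical installation from SQL*Plus looks like this; spcreate.sql prompts for the PERFSTAT password and for the default and temporary tablespaces, and the @? path assumes the scripts live under $ORACLE_HOME/rdbms/admin:

sqlplus "/ as sysdba"
SQL> @?/rdbms/admin/spcreate.sql   -- answer the prompts for the PERFSTAT password and tablespaces
SQL> -- to remove the whole environment again:
SQL> @?/rdbms/admin/spdrop.sql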

Scripts for Statspack Reporting and Automation

The following scripts must be run as PERFSTAT (a sample session follows this list):
* SPREPORT.SQL: Generates a Statspack report
* SPREPSQL.SQL: Generates a Statspack SQL report for a specified SQL hash value
* SPREPINS.SQL: Generates a Statspack report for the specified database and instance
* SPAUTO.SQL: Automates Statspack statistics collection (using DBMS_JOB)
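
A minimal reporting session takes two snapshots and then reports on the interval between them; spreport.sql prompts for the begin and end snapshot IDs and a report file name (the password below is only a placeholder):

sqlplus perfstat/perfstat_password
SQL> execute statspack.snap;            -- first snapshot
SQL> -- let the workload run, then take a second snapshot
SQL> execute statspack.snap;
SQL> @?/rdbms/admin/spreport.sql        -- prompts for begin/end snap id and report name
SQL> -- schedule an hourly snapshot job with DBMS_JOB:
SQL> @?/rdbms/admin/spauto.sql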

Scripts for Statspack Performance Data Maintenance

The following scripts must be run as PERFSTAT (a sample purge session follows this list):
* SPPURGE.SQL: Purges a limited range of Snapshot IDs for a given database instance
* SPTRUNC.SQL: Truncates all performance data in the Statspack tables
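
For example, purging an old range of snapshots interactively; sppurge.sql prompts for the low and high snapshot IDs to delete, while sptrunc.sql asks for confirmation and then empties every STATS$ table (the password is again a placeholder):

sqlplus perfstat/perfstat_password
SQL> @?/rdbms/admin/sppurge.sql    -- prompts for the low and high snap ids to purge
SQL> @?/rdbms/admin/sptrunc.sql    -- removes ALL Statspack performance data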

Locate Hard-Hitting SQL from the Statspack Repository

Posted by Sagar Patil

1. Log in to the database as the PERFSTAT user. The queries below will not work unless you are connected as PERFSTAT.

2. Find the DBID using "select distinct dbid from stats$sql_summary".

3. Locate MIN(SNAP_ID) for pBgnSnap and MAX(SNAP_ID) for pEndSnap:

SELECT MIN(snap_id), MAX(snap_id), MIN(snap_time), MAX(snap_time)
  FROM stats$snapshot
 WHERE TO_NUMBER(TO_CHAR(snap_time, 'HH24')) > 10
   AND TO_NUMBER(TO_CHAR(snap_time, 'HH24')) < 13
   AND TRUNC(snap_time) = TRUNC(SYSDATE);

Show all SQL statements ordered by logical reads

SELECT e.hash_value "E.HASH_VALUE",
       e.module "Module",
       e.buffer_gets - NVL(b.buffer_gets, 0) "Buffer Gets",
       e.executions - NVL(b.executions, 0) "Executions",
       ROUND(DECODE(e.executions - NVL(b.executions, 0),
                    0, TO_NUMBER(NULL),
                    (e.buffer_gets - NVL(b.buffer_gets, 0))
                    / (e.executions - NVL(b.executions, 0))), 3) "Gets / Execution",
       ROUND(100 * (e.buffer_gets - NVL(b.buffer_gets, 0))
             / sp920.getGets(:pDbId, :pInstNum, :pBgnSnap, :pEndSnap, 'NO'), 3) "Percent of Total",
       ROUND((e.cpu_time - NVL(b.cpu_time, 0)) / 1000000, 3) "CPU (s)",
       ROUND((e.elapsed_time - NVL(b.elapsed_time, 0)) / 1000000, 3) "Elapsed (s)",
       ROUND(e.fetches - NVL(b.fetches, 0)) "Fetches",
       sp920.getSQLText(e.hash_value, 400) "SQL Statement"
  FROM stats$sql_summary e,
       stats$sql_summary b
 WHERE b.snap_id(+) = :pBgnSnap
   AND b.dbid(+) = e.dbid
   AND b.instance_number(+) = e.instance_number
   AND b.hash_value(+) = e.hash_value
   AND b.address(+) = e.address
   AND b.text_subset(+) = e.text_subset
   AND e.snap_id = :pEndSnap
   AND e.dbid = :pDbId
   AND e.instance_number = :pInstNum
 ORDER BY 3 DESC;

Show SQL statements where the SQL text is LIKE a given pattern

SELECT e.hash_value "E.HASH_VALUE",
       e.module "Module",
       e.buffer_gets - NVL(b.buffer_gets, 0) "Buffer Gets",
       e.executions - NVL(b.executions, 0) "Executions",
       ROUND(DECODE(e.executions - NVL(b.executions, 0),
                    0, TO_NUMBER(NULL),
                    (e.buffer_gets - NVL(b.buffer_gets, 0))
                    / (e.executions - NVL(b.executions, 0))), 3) "Gets / Execution",
       ROUND(100 * (e.buffer_gets - NVL(b.buffer_gets, 0))
             / sp920.getGets(:pDbId, :pInstNum, :pBgnSnap, :pEndSnap, 'NO'), 3) "Percent of Total",
       ROUND((e.cpu_time - NVL(b.cpu_time, 0)) / 1000000, 3) "CPU (s)",
       ROUND((e.elapsed_time - NVL(b.elapsed_time, 0)) / 1000000, 3) "Elapsed (s)",
       ROUND(e.fetches - NVL(b.fetches, 0)) "Fetches",
       sp920.getSQLText(e.hash_value, 400) "SQL Statement"
  FROM stats$sql_summary e,
       stats$sql_summary b
 WHERE b.snap_id(+) = :pBgnSnap
   AND b.dbid(+) = e.dbid
   AND b.instance_number(+) = e.instance_number
   AND b.hash_value(+) = e.hash_value
   AND b.address(+) = e.address
   AND b.text_subset(+) = e.text_subset
   AND e.snap_id = :pEndSnap
   AND e.dbid = 2863128100
   AND e.instance_number = :pInstNum
   AND sp920.getSQLText(e.hash_value, 400) LIKE '%ZPV_DATA%'
 ORDER BY 3 DESC;
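
Both queries expect the DB ID, instance number, and snapshot boundaries as bind variables. In SQL*Plus you could define them first, for example as below; the numeric values are only placeholders, taken from steps 2 and 3 above:

VARIABLE pDbId    NUMBER
VARIABLE pInstNum NUMBER
VARIABLE pBgnSnap NUMBER
VARIABLE pEndSnap NUMBER

BEGIN
  :pDbId    := 2863128100;   -- DBID found in step 2
  :pInstNum := 1;            -- instance_number
  :pBgnSnap := 100;          -- MIN(snap_id) for the window
  :pEndSnap := 110;          -- MAX(snap_id) for the window
END;
/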

Locate Server Workload from Statspack for Past Days

Posted by Sagar Patil

Change a.statistic# to the number of the statistic you are interested in (see the list below).

Stats for Working Hours

SELECT TO_CHAR(TRUNC(b.snap_time), 'DD-MM-YYYY'), statistic#, name, SUM(value)
  FROM stats$sysstat a, stats$snapshot b
 WHERE a.snap_id = b.snap_id
   AND TRUNC(b.snap_time) > TRUNC(SYSDATE - 30)
   AND TO_CHAR(b.snap_time, 'HH24') > 8
   AND TO_CHAR(b.snap_time, 'HH24') < 18
   AND a.statistic# = 54
 GROUP BY TRUNC(b.snap_time), statistic#, name
 ORDER BY TRUNC(b.snap_time);

Locate the kind of statistics you want to pull from Statspack:

(select * from stats$sysstat where name like '%XXX%')

9 session logical reads

Physical reads / writes:
54 physical reads
56 physical reads direct
58 physical read bytes
39 physical read total bytes
42 physical write total bytes
66 physical write bytes
66 physical writes

CPU related:
355 OS Wait-cpu (latency) time
328 parse time cpu
8 recursive cpu usage

Rollback related:
176 transaction tables consistent read rollbacks
180 rollbacks only - consistent read gets
181 cleanouts and rollbacks - consistent read gets
187 transaction rollbacks
5 user rollbacks
239 IMU CR rollbacks
186 rollback changes - undo records applied

A sample report built with this SQL is in the post Stats_Report.

Stats for Entire Day

SELECT TO_CHAR(TRUNC(b.snap_time), 'DD-MM-YYYY'), statistic#, name, SUM(value)
  FROM stats$sysstat a, stats$snapshot b
 WHERE a.snap_id = b.snap_id
   AND TRUNC(b.snap_time) > TRUNC(SYSDATE - 30)
   AND a.statistic# = 54
 GROUP BY TRUNC(b.snap_time), statistic#, name
 ORDER BY TRUNC(b.snap_time);

What is causing Redo / Writes / Log Creation? - Statspack 8i/9i

Posted by Sagar Patil

Physical WRITES from the time Instance Started


How to Install Statspack (8i/9i/10g)

Posted by Sagar Patil


Locate Server Workload for a Defined Timeframe

Posted by Sagar Patil

Retrieve Stats for a Day from Statspack tables

select to_char(trunc(b.snap_time), 'DD-MM-YYYY'),
       statistic#,
       name,
       sum(value)
  from stats$sysstat a,
       stats$snapshot b
 where a.snap_id = b.snap_id
   and trunc(b.snap_time) > trunc(sysdate - 30)
   and a.statistic# = 54          -- replace 54 with the statistic number from the list below
 group by trunc(b.snap_time), statistic#, name
 order by trunc(b.snap_time);

Pick the statistic number you need by searching STATS$SYSSTAT by name:

select *
  from stats$sysstat
 where name like '%XXX%';

9 session logical reads

Physical reads / writes:
54 physical reads
56 physical reads direct
58 physical read bytes
39 physical read total bytes
42 physical write total bytes
66 physical write bytes
66 physical writes

CPU related:
355 OS Wait-cpu (latency) time
328 parse time cpu
8 recursive cpu usage

Rollback related:
176 transaction tables consistent read rollbacks
180 rollbacks only - consistent read gets
181 cleanouts and rollbacks - consistent read gets
187 transaction rollbacks
5 user rollbacks
239 IMU CR rollbacks
186 rollback changes - undo records applied

How to find current Statspack level?

Posted by Sagar Patil

1. Look at the table PERFSTAT.STATS$SNAPSHOT; its SNAP_LEVEL column records the level of each snapshot.
2. Run spreport.sql; the snapshot level is displayed alongside the snap IDs it lists. A query against STATS$STATSPACK_PARAMETER (below) shows the instance default.
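
The instance-wide default level is stored in STATS$STATSPACK_PARAMETER, so a quick check is:

SQL> select snap_level from perfstat.stats$statspack_parameter;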

Change Level
execute statspack.snap (i_snap_level=>7, i_modify_parameter=>'true');

Levels >= 0 General Performance Statistics
Any level greater than 0 collects general performance statistics, such as wait statistics, system events, system statistics, rollback segment data, row cache, SGA, background events, session events, lock statistics, buffer pool statistics, and parent latch statistics.

Levels >= 5 Additional Data: SQL Statements
This level includes all statistics gathered in the lower level(s), as well as performance data on SQL statements with high resource usage. In a level 5 snapshot (or higher), the time required for the snapshot to complete depends on the SHARED_POOL_SIZE and on the number of SQL statements in the shared pool at the time of the snapshot. The larger the shared pool, the longer it takes to complete the snapshot.

Levels >= 6 Additional Data: SQL Plans and SQL Plan Usage
This level includes all statistics gathered in the lower levels, as well as SQL plans and plan usage data for each of the high-resource SQL statements captured.
A level 6 snapshot gathers valuable information for determining whether the execution plan used for a SQL statement has changed. Therefore, level 6 snapshots should be used whenever a plan might have changed.

Levels >= 10 Additional Statistics: Parent and Child Latches
This level includes all statistics gathered in the lower levels, as well as parent and child latch information. Sometimes data gathered at this level can cause the snapshot to take longer to complete. This level can be resource-intensive, and it should only be used when advised by Oracle personnel.

Statspack threshold parameters:
* Number of executions of the SQL statement (default 100)
* Number of disk reads performed by the SQL statement (default 1,000)
* Number of parse calls performed by the SQL statement (default 1,000)
* Number of buffer gets performed by the SQL statement (default 10,000)
* Size of sharable memory used by the SQL statement (default 1 Mb)
* Version count for the SQL statement (default 20)

The SQL threshold levels used are either those stored in the table STATS$STATSPACK_PARAMETER or by the thresholds specified when the snapshot is taken.
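
To change the stored defaults rather than a single snapshot, Statspack also provides the MODIFY_STATSPACK_PARAMETER procedure. A sketch is shown below; the i_*_th parameter names follow the defaults shipped in spcpkg.sql, so verify them against your version before relying on them:

BEGIN
  statspack.modify_statspack_parameter(
    i_snap_level     => 6,        -- default snapshot level
    i_executions_th  => 100,      -- executions threshold
    i_disk_reads_th  => 1000,     -- disk reads threshold
    i_parse_calls_th => 1000,     -- parse calls threshold
    i_buffer_gets_th => 10000);   -- buffer gets threshold
END;
/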

How to define the Statspack level?
SQL>  EXECUTE STATSPACK.SNAP(i_snap_level=>6);

Gather session statistics and wait events for a particular session
SQL>  EXECUTE STATSPACK.SNAP(i_session_id=>3);

Set the new value as the instance's default
SQL>  EXECUTE STATSPACK.SNAP(i_snap_level=>10, i_modify_parameter=>'true');

A database is creating more than 100 logs every single hour. What is happening?

Posted by Sagar Patil

Solution: I tried looking into the Statspack report, but it does not flag the insert/update/delete activity, and Statspack does not record some of the system activity that contributes to redo generation. The only option was to use LogMiner.

1. Start TOAD and click on DBA -> LogMiner.

2. Select an FTP directory, or local files, to mine.

3. Enter the FTP details of the server.

4. Check the file timestamps and select the required files.

5. Here I selected the file nmst_0000000700.arc.

6. Click on "Options" and select the options to display in the TOAD window.

7. Now click on the green arrow and LogMiner will start reading the log files.

8. Be patient; after a while a report will appear. (A command-line alternative using DBMS_LOGMNR is sketched after these steps.)
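
If you prefer not to use TOAD, the same investigation can be done directly with the DBMS_LOGMNR package. A rough sketch for a single archived log is below; the file path is only an example built from the file name in step 5, and DICT_FROM_ONLINE_CATALOG assumes the source database is open:

BEGIN
  -- register the archived log and start LogMiner, using the online catalog as the dictionary
  dbms_logmnr.add_logfile(logfilename => '/archive/nmst_0000000700.arc',
                          options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
END;
/

-- which objects and operations are generating the redo?
SELECT seg_owner, seg_name, operation, COUNT(*) redo_entries
  FROM v$logmnr_contents
 GROUP BY seg_owner, seg_name, operation
 ORDER BY COUNT(*) DESC;

EXECUTE dbms_logmnr.end_logmnr;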
