Database hanging due to TX-Transaction (row lock contention)

Posted by Sagar Patil

My Statspack has following details.

Enqueue activity DB/Inst: Snaps: 1791-1808
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

The listing (the same enqueue data is reproduced in the second post below) indicates Oracle was waiting almost 5 seconds on average to acquire a lock. Let's look at row contention and enqueues more closely.

While searching on Google I stumbled across an exceptional article, "Diagnose performance problems, using the wait interface in Oracle 10g":
John, the DBA at Acme Bank, is on the phone with an irate user, Bill, who complains that his database session is hanging, a complaint not unfamiliar to most DBAs. What can John do to address Bill’s complaint?
Acme Bank’s database is Oracle Database 10g, so John has many options. Automatic Database Diagnostic Manager (ADDM), new in Oracle Database 10g, can tell John about the current overall status and performance of the database, so John starts with ADDM to determine whether what Bill’s session is experiencing is the result of a databasewide issue. The ADDM report identifies no databasewide issues that could have this impact on Bill’s session, so John moves on to the next option.
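(As a side note, the ADDM report John starts with can be pulled straight from SQL*Plus; a minimal sketch, using the standard script shipped with 10g:)

-- Generate an ADDM report; the script prompts for the begin/end snapshot IDs
-- and a report file name.
SQL> @?/rdbms/admin/addmrpt.sql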

One way to diagnose session-level events such as Bill’s is to determine whether the session is waiting for anything, such as the reading of a block of a file, a lock on a table row, or a latch. Oracle has provided mechanisms to display the waits happening inside the database since Oracle7, and during the last several years, the model has been steadily perfected, with more and more diagnostic information added to it. In Oracle Database 10g, which makes significantly improved wait event information available, diagnosing a session slowdown has become even easier. This article shows you how to use the wait events in Oracle Database 10g to identify bottlenecks.

Session Waits : How can John the DBA determine what’s causing Bill’s session to hang? Actually, the session is not hanging; it’s waiting for an event to happen, and that’s exactly what John checks for.

To continue his investigation, John could use Oracle Enterprise Manager or he could directly access V$ views from the command line. John has a set of scripts he uses to diagnose these types of problems, so he uses the command line.

John queries the V$SESSION view to see what Bill’s session is waiting for. (Note that John filters out all idle events.)

select sid, username, event, blocking_session,
       seconds_in_wait, wait_time
from   v$session
where  state in ('WAITING')
and    wait_class != 'Idle';

The output follows, in vertical format.
SID              : 270
USERNAME         : BILL
EVENT            : enq: TX - row lock contention
BLOCKING_SESSION : 254
SECONDS_IN_WAIT  : 83
WAIT_TIME        : 0

Looking at this information, John immediately concludes that Bill’s session with SID 270 is waiting for a lock on a table and that that lock is held by session 254 (BLOCKING_SESSION).

But John wants to know which SQL statement is causing this lock. He can find out easily, by issuing the following query joining the V$SESSION and V$SQL views:

select sid, sql_text
from v$session s, v$sql q
where sid in (254,270)
and (
q.sql_id = s.sql_id or
q.sql_id = s.prev_sql_id);

Listing 1 shows the result of the query. And there (in Listing 1) John sees it: both sessions are trying to update the same row. Unless session 254 commits or rolls back, session 270 will continue to wait for the lock. He explains this to Bill, who, considerably less irate now, decides that something in the application has gone awry and asks John to kill session 254 and release the locks.
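Killing the blocker needs both the SID and the SERIAL#; a minimal sketch of what John might run (the serial# value below is illustrative):

-- Look up the serial# of the blocking session first:
select sid, serial# from v$session where sid = 254;

-- Then kill it (assuming the query returned serial# 13545, a hypothetical value):
alter system kill session '254,13545';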

Wait Classes : After John kills the blocking session, Bill's session continues but is very slow. John decides to check for other problems in the session. Again, he checks for any other wait events, but this time he specifically checks Bill's session.

In Oracle Database 10g, wait events are divided into various wait classes, based on their type. The grouping of events lets you focus on specific classes and exclude nonessential ones such as idle events. John issues the following against the V$SESSION_WAIT_CLASS view:

select wait_class_id, wait_class,
total_waits, time_waited
from v$session_wait_class
where sid = 270;

The output, shown in Listing 2, shows the wait classes and how many times the session has waited for events in each class. It tells John that application-related waits such as those due to row locks have occurred 17,760 times, for a total of 281,654 centiseconds (cs), or hundredths of a second, since the instance started. John thinks that this TIME_WAITED value is high for this session. He decides to explore the cause of these waits in the application wait class. The times for individual waits are available in the V$SYSTEM_EVENT view. He issues the following query to identify individual waits in the application wait class (class id 4217450380):

select event, total_waits, time_waited
from v$system_event e, v$event_name n
where n.event_id = e.event_id
and wait_class_id = 4217450380;

Listing 3 shows the output of this query. It shows that lock contentions (indicated by the event enq: TX – row lock contention) constitute the major part of the waiting time in the application wait class. This concerns John. Is it possible that a badly written application made its way through to the production database, causing these lock contention problems?

Being the experienced DBA that he is, however, John does not immediately draw that conclusion. The data in Listing 3 merely indicates that the users have experienced lock-contention-related waits a total of 2,275 times, for 280,856 cs. It is possible that mostly 1- or 2-cs waits and only one large wait account for the total wait time, and in that case, the application isn’t faulty. A single large wait may be some freak occurrence skewing the data and not representative of the workload on the system. How can John determine whether a single wait is skewing the data?

Oracle 10g provides a new view, V$EVENT_HISTOGRAM, that shows the wait time periods and how often sessions have waited for a specific time period. He issues the following against V$EVENT_HISTOGRAM:

select wait_time_milli bucket, wait_count
from   v$event_histogram
where  event = 'enq: TX - row lock contention';
The output looks like this:

    BUCKET WAIT_COUNT
---------- ----------
         1        252
         2          0
         4          0
         8          0
        16          1
        32          0
        64          4
       128         52
       256        706
       512        392
      1024         18
      2048          7
      4096        843

The V$EVENT_HISTOGRAM view shows the buckets of wait times and how many times the sessions waited for a particular event (in this case, row lock contention) for that duration. For example, sessions waited 252 times for less than 1 millisecond (ms), once for less than 16 ms but more than 8 ms, and so on. The sum of the values of the WAIT_COUNT column is 2,275, the same as the total shown for the event enq: TX - row lock contention in Listing 3. The V$EVENT_HISTOGRAM view shows that the most waits occurred in the ranges of 256 ms, 512 ms, and 4,096 ms, which is sufficient evidence that the applications are experiencing locking issues and that this locking is the cause of the slowness in Bill's session. Had the view showed numerous waits in the 1-ms range, John wouldn't have been as concerned, because the waits would have seemed normal.
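As a quick cross-check (my addition, not from the original article), the bucket counts can be totalled directly and compared against Listing 3:

-- Total the histogram buckets for the event; should return 2,275 here.
select sum(wait_count) total_waits
from   v$event_histogram
where  event = 'enq: TX - row lock contention';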

Time Models : Just after John explains his preliminary findings to Bill, Lora walks in with a similar complaint: Her session SID 355 is very slow. Once again, John looks for the events the session is waiting for, by issuing the following query against the V$SESSION_WAIT view:

select event, seconds_in_wait,
wait_time
from v$session_wait
where sid = 355;

The output, shown in Listing 4, shows a variety of wait events in Lora’s session, including latch contention, which may be indicative of an application design problem. But before he sends Lora off with a prescription for an application change, John must support his theory that bad application design is the cause of the poor performance in Lora’s session. To test this theory, he decides to determine whether the resource utilization of Lora’s session is extraordinarily high and whether it slows not only itself but other sessions too.

In the Time Model interface of Oracle Database 10g, John can easily view details of time spent by a session in various activities. He issues the following against the V$SESS_TIME_MODEL view:

select stat_name, value
from v$sess_time_model
where sid = 355;

The output, shown in Listing 5, displays the time (in microseconds) spent by the session in various places. From this output, John sees that the session spent 503,996,336 microseconds parsing (parse time elapsed), out of a total of 878,088,366 microseconds on all SQL execution (sql execute elapsed time), or 57 percent of the SQL execution time, which indicates that a cause of this slowness is high parsing. John gives Lora this information, and she follows up with the application design team.

OS Statistics : While going over users' performance problems, John also wants to rule out the possibility of the host system's being a bottleneck. Before Oracle 10g, he could use operating system (OS) utilities such as sar and vmstat and extrapolate the metrics to determine contention. In Oracle 10g, the metrics at the OS level are collected automatically in the database. To see potential host contention, John issues the following query against the V$OSSTAT view:

select * from v$osstat;

The output in Listing 6 shows the various elements of the OS-level metrics collected. All time elements are in cs. From the results in Listing 6, John sees that the single CPU of the system has been idle for 51,025,805 cs (IDLE_TICKS) and busy for 2,389,857 cs (BUSY_TICKS), indicating a CPU that is about 4 percent busy. From this he concludes that the CPU is not a bottleneck on this host. Note that if the host system had more than one CPU, the columns whose headings had the prefix AVG_, such as AVG_IDLE_TICKS, would show the average of these metrics over all the CPUs.
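The busy percentage John quotes can be computed directly from the view; a sketch (the stat names are as they appear in the Listing 6 output; 10gR2 renamed them BUSY_TIME/IDLE_TIME):

-- CPU busy percentage since instance startup (single-CPU host).
select round(100 * busy.value / (busy.value + idle.value), 1) pct_busy
from   (select value from v$osstat where stat_name = 'BUSY_TICKS') busy,
       (select value from v$osstat where stat_name = 'IDLE_TICKS') idle;
-- 2,389,857 / (2,389,857 + 51,025,805) = roughly 4.5% busy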

Active Session History: So far the users have consulted John exactly when each problem occurred, enabling him to peek into the performance views in real time. This good fortune doesn't last long: Janice comes to John complaining about a recent performance problem. When John queries the V$SESSION view, the session is idle, with no events being waited for. How can John check which events Janice's session was waiting for when the problem occurred?

Oracle 10g collects the information on active sessions in a memory buffer every second. This buffer, called Active Session History (ASH), which can be viewed in the V$ACTIVE_SESSION_HISTORY dynamic performance view, holds data for about 30 minutes before being overwritten with new data in a circular fashion. John gets the SID and SERIAL# of Janice’s session and issues this query against the V$ACTIVE_SESSION_HISTORY view to find out the wait events for which this session waited in the past.

select sample_time, event, wait_time
from v$active_session_history
where session_id = 271
and session_serial# = 5;

The output, excerpted in Listing 7, shows several important pieces of information. First it shows SAMPLE_TIME, the time stamp showing when the statistics were collected, which lets John tie the occurrence of the performance problems to the wait events. Using the data in the V$ACTIVE_SESSION_HISTORY view, John sees that at around 3:17 p.m., the session waited several times for the log buffer space event, indicating that there was some problem with redo log buffers. To further aid the diagnosis, John identifies the exact SQL statement executed by the session at that time, using the following query of the V$SQL view:

select sql_text, application_wait_time
from v$sql
where sql_id in (
select sql_id
from v$active_session_history
where sample_time =
'22-FEB-04 03.17.31.188 PM'
and session_id = 271
and session_serial# = 5
);

The output is shown in Listing 8.
The column APPLICATION_WAIT_TIME shows how long the sessions executing that SQL waited for the application wait class. In addition to the SQL_ID, the V$ACTIVE_SESSION_HISTORY view also lets John see specific rows being waited for (in case of lock contentions), client identifiers, and much more.

What if a user comes to John a little late, after the data is overwritten in this view? When purged from this dynamic performance view, the data is flushed to the Automatic Workload Repository (AWR), a disk-based repository. The purged ASH data can be seen in the DBA_HIST_ACTIVE_SESS_HISTORY view, enabling John to see the wait events of a past session. The data in the AWR is purged by default after seven days.
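A sketch of running the same wait-event query against the AWR copy of ASH (column names per the 10g data dictionary):

select sample_time, event, wait_time
from   dba_hist_active_sess_history
where  session_id = 271
and    session_serial# = 5
order  by sample_time;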

Conclusion : Oracle Database 10g introduces a number of enhancements designed to automate and simplify the performance diagnostic process. Wait event information is more elaborate in Oracle Database 10g and provides deeper insight into the cause of problems, making the diagnosis of performance problems a breeze in most cases, especially in proactive performance tuning.

Identifying Blocking Locks/Contention & Resolving a Database Hanging Issue

Posted by Sagar Patil

One of my 10.2.0.1 databases was hanging despite having ample resources. CPU, memory, and disk access figures were normal, but under heavy load the server would freeze for no apparent reason. I had to raise a TAR with Oracle.

My Statspack report had following details

-----Enqueue activity DB/Inst:
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
    Requests    Succ Gets  Failed Gets       Waits  Wt Time (s) Av Wt Time(ms)
------------ ------------ ------------ ----------- ------------ --------------
TX-Transaction (row lock contention)
       6,611        6,615            0       6,615       32,521       4,916.24

The listing above indicates Oracle was waiting almost 5 seconds on average (4,916 ms) to get a lock. An enqueue is an SGA memory structure used by Oracle for locking.
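The same TX enqueue figures can also be pulled live from V$ENQUEUE_STAT rather than waiting for a Statspack snapshot; a minimal sketch:

-- Cumulative TX enqueue requests/waits since instance startup.
select eq_type, total_req#, total_wait#, succ_req#, failed_req#, cum_wait_time
from   v$enqueue_stat
where  eq_type = 'TX';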

Oracle support suggested locating the hot SQL and the hot block (table/index/cluster) to resolve this issue.

There are some articles on Google, but this PDF, "Resolving_Oracle_Latch_Contention.pdf" from Guy Harrison at Quest, is particularly interesting:

Avoiding Hot Blocks
Cache buffers chains latch contention is one of the most intractable types of latch contention. There are a couple of things you can do at the application level to reduce the severity of this type of contention.
Firstly, identify the blocks that are hot. Metalink note 163424.1, "How to Identify a Hot Block Within The Database", describes how to do this. Having identified the hot block, you will most likely find that it is an index root or branch block.

Oracle support suggested looking at locks and providing them with historical information. I couldn't see any blocking locks in TOAD or deadlocks in the alert log, so I was sure locks were not the issue, but I had to do what Oracle support suggested. So here is what I did.

Unix shell script for process stack diagnosis:
#!/usr/bin/ksh
# Create Oracle Trace
# Script 3 : Get dump of PMON, SMON & LGWR processes
# PMON, SMON & LGWR stack files are created in the same directory
# Please use ./option3.sh <SID> for results
#################################################################################
export PID_PMON=$(ps -ef | grep ora_pmon_$1 | sed -e '/pts/d' | awk '{print $2}')
export PID_SMON=$(ps -ef | grep ora_smon_$1 | sed -e '/pts/d' | awk '{print $2}')
export PID_LGWR=$(ps -ef | grep ora_lgwr_$1 | sed -e '/pts/d' | awk '{print $2}')
echo $PID_PMON
echo $PID_SMON
echo $PID_LGWR
/usr/bin/pstack $PID_PMON > $1_PMON.log
/usr/bin/pstack $PID_SMON > $1_SMON.log
/usr/bin/pstack $PID_LGWR > $1_LGWR.log

# SQL script to carry out a system state dump
# Please execute this file from a background sqlplus session on the live system
# Once done please upload trace files created at $ORACLE_HOME/UDUMP to Oracle support
set term off;
set scan off;
set define off;
alter session set max_dump_file_size=unlimited;
alter session set events 'immediate trace name systemstate level 266';
exec sys.dbms_lock.sleep(60);
alter session set max_dump_file_size=unlimited;
alter session set events 'immediate trace name systemstate level 266';
exec sys.dbms_lock.sleep(60);
alter session set max_dump_file_size=unlimited;
alter session set events 'immediate trace name systemstate level 266';
exec sys.dbms_lock.sleep(60);
alter session set max_dump_file_size=unlimited;
alter session set events 'immediate trace name systemstate level 266';
set term on;
set scan on;
set define on;

I have also created a small PL/SQL procedure to locate blocking locks on the system. It writes a text spool file with the relevant details; monitor these spool files over a period of time to see which object is hot.

CREATE OR REPLACE procedure locate_blocking_sid as
t_sid number;
t_sql_text varchar2(200);
msg varchar2(2000);
maxlinesize NUMBER := 32767;
log_file_handle UTL_FILE.file_type;
cursor c1 is
select sid, blocking_session, event, seconds_in_wait, wait_time
from sys.v_$session where state in ('WAITING')
and wait_class != 'Idle';
c1_rec c1%ROWTYPE;

cursor c2 (t_sid VARCHAR2, t_blocking_session varchar2) is
select ' '|| sql_text || sid
from sys.v_$session s, v$sql q
where sid in (t_sid, t_blocking_session)
and (
q.sql_id = s.sql_id or
q.sql_id = s.prev_sql_id);

BEGIN
log_file_handle := UTL_FILE.fopen('/oracle/app/oracle/admin/startp/udump','blocking_locks '||to_char(sysdate,'DD-MM-YY HH24:MI')||'.txt','w',maxlinesize);
-- log_file_handle := UTL_FILE.fopen('/oracle/app/oracle/admin/catp/udump','blocking_locks','w');
UTL_FILE.PUT_LINE(log_file_handle, ' SQL Running & V$ Session ID ;'|| ' Event for Wait ;'||' Seconds in Wait ;' || ' Wait Time');
open c1;
loop
FETCH c1 INTO c1_rec;
EXIT WHEN c1%NOTFOUND;

BEGIN
open c2(c1_rec.sid, c1_rec.blocking_session);
loop
FETCH c2 into msg;
EXIT WHEN c2%NOTFOUND;  -- exit straight after the fetch so the last row is not written twice
msg := msg || ' ; '|| trim(c1_rec.event)|| ' ;'|| c1_rec.seconds_in_wait||' ; '||c1_rec.wait_time;
UTL_FILE.PUT_LINE(log_file_handle, msg);
end loop;
close c2;
END;
end loop;
close c1;
UTL_FILE.FCLOSE(log_file_handle);
end;
/

AWR : How to run Reports

Posted by Sagar Patil

The architecture of the AWR is quite simple. The MMON background process polls the x$ fixed tables in the SGA and stores snapshots of them in the AWR tables. From there, the performance data is instantly available for analysis. Enterprise Manager can be used for graphical data display, the Automatic Database Diagnostic Monitor (ADDM) can be used for automated tuning analysis, or SQL*Plus can be used if customized tuning reports are desired.

The AWR interval can be changed using the API:

execute dbms_workload_repository.modify_snapshot_settings (
  interval  => 60,
  retention => 43200);

interval  : how often snapshots are taken, in minutes
retention : how long snapshots are kept, in minutes

The updated values are visible in DBA_HIST_WR_CONTROL:
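-- SNAP_INTERVAL and RETENTION are reported as INTERVAL values.
select snap_interval, retention from dba_hist_wr_control;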

AWR Reports

AWR reports are very similar to STATSPACK reports. To generate one:

SQL> @?/rdbms/admin/awrrpt.sql

A nice feature is that an HTML version of the report can be generated; just follow the prompts. The SELECT ANY DICTIONARY privilege is required to run a report.

Managing Snapshots
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT
DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT
DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE

DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS
retention => 0
interval => 0
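Sample calls (a sketch; the snapshot IDs are illustrative):

-- Take a manual snapshot:
exec dbms_workload_repository.create_snapshot;

-- Drop a range of snapshots (IDs hypothetical):
exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 100, high_snap_id => 110);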

AWR Views

DBA_HIST_SNAPSHOT # show all snapshots
DBA_HIST_WR_CONTROL # show AWR settings

Display execution plan in SQLPLUS – plan_table script

Posted by Sagar Patil

I was trying to execute the following SQL through client-side tools and found network delays were adding to the SQL execution times. This SQL normally returns in less than 50 ms but was now taking more than 700 ms. I am used to the execution plans generated by the TOAD/TORA GUIs and was looking for an easy way to get execution times as well as plans in SQL*Plus. Here is how you do it.

SQLPLUS> set autotrace on;   -- displays a formatted explain plan

SQLPLUS> set timing on;
-- Oracle will automatically display the elapsed wall-clock time for each SQL command you run subsequently.
-- The format is HH:MM:SS.hundredths of a second; for example, 00:00:00.81 is 0.81 seconds.

If you get an error like "PLAN_TABLE not found", run $ORACLE_HOME/rdbms/admin/utlxplan.sql to create PLAN_TABLE and grant access to the relevant user, as sketched below.
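A minimal sketch (the grantee name is illustrative):

SQL> @?/rdbms/admin/utlxplan.sql
SQL> grant select, insert, update, delete on plan_table to scott;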

SQLPLUS> set autotrace on;
SQLPLUS> set timing on;
SQL> select count(*) from voucher;

  COUNT(*)
----------
 144189820

Elapsed: 00:01:55.05   (the format is HH:MM:SS.hundredths of a second)

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=3 Card=1)
1    0   SORT (AGGREGATE)
2    1     INDEX (FAST FULL SCAN) OF 'VOUCHER_P0001001' (UNIQUE) (Cost=3 Card=126665996)

Statistics
----------------------------------------------------------
0  recursive calls
0  db block gets
622158  consistent gets
621830  physical reads
0  redo size
522  bytes sent via SQL*Net to client
651  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
1  rows processed

PLAN_TABLE is the default table for results of the EXPLAIN PLAN statement. It is created by $ORACLE_HOME/rdbms/admin/utlxplan.sql, and it contains one row for each step in the execution plan.

For 10G:

create table PLAN_TABLE (
statement_id varchar2(30),
plan_id number,
timestamp date,
remarks varchar2(4000),
operation varchar2(30),
options varchar2(255),
object_node varchar2(128),
object_owner varchar2(30),
object_name varchar2(30),
object_alias varchar2(65),
object_instance numeric,
object_type varchar2(30),
optimizer varchar2(255),
search_columns number,
id numeric,
parent_id numeric,
depth numeric,
position numeric,
cost numeric,
cardinality numeric,
bytes numeric,
other_tag varchar2(255),
partition_start varchar2(255),
partition_stop varchar2(255),
partition_id numeric,
other long,
distribution varchar2(30),
cpu_cost numeric,
io_cost numeric,
temp_space numeric,
access_predicates varchar2(4000),
filter_predicates varchar2(4000),
projection varchar2(4000),
time numeric,
qblock_name varchar2(30)
);

Column Datatype Description
STATEMENT_ID VARCHAR2(30) Optional statement identifier specified in the EXPLAIN PLAN statement
TIMESTAMP DATE Date and time when the EXPLAIN PLAN statement was issued
REMARKS VARCHAR2(80) Place for comments that can be added to the steps of the execution plan
OPERATION VARCHAR2(30) Name of the operation performed at this step
OPTIONS VARCHAR2(255) Options used for the operation performed at this step; the string returned by the index cost function
OBJECT_NODE VARCHAR2(128) Name of the database link used to reference the object
OBJECT_OWNER VARCHAR2(30) Owner of the object
OBJECT_NAME VARCHAR2(30) Name of the object
OBJECT_INSTANCE NUMBER(38) Numbered position of the object name in the original SQL statement
OBJECT_TYPE VARCHAR2(30) Descriptive modifier that further describes the type of object
OPTIMIZER VARCHAR2(255) Current mode of the optimizer
SEARCH_COLUMNS NUMBER Number of index columns with start and stop keys (that is, the number of columns with matching predicates)
ID NUMBER(38) Identification number for this step in the execution plan
PARENT_ID NUMBER(38) ID of the next step that operates on the results of this step
POSITION NUMBER(38) Order of processing for steps with the same parent ID. For cost-based optimization, the value in the first row of the plan is the statement’s execution cost. For rule-based optimization, the value is null in the first row.
COST NUMBER(38) Cost of the current operation estimated by the cost-based optimizer (CBO)
CARDINALITY NUMBER(38) Number of rows returned by the current operation (estimated by the CBO)
BYTES NUMBER(38) Number of bytes returned by the current operation
OTHER_TAG VARCHAR2(255) Describes the function of the SQL text in the OTHER column. Values for OTHER_TAG are: 

· SERIAL – SQL is the text of a locally-executed, serial query plan. Currently, SQL is not loaded in OTHER for this case.

· SERIAL_FROM_REMOTE – SQL text shown in the OTHER column will be executed at a remote site

· PARALLEL_COMBINED_WITH_PARENT – Parent of this operation is a DFO that performs both operations in the parallel execution plan

· PARALLEL_COMBINED_WITH_CHILD – Child of this operation is a DFO that performs both operations in the parallel execution plan.

· PARALLEL_TO_SERIAL – SQL text shown in the OTHER column is the top-level of the parallel plan.

· PARALLEL_TO_PARALLEL – SQL text shown in the OTHER column is executed and output in parallel

· PARALLEL_FROM_SERIAL – Operation consumes data from a serial operation and outputs it in parallel

PARTITION_START VARCHAR2(255) Start partition of a range of accessed partitions
PARTITION_STOP VARCHAR2(255) Stop partition of a range of accessed partitions
PARTITION_ID NUMBER(38) Step that has computed the pair of values of the PARTITION_START and PARTITION_STOP columns
OTHER LONG Information about parallel execution slaves and parallel queries
DISTRIBUTION VARCHAR2(30) Distribution method
CPU_COST NUMBER(38) User-defined CPU cost
IO_COST NUMBER(38) User-defined I/O cost
TEMP_SPACE NUMBER(38) Temporary space usage of the operation (sort or hash-join) as estimated by the optimizer’s cost-based approach. For statements that use the rule-based approach, this column is null.
ACCESS_PREDICATES VARCHAR2(4000) Predicates used to locate rows in an access structure. For example, start or stop predicates for an index range scan.
FILTER_PREDICATES VARCHAR2(4000) Predicates used to filter rows before producing them
PROJECTION VARCHAR2(4000) Expressions produced by the operation
TIME NUMBER(38) Elapsed time (in seconds) of the operation as estimated by the optimizer’s cost-based approach. For statements that use the rule-based approach, this column is null.

Compare 9i Statspack & 10g AWR Views

Posted by Sagar Patil
Statspack AWR
STATS$DATABASE_INSTANCE Tables store historical data or snapshots
STATS$IDLE_EVENT
STATS$LEVEL_DESCRIPTION
STATS$ROLLSTAT
STATS$SESSION_EVENT
STATS$SNAPSHOT
STATS$SQL_PLAN_USAGE
STATS$SQL_SUMMARY WRH$_ACTIVE_SESSION_HISTORY WRI$_ADV_ACTIONS
STATS$STATSPACK_PARAMETER WRH$_ACTIVE_SESSION_HISTORY_BL WRI$_ADV_ASA_RECO_DATA
STATS$BG_EVENT_SUMMARY WRH$_BG_EVENT_SUMMARY WRI$_ADV_DEFINITIONS
WRH$_BUFFERED_QUEUES WRI$_ADV_DEF_PARAMETERS
WRH$_BUFFERED_SUBSCRIBERS WRI$_ADV_DIRECTIVES
STATS$BUFFER_POOL_STATISTICS WRH$_BUFFER_POOL_STATISTICS WRI$_ADV_FINDINGS
WRH$_COMP_IOSTAT WRI$_ADV_JOURNAL
WRH$_CR_BLOCK_SERVER WRI$_ADV_MESSAGE_GROUPS
WRH$_CURRENT_BLOCK_SERVER WRI$_ADV_OBJECTS
WRH$_DATAFILE WRI$_ADV_PARAMETERS
STATS$DB_CACHE_ADVICE WRH$_DB_CACHE_ADVICE WRI$_ADV_RATIONALE
WRH$_DB_CACHE_ADVICE_BL WRI$_ADV_RECOMMENDATIONS
STATS$DLM_MISC WRH$_DLM_MISC WRI$_ADV_REC_ACTIONS
WRH$_DLM_MISC_BL WRI$_ADV_SQLA_FAKE_REG
STATS$ENQUEUE_STAT WRH$_ENQUEUE_STAT WRI$_ADV_SQLA_MAP
WRH$_EVENT_NAME WRI$_ADV_SQLA_STMTS
WRH$_FILEMETRIC_HISTORY WRI$_ADV_SQLA_TMP
STATS$FILESTATXS WRH$_FILESTATXS WRI$_ADV_SQLT_BINDS
WRH$_FILESTATXS_BL WRI$_ADV_SQLT_PLANS
STATS$INSTANCE_RECOVERY WRH$_INSTANCE_RECOVERY WRI$_ADV_SQLT_RTN_PLAN
WRH$_INST_CACHE_TRANSFER WRI$_ADV_SQLT_STATISTICS
WRH$_INST_CACHE_TRANSFER_BL WRI$_ADV_SQLW_COLVOL
WRH$_JAVA_POOL_ADVICE WRI$_ADV_SQLW_STMTS
STATS$LATCH WRH$_LATCH WRI$_ADV_SQLW_SUM
WRH$_LATCH_BL WRI$_ADV_SQLW_TABLES
STATS$LATCH_CHILDREN WRH$_LATCH_CHILDREN WRI$_ADV_SQLW_TABVOL
WRH$_LATCH_CHILDREN_BL WRI$_ADV_TASKS
STATS$LATCH_MISSES_SUMMARY WRH$_LATCH_MISSES_SUMMARY WRI$_ADV_USAGE
WRH$_LATCH_MISSES_SUMMARY_BL WRI$_AGGREGATION_ENABLED
WRH$_LATCH_NAME WRI$_ALERT_HISTORY
STATS$LATCH_PARENT WRH$_LATCH_PARENT WRI$_ALERT_OUTSTANDING
WRH$_LATCH_PARENT_BL WRI$_ALERT_THRESHOLD
STATS$LIBRARYCACHE WRH$_LIBRARYCACHE WRI$_ALERT_THRESHOLD_LOG
WRH$_LOG WRI$_DBU_CPU_USAGE
WRH$_METRIC_NAME WRI$_DBU_CPU_USAGE_SAMPLE
WRH$_MTTR_TARGET_ADVICE WRI$_DBU_FEATURE_METADATA
WRH$_OPTIMIZER_ENV WRI$_DBU_FEATURE_USAGE
WRH$_OSSTAT WRI$_DBU_HIGH_WATER_MARK
WRH$_OSSTAT_BL WRI$_DBU_HWM_METADATA
WRH$_OSSTAT_NAME WRI$_DBU_USAGE_SAMPLE
STATS$PARAMETER WRH$_PARAMETER WRI$_OPTSTAT_AUX_HISTORY
WRH$_PARAMETER_BL WRI$_OPTSTAT_HISTGRM_HISTORY
WRH$_PARAMETER_NAME WRI$_OPTSTAT_HISTHEAD_HISTORY
STATS$PGASTAT WRH$_PGASTAT WRI$_OPTSTAT_IND_HISTORY
STATS$PGA_TARGET_ADVICE WRH$_PGA_TARGET_ADVICE WRI$_OPTSTAT_OPR
WRH$_PROCESS_MEMORY_SUMMARY WRI$_OPTSTAT_TAB_HISTORY
STATS$RESOURCE_LIMIT WRH$_RESOURCE_LIMIT WRI$_SCH_CONTROL
STATS$ROWCACHE_SUMMARY WRH$_ROWCACHE_SUMMARY WRI$_SCH_VOTES
WRH$_ROWCACHE_SUMMARY_BL WRI$_SEGADV_CNTRLTAB
WRH$_RULE_SET WRI$_SEGADV_OBJLIST
WRH$_SEG_STAT WRI$_SQLSET_BINDS
WRH$_SEG_STAT_BL WRI$_SQLSET_DEFINITIONS
STATS$SEG_STAT_OBJ WRH$_SEG_STAT_OBJ WRI$_SQLSET_MASK
WRH$_SERVICE_NAME WRI$_SQLSET_PLANS
WRH$_SERVICE_STAT WRI$_SQLSET_PLANS_TOCAP
WRH$_SERVICE_STAT_BL WRI$_SQLSET_PLAN_LINES
WRH$_SERVICE_WAIT_CLASS WRI$_SQLSET_REFERENCES
WRH$_SERVICE_WAIT_CLASS_BL WRI$_SQLSET_STATEMENTS
WRH$_SESSMETRIC_HISTORY WRI$_SQLSET_STATISTICS
STATS$SESSTAT WRH$_SESS_TIME_STATS WRI$_SQLSET_WORKSPACE
STATS$SGA WRH$_SGA WRI$_TRACING_ENABLED
STATS$SEG_STAT , STATS$SGASTAT WRH$_SGASTAT
WRH$_SGASTAT_BL
WRH$_SGA_TARGET_ADVICE
STATS$SHARED_POOL_ADVICE WRH$_SHARED_POOL_ADVICE
STATS$SQL_STATISTICS WRH$_SQLSTAT
WRH$_SQLSTAT_BL
STATS$SQLTEXT WRH$_SQLTEXT
WRH$_SQL_BIND_METADATA
STATS$SQL_PLAN WRH$_SQL_PLAN
WRH$_SQL_SUMMARY
STATS$SQL_WORKAREA_HISTOGRAM WRH$_SQL_WORKAREA_HISTOGRAM
WRH$_STAT_NAME
WRH$_STREAMS_APPLY_SUM
WRH$_STREAMS_CAPTURE
WRH$_STREAMS_POOL_ADVICE
WRH$_SYSMETRIC_HISTORY
WRH$_SYSMETRIC_SUMMARY
STATS$SYSSTAT WRH$_SYSSTAT
WRH$_SYSSTAT_BL
STATS$SYSTEM_EVENT WRH$_SYSTEM_EVENT
WRH$_SYSTEM_EVENT_BL
WRH$_SYS_TIME_MODEL
WRH$_SYS_TIME_MODEL_BL
WRH$_TABLESPACE_SPACE_USAGE
WRH$_TABLESPACE_STAT
WRH$_TABLESPACE_STAT_BL
WRH$_TEMPFILE
STATS$TEMPSTATXS WRH$_TEMPSTATXS
WRH$_THREAD
STATS$UNDOSTAT WRH$_UNDOSTAT
WRH$_WAITCLASSMETRIC_HISTORY
STATS$WAITSTAT WRH$_WAITSTAT
WRH$_WAITSTAT_BL

Display partition information for a specified index

Posted by Sagar Patil

Displays partition information for the specified index, or all indexes.

SET LINESIZE 500
SET PAGESIZE 1000
SET FEEDBACK OFF
SET VERIFY OFF

SELECT a.index_name,
       a.partition_name,
       a.tablespace_name,
       a.initial_extent,
       a.next_extent,
       a.pct_increase,
       a.num_rows
FROM   dba_ind_partitions a
WHERE  a.index_name = Decode('BRANCH_STATS_IX','ALL',a.index_name,'BRANCH_STATS_IX')
ORDER BY a.index_name, a.partition_name;

Replace 'BRANCH_STATS_IX' with a valid index name. The following generates a rebuild statement for each partition:

select 'Alter Index '|| index_owner ||'.'|| index_name
       ||' Rebuild Partition '|| partition_name ||' Online;'
from   dba_ind_partitions a
WHERE  a.index_name = Decode('BRANCH_STATS_IX','ALL',a.index_name,'BRANCH_STATS_IX')
ORDER BY a.index_name, a.partition_name;

This should create a script like:
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060201 Online;
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060202 Online;
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060203 Online;
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060204 Online;

Problem : Oracle reported block corruption on data files 21 and 22, mapped to the ABC and XYZ tablespaces.

Get list of all partitions mapped into corrupt tablespaces

SELECT a.index_name,
       a.partition_name,
       a.tablespace_name,
       a.initial_extent,
       a.next_extent,
       a.pct_increase,
       a.num_rows
FROM   dba_ind_partitions a
WHERE  a.tablespace_name in ('ABC','XYZ')
ORDER BY a.index_name, a.partition_name;

Get a script to rebuild all these partitions

SELECT 'Alter Index '|| index_owner ||'.'|| index_name ||' Rebuild Partition '|| partition_name ||' Online;'
FROM   dba_ind_partitions a
WHERE  a.tablespace_name in ('ABC','XYZ')
ORDER BY a.index_name, a.partition_name;

Undo Retention, Undo Optimization

Posted by Sagar Patil

The UNDO_RETENTION parameter does not guarantee retention unless you define the RETENTION GUARANTEE clause at the tablespace level, as sketched below.
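A minimal sketch (the tablespace name is illustrative):

ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
-- and to revert to the default behaviour:
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;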

Actual Undo Size

SELECT   SUM (a.bytes) "UNDO_SIZE"
 FROM   v$datafile a, v$tablespace b, dba_tablespaces c
 WHERE       c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#;

 UNDO_SIZE
----------
 209715200

Undo Blocks per Second

SELECT   MAX (undoblks / ( (end_time - begin_time) * 3600 * 24))
 "UNDO_BLOCK_PER_SEC"
 FROM   v$undostat;

UNDO_BLOCK_PER_SEC
------------------
        3.12166667

DB Block Size

SELECT   TO_NUMBER (VALUE) "DB_BLOCK_SIZE [Byte]"
 FROM   v$parameter
 WHERE   name = 'db_block_size';

DB_BLOCK_SIZE [Byte]
--------------------
                4096

Optimal Undo Retention

209’715’200 / (3.12166667 * 4’096) = 16’401 [Sec]

Using Inline Views, you can do all in one query!

SELECT   d.undo_size / (1024 * 1024) "ACTUAL UNDO SIZE [MByte]",
 SUBSTR (e.VALUE, 1, 25) "UNDO RETENTION [Sec]",
 ROUND (
 (d.undo_size / (TO_NUMBER (f.VALUE) * g.undo_block_per_sec))
 )
 "OPTIMAL UNDO RETENTION [Sec]"
 FROM   (SELECT   SUM (a.bytes) undo_size
 FROM   v$datafile a, v$tablespace b, dba_tablespaces c
 WHERE       c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#) d,
 v$parameter e,
 v$parameter f,
 (SELECT   MAX (undoblks / ( (end_time - begin_time) * 3600 * 24))
 undo_block_per_sec
 FROM   v$undostat) g
 WHERE   e.name = 'undo_retention' AND f.name = 'db_block_size';

ACTUAL UNDO SIZE [MByte]
------------------------
200

UNDO RETENTION [Sec]
--------------------
10800

OPTIMAL UNDO RETENTION [Sec]
----------------------------
16401

Calculate Needed UNDO Size for given Database Activity

SELECT   d.undo_size / (1024 * 1024) "ACTUAL UNDO SIZE [MByte]",
 SUBSTR (e.VALUE, 1, 25) "UNDO RETENTION [Sec]",
 (TO_NUMBER (e.VALUE) * TO_NUMBER (f.VALUE) * g.undo_block_per_sec)
 / (1024 * 1024)
 "NEEDED UNDO SIZE [MByte]"
 FROM   (SELECT   SUM (a.bytes) undo_size
 FROM   v$datafile a, v$tablespace b, dba_tablespaces c
 WHERE       c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#) d,
 v$parameter e,
 v$parameter f,
 (SELECT   MAX (undoblks / ( (end_time - begin_time) * 3600 * 24))
 undo_block_per_sec
 FROM   v$undostat) g
 WHERE   e.name = 'undo_retention' AND f.name = 'db_block_size';

ACTUAL UNDO SIZE [MByte]
------------------------
200
UNDO RETENTION [Sec]
--------------------
10800
NEEDED UNDO SIZE [MByte]
------------------------
131.695313

Do I need to reorganise my table?

Posted by Sagar Patil

It's a tricky question. I'll explain using a 33GB SAP BW driving table called VBAP, which needed major work. In the end I had to rebuild this table using Quest Shareplex.

The VBAP table occupied 3,873,549 blocks = 30.2 GB of space, and the total number of rows was 15,900,000.
So rows per block = 15,900,000 / 3,873,549 blocks ≈ 4 rows per block.

The Oracle analyze stats show an average row length of 1,256 bytes, so for an 8KB block size:

The ideal block count would be 15,900,000 / 6 rows per block (avg row length 1,256 bytes) ≈ 2,650,000 blocks, not 3,873,549 blocks.
Oracle was using about 45% more space than necessary, due to row chaining.

I used the SQL script below to count rows per block for this 16-million-row table. The row distribution below indicates that on average we have chained rows spanning up to 3 database blocks.

select rows_per_block,
       count(*) blocks,
       sum(rows_per_block) sum_rows
from (
  select dbms_rowid.rowid_block_number(rowid),
         count(*) rows_per_block
  from   SAPR3.vbap
  group  by dbms_rowid.rowid_block_number(rowid)
)
group by rows_per_block
order by 1 desc;

select to_char(floor(rows_per_block/10)*10,'fm990')||'''s' rows_per_block,
       count(*) blocks,
       sum(rows_per_block) sum_rows
from (
  select dbms_rowid.rowid_block_number(rowid),
         count(*) rows_per_block
  from   t1
  group  by dbms_rowid.rowid_block_number(rowid)
)
group by to_char(floor(rows_per_block/10)*10,'fm990')||'''s'
order by 1 desc;

ROWS_PER_BLOCK BLOCKS SUM_ROWS
53         1     53
51         4     204
50         3     150
49         6     294
48         5     240
47         19     893
46         35     1610
45         52     2340
44         77     3388
43         143     6149
42         181     7602
41         272     11152
40         387     15480
39         606     23634
38         814     30932
37         1119     41403
36         1475     53100
35         1985     69475
34         2627     89318
33         3472     114576
32         4262     136384
31         5299     164269
30         6662     199860
29         8107     235103

Why are anywhere from 1 to 50 rows allocated in a single block?

It must be due to the bespoke development done on the table. I suspect that when the table was originally populated it held very little data per row, so SAP/Oracle packed many rows into each block; over time new columns were added for the bespoke development, the rows grew, and the updates pushed the table into row chaining. In short, I can see a need to rebuild this table.

% of Rows retrieved thru Row Chaining
http://www.akadia.com/services/ora_chained_rows.html

SELECT name, value FROM v$sysstat WHERE name like '%table%fetch%ro%';

NAME                              VALUE
--------------------------  -----------
table fetch by rowid        68617756592
table fetch continued row       4330753

table fetch by rowid      : number of rows retrieved directly by ROWID
table fetch continued row : number of rows retrieved via row chaining
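Using those two counters, the proportion of row fetches that hit a chained/migrated row can be computed; a sketch:

select round(100 * cont.value / rowid_fetch.value, 4) pct_chained_fetches
from   (select value from v$sysstat where name = 'table fetch continued row') cont,
       (select value from v$sysstat where name = 'table fetch by rowid') rowid_fetch;
-- 4,330,753 / 68,617,756,592 = about 0.006% here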

Flashback Query

Posted by Sagar Patil

Oracle Flashback Query allows users to see a consistent view of the database as it was at a point in the past. This functionality allows comparative reporting over time and recovery from logical corruptions.

Prerequisites

Oracle Flashback Query can only be used if the server is configured to use Automatic Undo Management, rather than traditional rollback segments. The maximum time period that can be flashbacked to is defined using the UNDO_RETENTION parameter in the init.ora file. Alternatively, this parameter can be set using:

ALTER SYSTEM SET UNDO_RETENTION = <seconds>;

Using Flashback Query

Flashback Query is enabled and disabled using the DBMS_FLASHBACK package. The point in time of the flashback can be specified using the SCN or the actual time:

EXECUTE Dbms_Flashback.Enable_At_System_Change_Number(123);
EXECUTE Dbms_Flashback.Enable_At_Time('28-AUG-01 11:00:00');

Once you’ve finished performing all your read-only operations you can turn off flashback query using:

EXECUTE Dbms_Flashback.Disable;

The flashback query is turned off when the session ends if there is no explicit call to the disable procedure. The current system change number can be returned using the Get_System_Change_Number function.
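Capturing the current SCN before risky DML is handy; a small sketch in SQL*Plus:

variable scn number
exec :scn := dbms_flashback.get_system_change_number;
print scn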
Example Data Recovery
Assuming all data was accidentally deleted from the EMPLOYEES table at 9:05AM we could recover it using:

DECLARE
CURSOR c_emp IS
SELECT *
FROM employees;
v_row c_emp%ROWTYPE;
BEGIN
Dbms_Flashback.Enable_At_Time('28-AUG-01 09:00:00');
OPEN c_emp;
Dbms_Flashback.Disable;

LOOP
FETCH c_emp INTO v_row;
EXIT WHEN c_emp%NOTFOUND;
INSERT INTO employees VALUES
(v_row.employee_id, v_row.first_name,
v_row.last_name, v_row.email,
v_row.phone_number, v_row.hire_date,
v_row.job_id, v_row.salary,
v_row.commission_pct, v_row.manager_id,
v_row.department_id, v_row.dn);
END LOOP;
CLOSE c_emp;
COMMIT;
END;
/

Notice that the Flashback Query session is disabled after the cursor is created so that the DML operations can be performed to recover the data.

Restrictions

* The server must be configured to use Automatic Undo Management.
* No DDL or DML can be issued when using Flashback Query.
* Flashback Query does not reverse DDL operations such as DROP, only DML alterations to the data.
* Flashback Query does not apply to code objects (packages, procedures, functions or triggers); if invoked, the current definition will be executed against the flashback data.

Function Based Indexes

Posted by Sagar Patil

Traditionally, performing a function on an indexed column in the where clause of a query guaranteed an index would not be used. Oracle 8i introduced Function Based Indexes to counter this problem. Rather than indexing a column, you index the function on that column, storing the product of the function, not the original column data. When a query is passed to the server that could benefit from that index, the query is rewritten to allow the index to be used. The following code samples give an example of the use of Function Based Indexes:

* Build Test Table
* Build Regular Index
* Build Function Based Index

Build Test Table
First we build a test table and populate it with enough data so that use of an index would be advantageous:

Code:
CREATE TABLE user_data (
id          NUMBER(10)    NOT NULL,
first_name  VARCHAR2(40)  NOT NULL,
last_name   VARCHAR2(40)  NOT NULL);

BEGIN
  FOR cur_rec IN 1 .. 2000 LOOP
    IF MOD(cur_rec, 2) = 0 THEN
      INSERT INTO user_data
      VALUES (cur_rec, 'John' || cur_rec, 'Doe');
    ELSE
      INSERT INTO user_data
      VALUES (cur_rec, 'Jayne' || cur_rec, 'Doe');
    END IF;
    COMMIT;
  END LOOP;
END;
/

ANALYZE TABLE user_data COMPUTE STATISTICS;

At this point the table is not indexed so we would expect a full table scan for any query:

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
   1    0   TABLE ACCESS (FULL) OF 'USER_DATA' (Cost=1 Card=1 Bytes=10)

Build Regular Index
If we now create a regular index on the FIRST_NAME column we see that the index is not used:

CREATE INDEX first_name_idx ON user_data (first_name);
ANALYZE TABLE user_data COMPUTE STATISTICS;

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
   1    0   TABLE ACCESS (FULL) OF 'USER_DATA' (Cost=1 Card=1 Bytes=10)

Build Function Based Index
If we now replace the regular index with a function based index on the FIRST_NAME column we see that the index is used:

DROP INDEX first_name_idx;
CREATE INDEX first_name_idx ON user_data (UPPER(first_name));
ANALYZE TABLE user_data COMPUTE STATISTICS;

ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';
   
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=14)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'USER_DATA' (Cost=2 Card=1 Bytes=14)
   2    1     INDEX (RANGE SCAN) OF 'FIRST_NAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)

The QUERY_REWRITE_INTEGRITY and QUERY_REWRITE_ENABLED parameters must be set or the server will not be able to rewrite the queries, and will therefore not be able to use the new index. These parameters may be set in the Init.ora file if they are needed permanently.

When to Rebuild a B-Tree Index

Posted by Sagar Patil

You have to periodically check your indexes to see if they become skewed and, therefore, good candidates for rebuild.

A skewed index has many records clumped close together on the index tree due to their similar indexed values. When an index is skewed, parts of an index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance. It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.

Here is a sample procedure on how to identify the skewed indexes:

1. Gather statistics on your indexes. For large indexes (over one hundred
thousand records in the underlying table), use ESTIMATE instead of COMPUTE STATISTICS.

For example:   SQL> analyze index A1_PK compute statistics;
Index analyzed.

2. Run the script given below – index_check.sql – to find out how skewed each index is.

This query checks on all indexes that belong to user SCOTT:

SQL> select index_name, blevel,
decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
from dba_indexes
where owner='SCOTT';

INDEX_NAME                                BLEVEL  OK
----------------------------------------  ------  -----------
A1_PK                                             BLEVEL HIGH
A1_UK                                             BLEVEL HIGH
BUDVERPORT_BUDVERPORT2_UK                      1  OK BLEVEL
BUDVERPORT_BV_FK_I                             2  OK BLEVEL
BUDVERPORT_CHAR_CL_FK_I                        1  OK BLEVEL
BUDVERPORT_DIRCTE_FK_I                         3  OK BLEVEL
BUDVERPORT_FUND_TYPE_FK_I                      1  OK BLEVEL
BUDVERPORT_OMB_SUBFCT_FK_I                     1  OK BLEVEL
BUDVERPORT_ORG_FK_I                            0  OK BLEVEL
BUDVERPORT_PL_TITLE_FK_I                       1  OK BLEVEL
BUDVERPORT_RDC_FK_I                            1  OK BLEVEL
S_INVENTORY_PRODID_WARID_PK                       BLEVEL HIGH
S_ITEM_ORDID_ITEMID_PK                            BLEVEL HIGH
S_ITEM_ORDID_PRODID_UK                            BLEVEL HIGH
S_LONGTEXT_ID_PK                                  BLEVEL HIGH
S_ORD_ID_PK                                       BLEVEL HIGH
S_PRODUCT_ID_PK                                   BLEVEL HIGH
S_PRODUCT_NAME_UK                                 BLEVEL HIGH
S_REGION_ID_PK                                    BLEVEL HIGH
S_REGION_NAME_UK                                  BLEVEL HIGH
S_TITLE_TITLE_PK                                  BLEVEL HIGH
S_WAREHOUSE_ID_PK                                 BLEVEL HIGH

3. The BLEVEL (or branch level) is part of the B-tree index format and relates to the number of times Oracle has to narrow its search on the index while searching for a particular record. In some cases, a separate disk hit is requested for each BLEVEL. Prior to 9i, if the BLEVEL was more than 4, it was recommended to rebuild the index. As databases get bigger and bigger, BLEVEL may not be a good indicator of when to rebuild the index; BLEVEL > 4 may instead suggest evaluating whether the partitioning option could help you.

Note: If you do not analyze the index, the index_check.sql script will
show “BLEVEL HIGH” for such an index.

4. Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table. This table does not contain an OWNER column and assumes you are looking for statistics for indexes created by your active session only.

SQL> analyze index SCOTT.ORG_PK validate structure;
Index analyzed.

SQL> select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
            (LF_ROWS-DISTINCT_KEYS)*100/decode(LF_ROWS,0,1,LF_ROWS) DISTINCTIVENESS
     from index_stats where NAME = '&index_name';

Enter value for index_name: ORG_PK

PCT_DELETED DISTINCTIVENESS
----------- ---------------
          0               0

The PCT_DELETED column shows what percent of leaf entries (index entries) have been deleted and remain unfilled. The more deleted entries exist on an index, the more unbalanced the index becomes. If PCT_DELETED is 20% or higher, the index is a candidate for rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value is higher than 10%. Leaving indexes with high PCT_DELETED unrebuilt might cause excessive redo allocation on some systems.
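When an index does cross these thresholds, the rebuild itself is straightforward (ONLINE keeps the underlying table available for DML; the index name is from the example above):

ALTER INDEX scott.org_pk REBUILD ONLINE;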
The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average.

For example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in
(10000-9000) x 100 / 10000 = 10. This shows a good distribution of values.

If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 /10000 = 99.98. This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild but good candidates for bitmapped indexes.

Index Monitoring

Posted by Sagar Patil

Index monitoring could be initiated and stopped using ALTER INDEX syntax shown below.

ALTER INDEX my_index_i MONITORING USAGE;
ALTER INDEX my_index_i NOMONITORING USAGE;

Information about the index usage can be displayed using the V$OBJECT_USAGE view.

SELECT index_name,
table_name,
monitoring,
used,
start_monitoring,
end_monitoring
FROM v$object_usage
WHERE index_name = 'MY_INDEX_I'
ORDER BY index_name; 

The V$OBJECT_USAGE view does not contain an OWNER column, so you must log on as the object owner to see the usage data.

Foreign Key Indexes
To quote the Oracle Database Concepts manual:
“You should almost always index foreign keys. The only exception is when the matching unique or primary key is never updated or deleted.”

When a foreign key is unindexed, DML on the parent primary key results in a share row exclusive table lock (or share-subexclusive table lock, SSX) on the child table, preventing DML from other transactions against the child table. If the DML affects several rows in the parent table, the lock on the child table is obtained and released immediately for each row in turn. Despite the speed of the lock-release process, this can cause significant amounts of contention on the child table during periods of heavy update/delete activity on the parent table.

When a foreign key is indexed, DML on the parent primary key results in a row share table lock (or subshare table lock, SS) on the child table. This type of lock prevents other transactions from issuing whole table locks on the child table, but does not block DML on either the parent or the child table. Only the rows relating to the parent primary key are locked in the child table.
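A first-pass script to spot unindexed foreign keys (my sketch, covering single-column constraints only; multi-column foreign keys need a comparison against the leading columns of each index):

select cc.owner, cc.table_name, cc.column_name, cc.constraint_name
from   dba_constraints c, dba_cons_columns cc
where  c.owner           = cc.owner
and    c.constraint_name = cc.constraint_name
and    c.constraint_type = 'R'                 -- referential (foreign key) constraints
and    not exists (
         select 1
         from   dba_ind_columns ic
         where  ic.table_owner     = cc.owner
         and    ic.table_name      = cc.table_name
         and    ic.column_name     = cc.column_name
         and    ic.column_position = 1);       -- FK column is the leading index column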

Building a virtual index using the NOSEGMENT clause

Posted by Sagar Patil

Let’s create a virtual index

07:59:12 orcl> create index hr.emp2_emp_id_virtual on hr.employees2(employee_id) nosegment;
Index created.

I am going to set a hidden startup parameter “_use_nosegment_indexes” to TRUE so that our session will recognize our new virtual index.

08:00:09 orcl> alter session set "_use_nosegment_indexes" = true;
Running our statement again to see if it will use our new virtual index. Check out the access path below. The optimizer has chosen our virtual index.

select employee_id, a.department_id, b.department_name from
hr.departments b, hr.employees2 a where
a.department_id = b.department_id
and employee_id = 203

Execution Plan
———————————————————-
Plan hash value: 2516110069
—————————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 25 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 25 | 3 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES2 | 1 | 9 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | EMP2_EMP_ID_VIRTUAL | 1 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 1 | 16 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | DEPT_ID_PK | 1 | | 0 (0)| 00:00:01 |
—————————————————————————————————-

If I set "_use_nosegment_indexes" to FALSE, the optimizer does NOT choose the virtual index.

08:01:09 orcl> alter session set "_use_nosegment_indexes" = false;
Session altered.

08:01:33 orcl> select employee_id, a.department_id, b.department_name
08:01:47 2 from
08:01:47 3 hr.departments b, hr.employees2 a
08:01:47 4 where
08:01:47 5 a.department_id = b.department_id
08:01:47 6 and employee_id = 203;

Execution Plan
———————————————————-
Plan hash value: 2641883601
——————————————————————————————–

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 25 | 818 (3)| 00:00:10 |
| 1 | NESTED LOOPS | | 1 | 25 | 818 (3)| 00:00:10 |
|* 2 | TABLE ACCESS FULL | EMPLOYEES2 | 1 | 9 | 817 (3)| 00:00:10 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 1 | 16 | 1 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | DEPT_ID_PK | 1 | | 0 (0)| 00:00:01 |
——————————————————————————————–

Executing DBMS_STATS to gather statistics on both the virtual and standard index. I have run tests with statistics and without and it does seem to affect virtual index access paths.

08:21:57 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_NON_VIRTUAL');
PL/SQL procedure successfully completed.

08:23:10 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_VIRTUAL');

PL/SQL procedure successfully completed.

Looking for information on indexes built on the EMPLOYEES2 table.
Oracle returns a row for the standard index but not the virtual index.

08:20:31 orcl> select index_name, last_analyzed from dba_indexes where table_name = 'EMPLOYEES2';

INDEX_NAME LAST_ANAL
—————————— ———
EMP2_EMP_ID_NON_VIRTUAL 31-MAY-07

Determining if we can find the virtual index in DBA_SEGMENTS: no success.

08:26:09 orcl> select segment_name, segment_type from dba_segments where segment_name like 'EMP2%';
SEGMENT_NAME              SEGMENT_TYPE
------------------------- ------------------
EMP2_EMP_ID_NON_VIRTUAL   INDEX

Looking for the virtual index in DBA_OBJECTS. Finally, we find some sort of evidence that the virtual index exists in the database!

08:30:21 orcl> col object_name for a30
08:30:29 orcl> r
1 select object_name, object_type, created, status, temporary
2* from dba_objects where object_name like 'EMP2%'

OBJECT_NAME OBJECT_TYPE CREATED STATUS T
—————————— ——————- ——— ——- –
EMP2_EMP_ID_NON_VIRTUAL INDEX 31-MAY-07 VALID N
EMP2_EMP_ID_VIRTUAL INDEX 31-MAY-07 VALID N

Selecting the Best Join Order : Methods Oracle Uses to Join Tables

Posted by Sagar Patil

Choose the join order that gives you the least number of rows to be joined to the other tables. That is, if you're joining three tables, the one with the most restrictive filter should be joined first to one of the other two tables.

When a SQL statement includes a join between two or more tables, the order in which the tables are joined is extremely important.
– The driving table in a join is the first table that comes after the WHERE clause.
– The driving table in the join should contain the filter that will eliminate the most rows (see the sketch below).
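If the optimizer does not pick the order you want, the ORDERED hint forces the join order to follow the FROM clause; a sketch (table and column names are illustrative):

-- Drive from the table with the most restrictive filter.
SELECT /*+ ORDERED */ b.*
FROM   small_filtered s, big_table b
WHERE  s.id = b.s_id
AND    s.status = 'A';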

Nested loops Join
Sort-merge Join
Hash Join
Cluster Join

Nested Loops Joins
For each relevant row in the first table, find all matching rows in the other table.
Suppose you have a list of all public officials in your city and a telephone book for your city. How would you go about getting the names and telephone numbers of each council member?

You could:
– Go down the list of public officials
– Discard all names except council members
– Look up the remaining names in the telephone book one at a time

Sort-Merge Joins
Sort all of the relevant rows in the first table by the join key, likewise sort the second table, and then merge the two together.

Suppose you and your fiancée each have a huge compact disc collection. You buy a house together and decide to combine collections. How would you eliminate the duplicate CDs?

You could:
– Alphabetize each collection by artist
– Compare collections side-by-side
– Extract the discs that appear in both

Hash Join
Build a hash table in memory of all relevant rows from the first table. For each relevant row in the second table, probe the hash table to find the matches.

A hash join is like a nested loops join, except that preparatory work is done to one of the two tables to speed up the join process.

Consider the nested loops join example with the council members' phone numbers, and imagine that the telephone book is not alphabetized by name.

Cluster Joins
For each relevant row in the first table, find all matching rows in the other table by looking at the cluster index. The matching rows will probably be conveniently located in the same data block.

Consider a stack of purchase orders.
Each PO can have any number of items on it. When you pick up one PO, you have easy access to all of its items.

 

Sid from Unix Process ID

Posted by Sagar Patil

Here is a situation: this afternoon I saw CPU usage go through the roof on a live system. There are about 6 Oracle instances, and one is chewing more than 90% of the CPU time.

– I want to know which instance/process is the culprit.

  PID USERNAME PRI NICE  SIZE   RES STATE   TIME    CPU COMMAND
638523 oracle    52    0 1360M  503M run   574:22 99.10% oracle
958998 oracle    42    0 1363M  506M run   518:12 46.20% oracle
982071 oracle    42    0 1360M  502M WAIT  618:58 38.40% oracle

 

select s.username, s.status,  s.sid,     s.serial#,
       p.spid,     s.machine, s.process, s.lockwait
from   v$session s, v$process p
where  p.spid     = &oracle_pid
and    s.paddr    = p.addr;

Supply 958998 (the Unix PID from the top output above) for &oracle_pid:

USERNAME    STATUS    SID    SERIAL#    SPID    MACHINE    PROCESS    LOCKWAIT
SYS         ACTIVE    241    38945      958998   

Find out SGA PGA Size

Posted by Sagar Patil

SET LINESIZE 145
SET PAGESIZE 9999

COLUMN sid                     FORMAT 999            HEADING ‘SID’
COLUMN oracle_username         FORMAT a12            HEADING ‘Oracle User’     JUSTIFY right
COLUMN os_username             FORMAT a9             HEADING ‘O/S User’        JUSTIFY right
COLUMN session_program         FORMAT a18            HEADING ‘Session Program’ TRUNC
COLUMN session_machine         FORMAT a8             HEADING ‘Machine’         JUSTIFY right TRUNC
COLUMN session_pga_memory      FORMAT 9,999,999,999  HEADING ‘PGA Memory’
COLUMN session_pga_memory_max  FORMAT 9,999,999,999  HEADING ‘PGA Memory Max’
COLUMN session_uga_memory      FORMAT 9,999,999,999  HEADING ‘UGA Memory’
COLUMN session_uga_memory_max  FORMAT 9,999,999,999  HEADING ‘UGA Memory MAX’

SELECT
s.sid                sid
, lpad(s.username,12)  oracle_username
, lpad(s.osuser,9)     os_username
, s.program            session_program
, lpad(s.machine,8)    session_machine
, (select ss.value from v$sesstat ss, v$statname sn
   where ss.sid = s.sid and
         sn.statistic# = ss.statistic# and
         sn.name = 'session pga memory')        session_pga_memory
, (select ss.value from v$sesstat ss, v$statname sn
   where ss.sid = s.sid and
         sn.statistic# = ss.statistic# and
         sn.name = 'session pga memory max')    session_pga_memory_max
, (select ss.value from v$sesstat ss, v$statname sn
   where ss.sid = s.sid and
         sn.statistic# = ss.statistic# and
         sn.name = 'session uga memory')        session_uga_memory
, (select ss.value from v$sesstat ss, v$statname sn
   where ss.sid = s.sid and
         sn.statistic# = ss.statistic# and
         sn.name = 'session uga memory max')    session_uga_memory_max
FROM
v$session  s
ORDER BY session_pga_memory DESC
/

SID  Oracle User  O/S User Session Program     Machine     PGA Memory PGA Memory Max     UGA Memory UGA Memory MAX
---- ------------ --------- ------------------ -------- -------------- -------------- -------------- --------------
14          SYS    oracle sqlplus@bmau32.uk. bmau32.u     14,040,528     14,040,528        209,312        209,312
6                 oracle oracle@bmau32.uk.p bmau32.u     10,670,592     10,670,592         78,496         78,496
5                 oracle oracle@bmau32.uk.p bmau32.u     10,378,816     10,378,816         78,496         78,496
31          SYS                                            5,504,568     26,138,992      4,856,512     24,737,312
13                 oracle oracle@bmau32.uk.p bmau32.u      4,660,064      4,660,064         78,496         78,496
12                 oracle oracle@bmau32.uk.p bmau32.u      4,618,136      4,618,136         78,496         78,496
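
The script above reports per-session PGA/UGA usage. For the instance-wide picture, the fixed views V$SGA and V$PGASTAT (9i and later, with automatic PGA management) can be queried directly; a quick sketch:

-- SGA component sizes, in bytes
select * from v$sga;

-- Instance-wide PGA statistics
select name, value, unit from v$pgastat;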

Statspack Scripts

Posted by Sagar Patil

Installation
The following scripts must be run as a user with SYSDBA privilege:
* SPCREATE.SQL: Creates entire Statspack environment (calls SPCUSR.SQL, SPCTAB.SQL, SPCPKG.SQL)
* SPDROP.SQL: Drops entire Statspack environment (calls SPDTAB.SQL, SPDUSR.SQL)

The following scripts are run as a user with SYSDBA privilege by the preceding calling scripts:
* SPDTAB.SQL: Drops Statspack tables
* SPDUSR.SQL: Drops the Statspack user (PERFSTAT)

The following scripts are run as PERFSTAT by the preceding calling scripts:
* SPCUSR.SQL: Creates the Statspack user (PERFSTAT)
* SPCTAB.SQL: Creates Statspack tables
* SPCPKG.SQL: Creates the Statspack package

Scripts for Statspack Reporting and Automation

The following scripts must be run as PERFSTAT:
* SPREPORT.SQL: Generates a Statspack report
* SPREPSQL.SQL: Generates a Statspack SQL report for a specified SQL hash value
* SPREPINS.SQL: Generates a Statspack report for the database and instance specified
* SPAUTO.SQL: Automates Statspack statistics collection (using DBMS_JOB)

Scripts for Statspack Performance Data Maintenance
The following scripts must be run as PERFSTAT:
* SPPURGE.SQL: Purges a limited range of Snapshot IDs for a given database instance.
* SPTRUNC.SQL: Truncates all performance data in Statspack tables
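
A typical end-to-end workflow with these scripts might look like the following sketch (paths assume the standard ?/rdbms/admin location):

-- As SYSDBA: install Statspack
@?/rdbms/admin/spcreate.sql

-- As PERFSTAT: take a manual snapshot
exec statspack.snap;

-- As PERFSTAT: schedule hourly snapshots via DBMS_JOB
@?/rdbms/admin/spauto.sql

-- As PERFSTAT: generate a report (prompts for begin/end snapshot IDs)
@?/rdbms/admin/spreport.sql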

Tuning SQL to drop execution cost

Posted by Sagar Patil
SELECT n.MSISDN,
(SELECT ptc2.PRIMARY_ACCOUNT_NUMBER
FROM p_topup_cards ptc2
WHERE ptc2.NUMR_MSISDN = n.MSISDN
--AND ptc2.CARD_TYPE = 1
AND ptc2.PRIMARY_ACCOUNT_NUMBER LIKE '894428%'
AND ROWNUM < 2) pan
FROM numbers n  ,p_number_history pnh
WHERE n.MSISDN = pnh.NUMR_MSISDN
AND n.STATUS = 'A'
AND n.Barred = 'N'
AND n.spid_spid = '416'
--AND n.first_transaction IS NOT NULL
--AND pnh.END_TIMESTAMP IS NULL
AND pnh.PLFM_PLTP_CODE = 'IN'
AND ROWNUM <= 2000
Plan
SELECT STATEMENT  FIRST_ROWS Cost: 758,319 Bytes: 72,000  Cardinality: 2,000
3 COUNT STOPKEY
2 TABLE ACCESS BY INDEX ROWID LIVEUSER.PREPAY_TOPUP_CARDS Cost: 6  Bytes: 32  Cardinality: 1
1 INDEX RANGE SCAN UNIQUE LIVEUSER.PPTC_PK Cost: 4  Cardinality: 1
8 COUNT STOPKEY
7 NESTED LOOPS  Cost: 758,319 Bytes: 8,591,616  Cardinality: 238,656
5 TABLE ACCESS BY INDEX ROWID LIVEUSER.NUMBERS Cost: 46,110  Bytes: 4,748,060  Cardinality: 237,403
4 INDEX RANGE SCAN NON-UNIQUE LIVEUSER.NUMR_SPID_FK_I Cost: 3,682  Cardinality: 949,610
6 INDEX RANGE SCAN UNIQUE LIVEUSER.PFM_NUM_HS_PK Cost: 3  Bytes: 16  Cardinality: 1

Initial Analysis
This SQL needs to be rewritten to avoid the join and, above all, the predicate “n.spid_spid = '416'”.

Why?
Even the simple statement “select * from numbers where spid_spid = '416';” on its own shows a cost of 46K+.

Plan
SELECT STATEMENT FIRST_ROWS Cost: 46,110 Bytes: 40,833,230 Cardinality: 949,610
2 TABLE ACCESS BY INDEX ROWID CPI_SYSTEM.NUMBERS Cost: 46,110 Bytes: 40,833,230 Cardinality: 949,610
1 INDEX RANGE SCAN NON-UNIQUE CPI_SYSTEM.NUMR_SPID_FK_I Cost: 3,682 Cardinality: 949,610
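
Before rewriting, it is worth confirming just how (un)selective spid_spid = '416' really is; the plan above estimates roughly 950K matching rows. A quick sketch:

-- What fraction of NUMBERS matches '416'?
select count(*) total_rows,
       count(case when spid_spid = '416' then 1 end) matching_rows
from   numbers;

If '416' matches a large fraction of the table, no index will save this predicate and the query itself has to change.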

How optimizer stats with Histograms can change execution Plan

Posted by Sagar Patil

Optimizer stats can play a key part in deciding execution plan. Here is an example

Table “RSODSACTREQ” has 313,783 rows in total.

Database with NO histograms:
For the following four statements the SQL plan is always the same, i.e. a FULL TABLE SCAN.

1. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPNPCAMP'
2. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZCUS_ACT'
3. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZTCIT_OH'
4. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPA0037A'

Execution Plan :
SELECT STATEMENT Optimizer Mode=CHOOSE 1 350
SORT AGGREGATE 1 16
TABLE ACCESS FULL SAPLIVE.RSODSACTREQ 2 K 39 K 350

Database with histograms:
1. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPNPCAMP'

Here Oracle predicts that a large number of rows (262K) will be returned and still prefers a full table scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 350
SORT AGGREGATE 1 16
TABLE ACCESS FULL SAPLIVE.RSODSACTREQ 262 K 3 M 350

2. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZCUS_ACT'
For this value Oracle predicts far fewer rows and uses an index scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 19
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 93 1 K 19
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~STP 93 3

 

3. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZTCIT_OH'
For this value Oracle predicts far fewer rows and uses an index scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 9
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 35 560 9
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~STP 35 3

4. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPA0037A'
For this value Oracle predicts far fewer rows and uses an index scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 4
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 1 16 4
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~0 1 3
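
The histogram that drives plans 2-4 above can be gathered with DBMS_STATS; a minimal sketch (the bucket count of 254 is an assumption, adjust it to the number of distinct ODSOBJECT values):

begin
  dbms_stats.gather_table_stats(
    ownname    => 'SAPLIVE',
    tabname    => 'RSODSACTREQ',
    method_opt => 'FOR COLUMNS ODSOBJECT SIZE 254',  -- histogram on the skewed column
    cascade    => TRUE);
end;
/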

 


AWR : How to locate resource limits

Posted by Sagar Patil

select A.snap_id AWR_SNAP_ID, A.INSTANCE_NUMBER,
to_char(B.BEGIN_INTERVAL_TIME,'DD-MON-YYYY HH24:MI:SS') AWR_START_TIME,
to_char(B.END_INTERVAL_TIME,'DD-MON-YYYY HH24:MI:SS') AWR_END_TIME,
A.RESOURCE_NAME, MAX_UTILIZATION
from sys.wrh$_resource_limit A, sys.wrm$_snapshot B
where A.resource_name like '%processes%'
and A.snap_id=b.snap_id
and A.INSTANCE_NUMBER= B.INSTANCE_NUMBER
and A.INSTANCE_NUMBER= 1
and B.BEGIN_INTERVAL_TIME > sysdate - 12/24;


The different resources recorded are:

RESOURCE_NAME
——————————
gcs_resources
processes
enqueue_locks
max_rollback_segments
parallel_max_servers
ges_procs
sessions
gcs_shadows

AWR_SNAP_ID INSTANCE_NUMBER AWR_START_TIME AWR_END_TIME RESOURCE_NAME MAX_UTILIZATION
7964 1 28/05/2010 16:00 28/05/2010 16:56 processes 146
7963 1 28/05/2010 15:00 28/05/2010 16:00 processes 146
7962 1 28/05/2010 14:00 28/05/2010 15:00 processes 146
7961 1 28/05/2010 13:00 28/05/2010 14:00 processes 146
7960 1 28/05/2010 12:00 28/05/2010 13:00 processes 146
7959 1 28/05/2010 11:00 28/05/2010 12:00 processes 146
7958 1 28/05/2010 10:00 28/05/2010 11:00 processes 146
7957 1 28/05/2010 09:00 28/05/2010 10:00 processes 146
7956 1 28/05/2010 08:00 28/05/2010 09:00 processes 146
7955 1 28/05/2010 07:00 28/05/2010 08:00 processes 146
7954 1 28/05/2010 06:00 28/05/2010 07:00 processes 146
7953 1 28/05/2010 05:00 28/05/2010 06:00 processes 146
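
Instead of the underlying SYS.WRH$_/WRM$_ tables, the same data is exposed through the documented DBA_HIST_RESOURCE_LIMIT view; an equivalent sketch:

select snap_id, instance_number, resource_name,
       max_utilization, limit_value
from   dba_hist_resource_limit
where  resource_name = 'processes'
and    instance_number = 1
order  by snap_id;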

Which sessions are consuming I/O bandwidth (returns a SID list)

Posted by Sagar Patil

select
sid,
username,
round(100 * total_user_io/total_io,2) tot_io_pct
from
(select
b.sid sid,
nvl(b.username,p.name) username,
sum(value) total_user_io
from
sys.v_$statname c,
sys.v_$sesstat a,
sys.v_$session b,
sys.v_$bgprocess p
where
a.statistic#=c.statistic# and
p.paddr (+) = b.paddr and
b.sid=a.sid and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)')
group by
b.sid, nvl(b.username,p.name)),
(select
sum(value) total_io
from
sys.v_$statname c,
sys.v_$sesstat a
where
a.statistic#=c.statistic# and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)')
order by
3 desc;
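
For a quicker (if less detailed) view, V$SESS_IO exposes per-session I/O counters directly; the values are cumulative since logon. A minimal sketch:

select sid, block_gets, consistent_gets, physical_reads, block_changes
from   v$sess_io
order  by physical_reads desc;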

Analyze database for right statistics

Posted by Sagar Patil

Different DBAs have different views on what sampling percentage to use for analyze. The Oracle documentation recommends a full analyze of the entire database, which is not possible for most live systems running into terabytes.
In the past I had a performance issue on a database just over 300 GB. One table, ORDER_DATA, had 400 million rows. That single table pulled the entire system down a number of times simply because it wasn’t properly analyzed and Oracle didn’t know the data distribution within it.
I was struggling to understand where things were going wrong: we were analyzing the entire table every night, just not with histograms (hash buckets), and surprisingly the SQL execution was using the right index.

After spending weeks investigating the issue, I reanalyzed the table with the DBMS_STATS histogram options, and SQL that used to take between 15 and 60 minutes started running in less than 100 milliseconds.

What to look for?
– First check STATSPACK and find out the most active tables.
– Analyze the most active tables once a week with 10-15% sampling.
– For BIG tables start with 1% sampling and build it up over a period of time.
– I also observed that adding the parallel option can reduce the time taken significantly, as in the examples below.

-- Added to 9i init.ora
-- parallel_automatic_tuning=true
-- parallel_max_servers=16
-- parallel_min_servers=4
-- Changed percent to 1 for all indexed columns; changed degree to 16 from 10
begin
dbms_stats.gather_table_stats(ownname=>'USER',tabname=>'TABLE_NAME',
estimate_percent => 1,method_opt=>'for all indexed columns',
degree=>16,CASCADE=>TRUE);
end;
/

begin
dbms_stats.gather_schema_stats(ownname=>'USER',
estimate_percent => 20,method_opt=>'for all indexed columns',
degree=>16,CASCADE=>TRUE);
end;
/

Other Examples         

GATHER_DATABASE_STATS(estimate_percent,block_sample,method_opt,degree, granularity,cascade,stattab,statid, options,statown,gather_sys,no_invalidate,gather_temp,gather_fixed,stattype);
GATHER_INDEX_STATS (ownname,indname,partname,estimate_percent,stattab,statid,statown,degree,granularity,no_invalidate,stattype);
GENERATE_STATS(ownname,objname,organized);
GATHER_SYSTEM_STATS (gathering_mode,interval,stattab,statid,statown);
GATHER_TABLE_STATS (ownname,tabname,partname,estimate_percent,block_sample,method_opt,degree,granularity,cascade,stattab,statid,statown,no_invalidate,stattype);
GATHER_SCHEMA_STATS(ownname,estimate_percent,block_sample,method_opt,degree,granularity,cascade, stattab,statid,options,statown,no_invalidate,gather_temp,gather_fixed);
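
If tables are monitored (ALTER TABLE ... MONITORING in 9i; automatic in 10g), GATHER_SCHEMA_STATS can be limited to objects whose statistics have gone stale, which avoids re-analyzing everything every night. A sketch, with the schema name 'USER' as a placeholder:

begin
  dbms_stats.gather_schema_stats(
    ownname          => 'USER',
    options          => 'GATHER STALE',               -- only objects with stale stats
    estimate_percent => dbms_stats.auto_sample_size,  -- let Oracle pick the sample size
    degree           => 16,
    cascade          => TRUE);
end;
/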

How to Backup/Export Oracle Optimizer Statistics into Table

Posted by Sagar Patil

Exporting and Importing Statistics

Caveat: When moving the statistics table between databases with exp/imp, always export and import as the schema user who owns the table. I wasted a week exporting as DBA on behalf of user XYZ and then importing into a different system under a different username.

Statistics can be exported from the data dictionary into user-owned tables and imported back. This enables you to create multiple versions of statistics for the same schema. You can also copy statistics from one database to another.

You may want to copy statistics from a production database to a scaled-down test database to look at SQL execution plans.

Note:
Exporting and importing statistics is a distinct concept from the EXP and IMP utilities of the database.

The DBMS_STATS export and import procedures move statistics between the data dictionary and a user statistics table; they do not themselves produce EXP/IMP dump files. EXP/IMP come into play only when you need to carry that statistics table from one database to another.

Before exporting statistics, you first need to create a table for holding the statistics.

This statistics table is created using the procedure  DBMS_STATS.CREATE_STAT_TABLE.

After this table is created, then you can export statistics from the data dictionary into your statistics table using the DBMS_STATS.EXPORT_*_STATS procedures.

The statistics can then be imported using the DBMS_STATS.IMPORT_*_STATS procedures.

Note that the optimizer does not use statistics stored in a user-owned table. The only statistics used by the optimizer are the statistics stored in the data dictionary.

In order to have the optimizer use the statistics in a user-owned table, you must import those statistics into the data dictionary using the statistics import procedures.

In order to move statistics from one database to another, you must first export the statistics on the first database, then copy the statistics table to the second database, using the EXP and IMP utilities or other mechanisms, and finally import the statistics into the second database.

The EXP and IMP utilities export and import optimizer statistics from the database along with the table. One exception is that statistics are not exported with the data if a table has columns with system-generated names.

Restoring Statistics Versus Importing or Exporting Statistics

The functionality for restoring statistics is similar in some respects to the functionality of importing and exporting statistics.

In general, you should use the restore capability when:

  • You want to recover older versions of the statistics. For example, to restore the optimizer behavior to an earlier date.
  • You want the database to manage the retention and purging of statistics histories.

You should use EXPORT/IMPORT_*_STATS procedures when:

  • You want to experiment with multiple sets of statistics and change the values back and forth.
  • You want to move the statistics from one database to another database. For example, moving statistics from a production system to a test system.
  • You want to preserve a known set of statistics for a longer period of time than the desired retention date for restoring statistics.

1. Create the statistics table.

-- Generic form
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCHEMA_NAME', stat_tab => 'STATS_TABLE',
tblspace => 'STATS_TABLESPACE');

-- For 10g
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYSTEM', stat_tab => 'STATS_TABLE');

-- For 9i and earlier
begin
DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATS_TABLE');
end;
/

2. Export statistics to statistics table

EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA' ,'STATS_TABLE',NULL,'SYSTEM');

3. Import statistics into the data dictionary.
exec DBMS_STATS.IMPORT_SCHEMA_STATS('NEW_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

4. Drop the statistics table.
exec DBMS_STATS.DROP_STAT_TABLE('SYSTEM','STATS_TABLE');
A complete 9i example:
begin
DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATISTICS_TABLE_060307');
end;
/
begin
DBMS_STATS.EXPORT_SCHEMA_STATS('SAPBP2','STATISTICS_TABLE_060307',NULL,'DBA_ADMIN');
end;
/
SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS('SAGAR','STATISTICS_TABLE_060307',NULL,'SAGAR');
PL/SQL procedure successfully completed.

Monitoring the export process:
select count(*) from &STATS_NAME;
The stats table can grow exponentially, so keep an eye on its size while the export is active:
select sum(bytes)/1000000 from dba_extents where segment_name='&TABLE_NAME';

Sample statistics from an SAP BW system of 4.2 terabytes:
Time elapsed for export : 40 mins
Total stats table size  : 2 GB
Time elapsed for import :

How to Validate that Stats are reflected after exp/imp

select TABLE_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS,
AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN
from dba_tables where owner='&USER';
Run this on both databases and check that the figures are very similar.

Locate Hard-Hitting SQL from the Statspack Repository

Posted by Sagar Patil

1. Log in as the PERFSTAT user on the database. The queries below will not work unless you are connected as PERFSTAT.

2. Find the DBID using “select dbid from stats$sql_summary”.

3. Locate MIN(SNAP_ID) (pBgnSnap) and MAX(SNAP_ID) (pEndSnap) for the window of interest:

SELECT   MIN (snap_id),
 MAX (snap_id),
 MIN (snap_time),
 MAX (snap_time)
 FROM   stats$snapshot
 WHERE       TO_NUMBER (TO_CHAR (snap_time, 'HH24')) > 10
 AND TO_NUMBER (TO_CHAR (snap_time, 'HH24')) < 13
 AND TRUNC (snap_time) = TRUNC (SYSDATE)
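
The queries below use SQL*Plus bind variables (:pDbId, :pInstNum, :pBgnSnap, :pEndSnap). They can be set up once per session; the values here are only placeholders:

variable pDbId    number
variable pInstNum number
variable pBgnSnap number
variable pEndSnap number

begin
  :pDbId    := 1234567890;  -- DBID from step 2 (placeholder)
  :pInstNum := 1;
  :pBgnSnap := 100;         -- MIN(snap_id) from step 3 (placeholder)
  :pEndSnap := 110;         -- MAX(snap_id) from step 3 (placeholder)
end;
/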

Show All SQL Stmts ordered by Logical Reads

SELECT   e.hash_value "E.HASH_VALUE",
 e.module "Module",
 e.buffer_gets - NVL (b.buffer_gets, 0) "Buffer Gets",
 e.executions - NVL (b.executions, 0) "Executions",
 ROUND (
 DECODE (
 (e.executions - NVL (b.executions, 0)),
 0,
 TO_NUMBER (NULL),
 (e.buffer_gets - NVL (b.buffer_gets, 0))
 / (e.executions - NVL (b.executions, 0))
 ),
 3
 )
 "Gets / Execution",
 ROUND (  100
 * (e.buffer_gets - NVL (b.buffer_gets, 0))
 / sp920.getGets (:pDbID,
 :pInstNum,
 :pBgnSnap,
 :pEndSnap,
 'NO'), 3)
 "Percent of Total",
 ROUND ( (e.cpu_time - NVL (b.cpu_time, 0)) / 1000000, 3) "CPU (s)",
 ROUND ( (e.elapsed_time - NVL (b.elapsed_time, 0)) / 1000000, 3)
 "Elapsed (s)",
 ROUND (e.fetches - NVL (b.fetches, 0)) "Fetches",
 sp920.getSQLText (e.hash_value, 400) "SQL Statement"
 FROM   stats$sql_summary e, stats$sql_summary b
 WHERE       b.snap_id(+) = :pBgnSnap
 AND b.dbid(+) = e.dbid
 AND b.instance_number(+) = e.instance_number
 AND b.hash_value(+) = e.hash_value
 AND b.address(+) = e.address
 AND b.text_subset(+) = e.text_subset
 AND e.snap_id = :pEndSnap
 AND e.dbid = :pDbId
 AND e.instance_number = :pInstNum
ORDER BY   3 DESC

Show SQL Stmts where SQL_TEXT matches a given pattern (LIKE '%ZPV_DATA%' in this example)
SELECT   e.hash_value "E.HASH_VALUE",
 e.module "Module",
 e.buffer_gets - NVL (b.buffer_gets, 0) "Buffer Gets",
 e.executions - NVL (b.executions, 0) "Executions",
 ROUND (
 DECODE (
 (e.executions - NVL (b.executions, 0)),
 0,
 TO_NUMBER (NULL),
 (e.buffer_gets - NVL (b.buffer_gets, 0))
 / (e.executions - NVL (b.executions, 0))
 ),
 3
 )
 "Gets / Execution",
 ROUND (  100
 * (e.buffer_gets - NVL (b.buffer_gets, 0))
 / sp920.getGets (:pDbID,
 :pInstNum,
 :pBgnSnap,
 :pEndSnap,
 'NO'), 3)
 "Percent of Total",
 ROUND ( (e.cpu_time - NVL (b.cpu_time, 0)) / 1000000, 3) "CPU (s)",
 ROUND ( (e.elapsed_time - NVL (b.elapsed_time, 0)) / 1000000, 3)
 "Elapsed (s)",
 ROUND (e.fetches - NVL (b.fetches, 0)) "Fetches",
 sp920.getSQLText (e.hash_value, 400) "SQL Statement"
 FROM   stats$sql_summary e, stats$sql_summary b
 WHERE       b.snap_id(+) = :pBgnSnap
 AND b.dbid(+) = e.dbid
 AND b.instance_number(+) = e.instance_number
 AND b.hash_value(+) = e.hash_value
 AND b.address(+) = e.address
 AND b.text_subset(+) = e.text_subset
 AND e.snap_id = :pEndSnap
 AND e.dbid = 2863128100
 AND e.instance_number = :pInstNum
 AND sp920.getSQLText (e.hash_value, 400) LIKE '%ZPV_DATA%'
ORDER BY   3 DESC

Locate Server Workload from Statspack for Past Days

Posted by Sagar Patil

Change a.statistic# to the value for the statistic you want; look the number up by name first (see “Locate the kind of statistic…” below), as statistic numbers can differ between Oracle versions.

Stats for Working Hours

select to_char(trunc(b.snap_time),'DD-MM-YYYY') ,statistic#,name, sum(value) from STATS$SYSSTAT A, stats$snapshot B
where a.snap_id=b.snap_id
and trunc(b.snap_time) > trunc(sysdate -30)
and to_char(b.SNAP_TIME,'HH24') > 8
and to_char(b.SNAP_TIME,'HH24') <18
and a.statistic#=54
group by trunc(b.snap_time) ,statistic#,name
order by trunc(b.snap_time)

Locate the kind of statistic you want to pull from Statspack:

select * from STATS$SYSSTAT where name like '%XXX%';
9 session logical reads
Physical Reads
54 Physical Reads
56 Physical reads direct
58 physical read bytes
39 physical read total bytes
42 physical write total bytes
66 physical write bytes
66 physical writes
CPU Related
355 OS Wait-cpu (latency) time
328 parse time cpu
8 recursive cpu usage
Rollback Related -
176 transaction tables consistent read rollbacks
180 rollbacks only - consistent read gets
181 cleanouts and rollbacks - consistent read gets
187 transaction rollbacks
5 user rollbacks
239 IMU CR rollbacks
186 rollback changes - undo records applied
A sample report built using this SQL is in the Stats_Report post.

Stats for Entire Day

select to_char(trunc(b.snap_time),'DD-MM-YYYY') ,statistic#,name, sum(value) from STATS$SYSSTAT A, stats$snapshot B
where a.snap_id=b.snap_id
and trunc(b.snap_time) > trunc(sysdate -30)
and a.statistic#=54
group by trunc(b.snap_time) ,statistic#,name
order by trunc(b.snap_time)