Digging Deep into the Oracle Grid Repository for Information

Posted by Sagar Patil

How Does the Grid Repository Store Data?

The agents upload raw metric data into the MGMT_METRICS_RAW table, where it is retained for 7 days.
Grid Control aggregates the raw data, based on a predefined policy, into 1-hour records stored in MGMT_METRICS_1HOUR.
After 31 days (roughly a month), the 1-hour records in MGMT_METRICS_1HOUR are further aggregated into 1-day records.
These 1-day records are stored in MGMT_METRICS_1DAY for 365 days (1 year) before Grid Control purges them.

Modify these retention policies by updating the mgmt_parameters table in the OMR, as shown in the example after the table below.
Table Name                   Retention Parameter                  Retention Days
MGMT_METRICS_RAW             mgmt_raw_keep_window                    7
MGMT_METRICS_1HOUR           mgmt_hour_keep_window                   31
MGMT_METRICS_1DAY            mgmt_day_keep_window                    365
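
For example, here is a minimal sketch of checking the current retention settings and raising raw-metric retention. It assumes the parameter names listed above and the usual parameter_name/parameter_value columns of mgmt_parameters; verify both in your own repository before updating anything.

-- Check current retention settings (run as SYSMAN or a suitably privileged user)
select parameter_name, parameter_value
from   sysman.mgmt_parameters
where  parameter_name in ( 'mgmt_raw_keep_window'
                         , 'mgmt_hour_keep_window'
                         , 'mgmt_day_keep_window' );

-- Example only: keep raw metric data for 14 days instead of 7
update sysman.mgmt_parameters
set    parameter_value = '14'
where  parameter_name  = 'mgmt_raw_keep_window';
commit;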

To query data in the tables above, we need to know the TARGET_GUID and METRIC_GUID values.
For easier access by target name or metric name, we can use three views: MGMT$METRIC_DETAILS, MGMT$METRIC_HOURLY and MGMT$METRIC_DAILY.
These views, owned by SYSMAN, are based on the three main tables but add extra columns such as target_name and metric_name for easier reference.

The following SQL returns the definitions of MGMT$METRIC_DETAILS, MGMT$METRIC_HOURLY and MGMT$METRIC_DAILY:

select VIEW_NAME
, TEXT_LENGTH
, TEXT
from dba_views
where OWNER = 'SYSMAN'
and VIEW_NAME in ( 'MGMT$METRIC_DETAILS'
, 'MGMT$METRIC_HOURLY'
, 'MGMT$METRIC_DAILY' );
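
As an example, here is a minimal sketch of pulling daily host metrics by target name from MGMT$METRIC_DAILY. The column names are taken from the view definition returned above; the metric_name/metric_column values ('Load', 'cpuUtil') are illustrative assumptions, so check them against your own repository first.

select target_name
, metric_column
, rollup_timestamp
, average
, maximum
from mgmt$metric_daily
where target_type = 'host'
and metric_name = 'Load'          -- assumed metric name, verify in your repository
and metric_column = 'cpuUtil'     -- assumed metric column, verify in your repository
and rollup_timestamp >= sysdate - 30
order by target_name, rollup_timestamp;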

Let’s have a look at the attributes stored in these views which we could use for reporting.
The common attributes are property_name and target_type.

select unique property_name
from mgmt$target_properties
order by 1;

———————————————
CRSHome
CRSSoftwareVersion
CRSVersion
CRSVersionCategory
ClusterName

DBDomain
DBName
DBVersion
InstanceName
SID

DataGuardStatus
DatabaseName
DatabaseType
DatabaseUser
DbBlockSize
OpenMode
OracleHome
StartTime
StatisticsLevel
background_dump_dest
core_dump_dest

ListenAddress
ListenPort
ListenerOraDir
LsnrName

HARDWARE_TYPE
Machine
MachineName
IP_address
CPUCount

OS
OS_patchlevel

RACInstNum
RACOption

select unique target_type
from mgmt$target
order by 1;

———————————————
cluster
composite
host
j2ee_application
metadata_repository
netapp_filer
oracle_csa_collector
oracle_database
oracle_emd
oracle_emrep
oracle_ias_farm
oracle_listener
rac_database
weblogic_domain
weblogic_j2eeserver

Let’s combine these 2 views to locate some interesting facts.

List Targets with TNS Listener ports configured :

SELECT mgmt$target.host_name
 , mgmt$target.target_name
 , mgmt$target.target_type
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 and ( mgmt$target.target_type = 'oracle_listener' )
 and ( mgmt$target_properties.property_name = 'Port' );

Devora02       LISTENER_ora02                       oracle_listener Port 1529
Devora01       LISTENER_ora01                       oracle_listener Port 1529
Devora04       LISTENER_ora04                       oracle_listener Port 1529

List machine names, CPU count & database version for licensing

SELECT mgmt$target.host_name
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 AND ( mgmt$target_properties.property_name in ( 'CPUCount','DBVersion' ) )
 GROUP BY mgmt$target.host_name
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 order by mgmt$target.host_name;

List Data Guard instances open in READ ONLY WITH APPLY mode

SELECT mgmt$target.host_name
 , mgmt$target.target_name
 , mgmt$target.target_type
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 and ( mgmt$target.target_type = 'oracle_database' )
 and ( mgmt$target_properties.property_name = 'OpenMode' )
 and PROPERTY_VALUE like 'READ%ONLY%WITH%APPLY%';

List RAC databases and their attributes, such as ClusterName and DataGuardStatus.
Change the property_name list to suit your needs.

SELECT mgmt$target.host_name
 , mgmt$target.target_name
 , mgmt$target.target_type
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 and ( mgmt$target.target_type = 'rac_database' )
 and ( mgmt$target_properties.property_name in ( 'RACOption'
 , 'DBName'
 , 'DBDomain'
 , 'DBVersion'
 , 'ClusterName'
 , 'DataGuardStatus'
 , 'MachineName'
 , 'Role'
 , 'SID' ) )
 order by mgmt$target.host_name, mgmt$target.target_name,
mgmt$target_properties.property_name;  

RACNode1        DEV   rac_database ClusterName     crs1                          
RACNode1        DEV   rac_database DBDomain        oracledbasupport.co.uk                
RACNode1        DEV   rac_database DBName          DEV                         
RACNode1        DEV   rac_database DBVersion       10.2.0.3.0                    
RACNode1        DEV   rac_database DataGuardStatus                               
RACNode1        DEV   rac_database MachineName     RAC01-vip
RACNode1        DEV   rac_database RACOption       YES                           
RACNode1        DEV   rac_database Role            NORMAL                        
RACNode1        DEV   rac_database SID             DEV1                        
RACDGNode1       PROD rac_database ClusterName     RACDGNodecrs                        
RACDGNode1       PROD rac_database DBDomain        oracledbasupport.co.uk                      
RACDGNode1       PROD rac_database DBName          PROD                              
RACDGNode1       PROD rac_database DBVersion       11.2.0.2.0                          
RACDGNode1       PROD rac_database DataGuardStatus Physical Standby                    
RACDGNode1       PROD rac_database MachineName     RACDGNode-vip
RACDGNode1       PROD rac_database RACOption       YES                                 
RACDGNode1       PROD rac_database Role            NORMAL                              
RACDGNode1       PROD rac_database SID             PROD2

If you are seeing a performance hit on the Grid repository database, use the following SQL to locate the most active segments.
You can then consider archiving data in the repository, or moving those segments onto faster spindles.

select sum ( B.logical_reads_total )
 , sum ( B.physical_reads_total )
 , sum ( B.physical_writes_total )
 , sum ( buffer_busy_waits_total )
 , c.object_name
 , c.owner
 from DBA_HIST_SNAPSHOT A
 , DBA_HIST_SEG_STAT B
 , dba_objects C
 where A.Snap_id = b.snap_id
 and c.object_id = b.obj#
 and A.BEGIN_INTERVAL_TIME >= to_Date ( '17-May-2011 08:00'
 , 'DD-MON-YYYY HH24:MI' )
 and A.END_INTERVAL_TIME <= to_Date ( '17-May-2011 12:00'
 , 'DD-MON-YYYY HH24:MI' )
 group by c.object_name
 , c.owner
 order by 1 desc;

Change the ORDER BY column as needed:
1: for the most-read segments
3: for the most writes on a segment
4: for the most buffer busy waits

SQL to report Oracle init parameters for a Target database

  SELECT   target_name,
           target_type,
           name,
           VALUE
    FROM   MGMT$ECM_VISIBLE_SNAPSHOTS A, SYSMAN.MGMT_DB_INIT_PARAMS_ECM B
   WHERE       A.ECM_SNAPSHOT_ID = B.ECM_SNAPSHOT_ID
              AND TARGET_TYPE = 'rac_database'  -- Choose TARGET_TYPE
           AND name LIKE 'remote_listener%'     -- Look for a relevant Parameter
GROUP BY   target_name,
           target_type,
           name,
           VALUE
ORDER BY   Target_name, name ;

The following SQL reports details of all database targets monitored through Grid Control.

select t.host_name
 as     host
 , ip.property_value IP
 , t.target_name
 as     name
 , decode ( t.type_qualifier4
 , ' '
 , 'Normal'
 , t.type_qualifier4 )
 as     type
 , dbv.property_value
 as     version
 , port.property_value port
 , SID.property_value SID
 , logmode.property_value
 as     "Log Mode"
 , oh.property_value
 as     "Oracle Home"
 from mgmt$target t
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'DBVersion' ) dbv
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'Port' ) port
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'SID' ) sid
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'log_archive_mode' ) logmode
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'OracleHome' ) oh
 , ( select tp.target_name
 as     host_name
 , tp.property_value
 from mgmt$target_properties tp
 where tp.target_type = 'host'
 and tp.property_name = 'IP_address' ) ip
 where t.target_guid = port.target_guid
 and port.target_guid = sid.target_guid
 and sid.target_guid = dbv.target_guid
 and dbv.target_guid = logmode.target_guid
 and logmode.target_guid = oh.target_guid
 and t.host_name = ip.host_name
 order by 1, 3;

Configuring a WebSphere Plugin with the Nagios Monitoring System

Posted by Sagar Patil

1. Download WAS plugin for Nagios from here.

2. Place check_was, check_was-<version>.jar and check_was.profiles in the same directory (e.g. /opt/plugins/custom). Make sure check_was is executable by your Nagios user

For my example here, I have the following parameters:

Check_was.sh

#!/bin/sh
PLUGIN_HOME=/home/was61/check_was-0.3
JAVA_HOME=/opt/IBM/WebSphere/AppServer/java
WAS_HOME=/opt/IBM/WebSphere/AppServer

$JAVA_HOME/bin/java -Dplugin.home="$PLUGIN_HOME" -cp $PLUGIN_HOME/check_was-0.3.jar:$WAS_HOME/runtimes/com.ibm.ws.admin.client_6.1.0.jar:$WAS_HOME/runtimes/com.ibm.ws.webservices.thinclient_6.1.0.jar:$WAS_HOME/plugins/com.ibm.ws.security.crypto_6.1.0.jar com.googlecode.nagioswas.Run $*  2> /dev/null

Make sure the jar files referenced above exist in their respective directories.

"server_member1" is the name of the application server (JVM), so add a block of parameters for each JVM, with each property name prefixed by the JVM name.

# I am running websphere with no ADMIN security enabled
server_member1.hostname=Server1
server_member1.port=8882 (locate the SOAP port number under DMGR -> Servers -> relevant application server -> Communications -> Ports)
server_member1.username=user1
server_member1.password=abcd
server_member1.securityenabled=false

3. Update check_was by setting the environment variables at the start of the script to the appropriate values for your server.

JAVA_HOME : must point to an IBM JRE/JDK.
WAS_HOME  : needs to point to a directory that contains a directory named “runtimes” containing the following WAS libraries: com.ibm.ws.admin.client_<version>.jar and com.ibm.ws.webservices.thinclient_<version>.jar. If you run the plugin on the same server as WAS, WAS_HOME should point to the WAS install directory.

Edit check_was.servers. This file should contain the configuration to connect to your WAS server.

For each server, the following properties should be provided:
<server alias>.hostname=<the hostname or IP of the WAS server>
<server alias>.port=<the port of the SOAP connector on the server, e.g. 8880>
<server alias>.username=<the admin user name>
<server alias>.password=<the admin password>
<server alias>.securityenabled=<true if security is enabled, false otherwise>
<server alias>.truststore=<the path to the keystore containing the certificated to be used for SSL. If you are running the plugin on your WAS server and use the default WAS keystores, this should point to etc/trust.p12 in your profile>
<server alias>.truststorepassword=<the password for the trust store>
<server alias>.keystore=<the path to the keystore containing the private key to be used for SSL. If you are running the plugin on your WAS server and use the default WAS keystores, this should point to etc/key.p12 in your profile>
<server alias>.keystorepassword=<the password for the key store>

-w sets the threshold percentage used for issuing warnings
-c sets the threshold percentage used for issuing critical alerts
-p sets the server name from check_was.servers to be used
<server name>: the JVM name used with stopServer.sh/startServer.sh, here server_member1

Monitor JVM heap size:
JVM heap size is reported for the entire server. It is measured as percent used/maximum configured.
To monitor: check_was -s heapsize -w 80 -c 90 -p <server name>

[was61@Server1 check_was-0.3]$ ./check_was -s heapsize -w 80 -c 90 -p server_member1
OK – heapsize: 1048576/2097152 (50.0%)|heapsize=50.0%;80;90;

Monitor live sessions:
Live session usage can be monitored for the entire server (all hosts) or for a named host. It is measured as the number of live sessions.

To monitor,
[was61@Server1 check_was-0.3]$ ./check_was -s sessions -w 200 -c 400 -p server_member1
OK – live sessions: total 0, default_hostCTI 0, default_hostprsysmgmt 0, default_hostprweb 0, default_hostprdbutil 0|total=0.0;200;400; default_hostcti=0.0;200;400; default_hostprsysmgmt=0.0;200;400; default_hostprweb=0.0;200;400; default_hostprdbutil=0.0;200;400;

Monitor JDBC connection pools:
JDBC connection pool usage can be monitored for the entire server (all connection pools) or for a named connection pool. It is measured as percent used/maximum configured.

To monitor :
[was61@Server1 check_was-0.3]$ ./check_was -s connectionpool -w 80 -c 90 -p server_member
OK – connection pool size: Oracle JDBC Driver 5/100 (5.0%)|oraclejdbcdriver=5.0%;80;90;

Monitor thread pools:
Thread pool usage can be monitored for the entire server (all thread pools) or for a named thread pool. It is measured as percent used/maximum configured.

To monitor :
[was61@Server1 check_was-0.3]$ ./check_was -s threadpool -w 80 -c 90 -p server_member1
CRITICAL – thread pool size: WebContainer 4/100 (4.0%), SoapConnectorThreadPool 3/5 (60.0%), SIBFAPInboundThreadPool 0/50 (0.0%), HAManager.thread.pool 2/2 (100.0%), MessageListenerThreadPool 0/50 (0.0%), ORB.thread.pool 0/50 (0.0%), SIBFAPThreadPool 2/50 (4.0%), ProcessDiscovery 1/2 (50.0%), TCPChannel.DCS 3/20 (15.0%)|webcontainer=4.0%;80;90; soapconnectorthreadpool=60.0%;80;90; sibfapinboundthreadpool=0.0%;80;90; hamanager_thread_pool=100.0%;80;90; messagelistenerthreadpool=0.0%;80;90; orb_thread_pool=0.0%;80;90; sibfapthreadpool=4.0%;80;90; processdiscovery=50.0%;80;90; tcpchannel_dcs=15.0%;80;90;

DWH Terms

Posted by Sagar Patil

 

 
Measure A property on which calculations (e.g., sum, count, average, minimum, maximum) can be made
Regular dimension Standard star dimension
Time Dimension A special case of the standard star dimension
Parent-child dimension Used to model hierarchical structures. The dimension schema of a parent-child hierarchy depends on a self-referencing relationship present on the dimension main table. For example, the following diagram illustrates the DimOrganization dimension main table in the AdventureWorksDW2012 sample database.

(Figure: self-referencing join in the DimOrganization table)
Snowflake dimension Can also be used to model hierarchical structures. Used for Many to Many mappings
Degenerate dimension  A degenerate dimension is a dimension key in the fact table that does not have its own dimension table, because all the interesting attributes have been placed in analytic dimensions.
Junk dimension A single table with a combination of different and unrelated attributes to avoid having a large number of foreign keys in the fact table. Junk dimensions are often created to manage the foreign keys created by Rapidly Changing Dimensions. Typically used for low cardinality, non-related dimensions like gender or other booleans.
Role playing dimensions For instance, a “Date” dimension can be used for “Date of Sale”, as well as “Date of Delivery”, or “Date of Hire”
Mini dimensions For rapidly changing large dimensions. Typically used for managing high frequency, low cardinality change in a dimension.
Conformed dimensions Implemented in multiple database tables using the same structure, attributes, domain values, definitions and concepts in each implementation. Also seen under the name Shared dimension.
Monster Dimension A very large dimension
Shrunk dimension A subset of a dimension’s attributes that applies to a higher level of summary. For example, a Month dimension would be a shrunken version of the Date dimension. The Month dimension could be connected to a forecast fact table whose grain is at the monthly level.
Inferred Dimensions While loading fact records, a dimension record may not yet be ready. One solution is to generate a surrogate key with NULL for all the other attributes. This should technically be called an inferred member, but is often called an inferred dimension.
Static Dimension Not extracted from the original data source, but created within the context of the data warehouse. A static dimension can be loaded manually (for example with status codes), or it can be generated by a procedure, such as a Date or Time dimension.
Multi value Dimension Is simply a bridge table between the entities involved in the many-to-many relationship. It is also possible that the many-to-many is between a fact and dimension.
Dynamic dimensions Type 0: Retain Original. Type 0 performs no action when a dimension changes: values remain as they were at the time the dimension record was first inserted. Type 0 provides the least (or no) control and is rarely used.

Type 1: Overwrite


This methodology overwrites old data with new data, and therefore does not track historical data.

Type 2: Add new row

This method tracks historical data by creating multiple records for a given natural key in the dimensional tables with separate surrogate keys and/or different version numbers. Unlimited history is preserved for each insert.
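
As a minimal sketch (hypothetical table and column names, not taken from any particular sample database), a Type 2 customer dimension might carry a surrogate key plus effective-date columns, so that each change to a customer inserts a new row and preserves history:

CREATE TABLE DimCustomer (
    CustomerSK    INT IDENTITY(1,1) PRIMARY KEY, -- surrogate key, new value per version
    CustomerNK    INT           NOT NULL,        -- natural/business key from the source
    CustomerName  VARCHAR(100)  NOT NULL,
    City          VARCHAR(100)  NOT NULL,
    EffectiveFrom DATE          NOT NULL,
    EffectiveTo   DATE          NULL,             -- NULL marks the current row
    IsCurrent     BIT           NOT NULL DEFAULT 1
);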


Type 3: Add New Attribute
This method tracks changes using separate columns and preserves limited history, since it is limited to the number of columns designated for storing historical data.

 

Type 4: Add History Table
The Type 4 method is usually referred to as using “history tables”, where one table keeps the current data, and an additional table is used to keep a record of some or all changes.

 

Slowly changing dimension/Rapidly changing dimension Attributes of a dimension that would undergo changes over time.
Slowly Growing Dimension/Rapidly Growing Dimension Relates to the growth of records/elements in the dimension.
Data Mining Dimensions
Write-Enabled Dimensions The data in a dimension is generally read-only. However, for certain scenarios, you may want to write-enable a dimension. In Microsoft SQL Server Analysis Services, write-enabling a dimension enables business users to modify the contents of the dimension and see the immediate effect of changes on the hierarchies of the dimension.
Dependent Dimensions
Independent Dimensions
Primary Dimensions
Secondary Dimensions
Tertiary Dimensions
Informational dimension
Dimension triage dimension Non-conforming dimensions from the general ledger
Reference dimension A reference dimension relationship between a cube dimension and a measure group exists when the key column for the dimension is joined indirectly to the fact table through a key in another dimension table, in a snowflake schema design. (Figure: illustration of a reference dimension)
Data quality dimension Some authors suggest adding a special dimension called a data quality dimension to describe each fact-table record further.

Hadoop Components

Posted by Sagar Patil

Quick Info

Flume : Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of streaming data into the Hadoop Distributed File System (HDFS).

Sqoop 2: Sqoop is a tool for transferring data between an RDBMS and the Hadoop ecosystem, in either direction.

Pig : Pig is an open-source, high-level dataflow system. It provides a simple language for queries and data manipulation, Pig Latin, which is compiled into MapReduce jobs that run on Hadoop.

HBase :  HBase is an open source, non-relational, distributed database modeled after Google’s BigTable and written in Java.

Hive : The Apache Hive ™ data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL.
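
For instance, a minimal HiveQL sketch (hypothetical table name and HDFS path) that projects a table structure onto delimited files already sitting in HDFS and then queries them with SQL-like syntax:

-- Define a table over existing files in HDFS (schema-on-read)
CREATE EXTERNAL TABLE web_logs (
    ip      STRING,
    ts      STRING,
    url     STRING,
    status  INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/web_logs';

-- Query it like an ordinary SQL table
SELECT status, COUNT(*) AS hits
FROM web_logs
GROUP BY status;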

Hue: Hue is an open-source Web interface that supports Apache Hadoop and its ecosystem.
Hue aggregates most common Apache Hadoop components into a single interface and targets user experience. Its main goal is to have users “just use” Hadoop without worrying about the underlying complexity or using a command line.

Impala : Impala is a fully integrated, state-of-the-art analytic database architected specifically to leverage the flexibility and scalability strengths of Hadoop – combining the familiar SQL support and multi-user performance of a traditional analytic database with the rock-solid foundation of open source Apache Hadoop and the production-grade security and management extensions of Cloudera Enterprise.

Key-Value Store Indexer:  The Key-Value Store Indexer service uses the Lily HBase NRT Indexer to index the stream of records being added to HBase tables. Indexing allows you to query data stored in HBase with the Solr service.

The Key-Value Store Indexer service is installed in the same parcel or package along with the CDH 5 or Solr service. The Indexer service depends on the HBase, HDFS, Solr, and ZooKeeper services.

Oozie : Oozie is a workflow scheduler system to manage Apache Hadoop jobs. Oozie Workflow jobs are Directed Acyclical Graphs (DAGs) of actions. Oozie Coordinator jobs are recurrent Oozie Workflow jobs triggered by time (frequency) and data availability.

Solr : Apache Solr is the open source platform for searches of data stored in HDFS in Hadoop.

Spark : Apache Spark is an open-source cluster computing framework. In contrast to Hadoop’s two-stage disk-based MapReduce paradigm, Spark’s in-memory primitives provide performance up to 100 times faster for certain applications.

YARN (MR2 Included) : YARN is the architectural center of Hadoop that allows multiple data processing engines such as interactive SQL, real-time streaming, data science and batch processing to handle data stored in a single platform, unlocking an entirely new approach to analytics.

ZooKeeper : ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications

Protected: Call Detail Record Generator

Posted by Sagar Patil

This content is password protected. To view it please enter your password below:

Import ERWin Data Model into Visio

Posted by Sagar Patil

 

1. Export ERWin file into External Format

 

2. Import this file at Visio 2010

File -> New -> Software and Database Design template, then click on Database Ribbon and click on “Import”

 

SQL Queries for DMVs

Posted by Sagar Patil


 

INDEX DMVs

1. Find the fill factor of the indexes on the tables in the current database

SELECT DB_NAME() AS DatabaseName
, SCHEMA_NAME(o.Schema_ID) AS SchemaName
, OBJECT_NAME(s.[object_id]) AS TableName
, i.name AS IndexName
, i.fill_factor
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
INNER JOIN sys.objects o ON i.object_id = O.object_id
WHERE s.database_id = DB_ID()
AND i.name IS NOT NULL
AND OBJECTPROPERTY(s.[object_id], 'IsMsShipped') = 0
ORDER BY fill_factor DESC

2. Locate the fragmentation percentage of the indexes on a table within a database named warehousedw

SELECT i.name AS IndexName
, ROUND(s.avg_fragmentation_in_percent,2) AS [Fragmentation %]
FROM sys.dm_db_index_physical_stats(DB_ID('warehousedw'),
OBJECT_ID('table'), NULL, NULL, NULL) s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
AND s.index_id = i.index_id

3. Identifying the most important missing indexes

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
ROUND(s.avg_total_user_cost *
s.avg_user_impact
* (s.user_seeks + s.user_scans),0)
AS [Total Cost]
, d.[statement] AS [Table Name]
, equality_columns
, inequality_columns
, included_columns
FROM sys.dm_db_missing_index_groups g
INNER JOIN sys.dm_db_missing_index_group_stats s
ON s.group_handle = g.index_group_handle
INNER JOIN sys.dm_db_missing_index_details d
ON d.index_handle = g.index_handle
ORDER BY [Total Cost] DESC

4. The most-costly unused indexes

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
DB_NAME() AS DatabaseName
, SCHEMA_NAME(o.Schema_ID) AS SchemaName
, OBJECT_NAME(s.[object_id]) AS TableName
, i.name AS IndexName
, s.user_updates
, s.system_seeks + s.system_scans + s.system_lookups
AS [System usage]
INTO #TempUnusedIndexes
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
INNER JOIN sys.objects o ON i.object_id = O.object_id
WHERE 1=2
EXEC sp_MSForEachDB 'USE [?]; INSERT INTO #TempUnusedIndexes SELECT TOP 20 DB_NAME() AS DatabaseName , SCHEMA_NAME(o.Schema_ID) AS SchemaName , OBJECT_NAME(s.[object_id]) AS TableName , i.name AS IndexName , s.user_updates , s.system_seeks + s.system_scans + s.system_lookups AS [System usage] FROM sys.dm_db_index_usage_stats s INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id INNER JOIN sys.objects o ON i.object_id = O.object_id WHERE s.database_id = DB_ID() AND OBJECTPROPERTY(s.[object_id], ''IsMsShipped'') = 0 AND s.user_seeks = 0 AND s.user_scans = 0 AND s.user_lookups = 0 AND i.name IS NOT NULL ORDER BY s.user_updates DESC'
SELECT TOP 20 * FROM #TempUnusedIndexes ORDER BY [user_updates] DESC
DROP TABLE #TempUnusedIndexes

5. Finding the top high-maintenance indexes

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
DB_NAME() AS DatabaseName
, SCHEMA_NAME(o.Schema_ID) AS SchemaName
, OBJECT_NAME(s.[object_id]) AS TableName
, i.name AS IndexName
, (s.user_updates ) AS [update usage]
, (s.user_seeks + s.user_scans + s.user_lookups)
AS [Retrieval usage]
, (s.user_updates) -
(s.user_seeks + s.user_scans + s.user_lookups) AS [Maintenance cost]
, s.system_seeks + s.system_scans + s.system_lookups AS [System usage]
, s.last_user_seek
, s.last_user_scan
, s.last_user_lookup
INTO #TempMaintenanceCost
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
INNER JOIN sys.objects o ON i.object_id = O.object_id
WHERE 1=2
EXEC sp_MSForEachDB 'USE [?]; INSERT INTO #TempMaintenanceCost SELECT TOP 20 DB_NAME() AS DatabaseName , SCHEMA_NAME(o.Schema_ID) AS SchemaName , OBJECT_NAME(s.[object_id]) AS TableName , i.name AS IndexName , (s.user_updates ) AS [update usage] , (s.user_seeks + s.user_scans + s.user_lookups) AS [Retrieval usage] , (s.user_updates) - (s.user_seeks + user_scans + s.user_lookups) AS [Maintenance cost] , s.system_seeks + s.system_scans + s.system_lookups AS [System usage] , s.last_user_seek , s.last_user_scan , s.last_user_lookup FROM sys.dm_db_index_usage_stats s INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id INNER JOIN sys.objects o ON i.object_id = O.object_id WHERE s.database_id = DB_ID() AND i.name IS NOT NULL AND OBJECTPROPERTY(s.[object_id], ''IsMsShipped'') = 0 AND (s.user_seeks + s.user_scans + s.user_lookups) > 0 ORDER BY [Maintenance cost] DESC'
SELECT top 20 * FROM #TempMaintenanceCost ORDER BY [Maintenance cost] DESC
DROP TABLE #TempMaintenanceCost

6. Finding the most-used indexes

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
DB_NAME() AS DatabaseName
, SCHEMA_NAME(o.Schema_ID) AS SchemaName
, OBJECT_NAME(s.[object_id]) AS TableName
, i.name AS IndexName
, (s.user_seeks + s.user_scans + s.user_lookups) AS [Usage]
, s.user_updates
, i.fill_factor
INTO #TempUsage
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
INNER JOIN sys.objects o ON i.object_id = O.object_id
WHERE 1=2
EXEC sp_MSForEachDB 'USE [?]; INSERT INTO #TempUsage SELECT TOP 20 DB_NAME() AS DatabaseName , SCHEMA_NAME(o.Schema_ID) AS SchemaName , OBJECT_NAME(s.[object_id]) AS TableName , i.name AS IndexName , (s.user_seeks + s.user_scans + s.user_lookups) AS [Usage] , s.user_updates , i.fill_factor FROM sys.dm_db_index_usage_stats s INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id INNER JOIN sys.objects o ON i.object_id = O.object_id WHERE s.database_id = DB_ID() AND i.name IS NOT NULL AND OBJECTPROPERTY(s.[object_id], ''IsMsShipped'') = 0 ORDER BY [Usage] DESC'
SELECT TOP 20 * FROM #TempUsage ORDER BY [Usage] DESC
DROP TABLE #TempUsage

7. Finding the most-fragmented indexes

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
DB_NAME() AS DatbaseName
, SCHEMA_NAME(o.Schema_ID) AS SchemaName
, OBJECT_NAME(s.[object_id]) AS TableName
, i.name AS IndexName
, ROUND(s.avg_fragmentation_in_percent,2) AS [Fragmentation %]
INTO #TempFragmentation
FROM sys.dm_db_index_physical_stats(db_id(),null, null, null, null) s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
INNER JOIN sys.objects o ON i.object_id = O.object_id
WHERE 1=2
EXEC sp_MSForEachDB 'USE [?]; INSERT INTO #TempFragmentation SELECT TOP 20 DB_NAME() AS DatbaseName , SCHEMA_NAME(o.Schema_ID) AS SchemaName , OBJECT_NAME(s.[object_id]) AS TableName , i.name AS IndexName , ROUND(s.avg_fragmentation_in_percent,2) AS [Fragmentation %] FROM sys.dm_db_index_physical_stats(db_id(),null, null, null, null) s INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id INNER JOIN sys.objects o ON i.object_id = O.object_id WHERE s.database_id = DB_ID() AND i.name IS NOT NULL AND OBJECTPROPERTY(s.[object_id], ''IsMsShipped'') = 0 ORDER BY [Fragmentation %] DESC'
SELECT top 20 * FROM #TempFragmentation ORDER BY [Fragmentation %] DESC
DROP TABLE #TempFragmentation

8. The databases with the most missing indexes

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
DB_NAME(database_id) AS DatabaseName
, COUNT(*) AS [Missing Index Count]
FROM sys.dm_db_missing_index_details
GROUP BY DB_NAME(database_id)
ORDER BY [Missing Index Count] DESC

9. Indexes that aren’t used at all

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
DB_NAME() AS DatbaseName
, SCHEMA_NAME(O.Schema_ID) AS SchemaName
, OBJECT_NAME(I.object_id) AS TableName
, I.name AS IndexName
INTO #TempNeverUsedIndexes
FROM sys.indexes I INNER JOIN sys.objects O ON I.object_id = O.object_id
WHERE 1=2
EXEC sp_MSForEachDB 'USE [?]; INSERT INTO #TempNeverUsedIndexes SELECT DB_NAME() AS DatbaseName , SCHEMA_NAME(O.Schema_ID) AS SchemaName , OBJECT_NAME(I.object_id) AS TableName , I.NAME AS IndexName FROM sys.indexes I INNER JOIN sys.objects O ON I.object_id = O.object_id LEFT OUTER JOIN sys.dm_db_index_usage_stats S ON S.object_id = I.object_id AND I.index_id = S.index_id AND DATABASE_ID = DB_ID() WHERE OBJECTPROPERTY(O.object_id,''IsMsShipped'') = 0 AND I.name IS NOT NULL AND S.object_id IS NULL'
SELECT * FROM #TempNeverUsedIndexes
ORDER BY DatbaseName, SchemaName, TableName, IndexName
DROP TABLE #TempNeverUsedIndexes

10. Finding the state of your statistics

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
ss.name AS SchemaName
, st.name AS TableName
, s.name AS IndexName
, STATS_DATE(s.id,s.indid) AS 'Statistics Last Updated'
, s.rowcnt AS 'Row Count'
, s.rowmodctr AS 'Number Of Changes'
, CAST((CAST(s.rowmodctr AS DECIMAL(28,8))/CAST(s.rowcnt AS
DECIMAL(28,2)) * 100.0)
AS DECIMAL(28,2)) AS '% Rows Changed'
FROM sys.sysindexes s
INNER JOIN sys.tables st ON st.[object_id] = s.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE s.id > 100
AND s.indid > 0
AND s.rowcnt >= 500
ORDER BY SchemaName, TableName, IndexName

QUERY DMVs


11. How to find a cached plan

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
st.text AS [SQL]
, cp.cacheobjtype
, cp.objtype
, COALESCE(DB_NAME(st.dbid),
DB_NAME(CAST(pa.value AS INT))+'*',
'Resource') AS [DatabaseName]
, cp.usecounts AS [Plan usage]
, qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
OUTER APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE pa.attribute = 'dbid'
AND st.text LIKE '%PartyType%'

12. Finding where a query is used

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1)
LIKE '%insert into dbo.deal%'

13. The queries that take the longest time to run

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
CAST(qs.total_elapsed_time / 1000000.0 AS DECIMAL(28, 2))
AS [Total Duration (s)]
, CAST(qs.total_worker_time * 100.0 / qs.total_elapsed_time
AS DECIMAL(28, 2)) AS [% CPU]
, CAST((qs.total_elapsed_time - qs.total_worker_time)* 100.0 /
qs.total_elapsed_time AS DECIMAL(28, 2)) AS [% Waiting]
, qs.execution_count
, CAST(qs.total_elapsed_time / 1000000.0 / qs.execution_count
AS DECIMAL(28, 2)) AS [Average Duration (s)]
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qs.total_elapsed_time > 0
ORDER BY qs.total_elapsed_time DESC

14. The queries that spend the longest time being blocked

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
CAST((qs.total_elapsed_time - qs.total_worker_time) /
1000000.0 AS DECIMAL(28,2)) AS [Total time blocked (s)]
, CAST(qs.total_worker_time * 100.0 / qs.total_elapsed_time
AS DECIMAL(28,2)) AS [% CPU]
, CAST((qs.total_elapsed_time - qs.total_worker_time)* 100.0 /
qs.total_elapsed_time AS DECIMAL(28, 2)) AS [% Waiting]
, qs.execution_count
, CAST((qs.total_elapsed_time - qs.total_worker_time) / 1000000.0
/ qs.execution_count AS DECIMAL(28, 2)) AS [Blocking average (s)]
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qs.total_elapsed_time > 0
ORDER BY [Total time blocked (s)] DESC

15. The queries that use the most CPU

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
CAST((qs.total_worker_time) / 1000000.0 AS DECIMAL(28,2))
AS [Total CPU time (s)]
, CAST(qs.total_worker_time * 100.0 / qs.total_elapsed_time
AS DECIMAL(28,2)) AS [% CPU]
, CAST((qs.total_elapsed_time - qs.total_worker_time)* 100.0 /
qs.total_elapsed_time AS DECIMAL(28, 2)) AS [% Waiting]
, qs.execution_count
, CAST((qs.total_worker_time) / 1000000.0
/ qs.execution_count AS DECIMAL(28, 2)) AS [CPU time average (s)]
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qs.total_elapsed_time > 0
ORDER BY [Total CPU time (s)] DESC

16. The queries that use the most I/O

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
[Total IO] = (qs.total_logical_reads + qs.total_logical_writes)
, [Average IO] = (qs.total_logical_reads + qs.total_logical_writes) /
qs.execution_count
, qs.execution_count
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY [Total IO] DESC

17. The queries that have been executed the most often

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
qs.execution_count
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.execution_count DESC;

18. Finding when a query was last run

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT DISTINCT TOP 20
qs.last_execution_time
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
WHERE qt.text LIKE '%CREATE PROCEDURE%List%PickList%'
ORDER BY qs.last_execution_time DESC

19. Finding when a table was last inserted

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
qs.last_execution_time
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
WHERE SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1)
LIKE '%INSERT INTO dbo.Underlying%'
ORDER BY qs.last_execution_time DESC
The listing is very similar to the other scripts above.

20. Finding queries with missing statistics

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
st.text AS [Parent Query]
, DB_NAME(st.dbid)AS [DatabaseName]
, cp.usecounts AS [Usage Count]
, qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX))
LIKE '%<ColumnsWithNoStatistics>%'
ORDER BY cp.usecounts DESC

21. Finding your default statistics options

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT name AS DatabaseName
, is_auto_create_stats_on AS AutoCreateStatistics
, is_auto_update_stats_on AS AutoUpdateStatistics
, is_auto_update_stats_async_on
AS AutoUpdateStatisticsAsync
FROM sys.databases
ORDER BY DatabaseName

22. Finding disparate columns with different data types

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
COLUMN_NAME
,[%] = CONVERT(DECIMAL(12,2),COUNT(COLUMN_NAME)*
100.0 / COUNT(*)OVER())
INTO #Prevalence
FROM INFORMATION_SCHEMA.COLUMNS
GROUP BY COLUMN_NAME
SELECT DISTINCT
C1.COLUMN_NAME
, C1.TABLE_SCHEMA
, C1.TABLE_NAME
, C1.DATA_TYPE
, C1.CHARACTER_MAXIMUM_LENGTH
, C1.NUMERIC_PRECISION
, C1.NUMERIC_SCALE
, [%]
FROM INFORMATION_SCHEMA.COLUMNS C1
INNER JOIN INFORMATION_SCHEMA.COLUMNS C2 ON C1.COLUMN_NAME =
C2.COLUMN_NAME
INNER JOIN #Prevalence p ON p.COLUMN_NAME = C1.COLUMN_NAME
WHERE ((C1.DATA_TYPE != C2.DATA_TYPE)
OR (C1.CHARACTER_MAXIMUM_LENGTH != C2.CHARACTER_MAXIMUM_LENGTH)
OR (C1.NUMERIC_PRECISION != C2.NUMERIC_PRECISION)
OR (C1.NUMERIC_SCALE != C2.NUMERIC_SCALE))
ORDER BY [%] DESC, C1.COLUMN_NAME, C1.TABLE_SCHEMA, C1.TABLE_NAME
DROP TABLE #Prevalence

23. Finding queries that are running slower than normal

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 100
qs.execution_count AS [Runs]
, (qs.total_worker_time - qs.last_worker_time) / (qs.execution_count - 1)
AS [Avg time]
, qs.last_worker_time AS [Last time]
, (qs.last_worker_time - ((qs.total_worker_time - qs.last_worker_time) /
(qs.execution_count - 1))) AS [Time Deviation]
, CASE WHEN qs.last_worker_time = 0
THEN 100
ELSE (qs.last_worker_time - ((qs.total_worker_time -
qs.last_worker_time) / (qs.execution_count - 1))) * 100
END
/ (((qs.total_worker_time - qs.last_worker_time) /
(qs.execution_count - 1.0))) AS [% Time Deviation]
,qs.last_logical_reads + qs.last_logical_writes + qs.last_physical_reads
AS [Last IO]
, ((qs.total_logical_reads + qs.total_logical_writes +
qs.total_physical_reads) -
(qs.last_logical_reads + last_logical_writes
+ qs.last_physical_reads))
/ (qs.execution_count - 1) AS [Avg IO]
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS [DatabaseName]
INTO #SlowQueries
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) qt
WHERE qs.execution_count > 1
AND qs.total_worker_time != qs.last_worker_time
ORDER BY [% Time Deviation] DESC
SELECT TOP 100 [Runs]
, [Avg time]
, [Last time]
, [Time Deviation]
, [% Time Deviation]
, [Last IO]
, [Avg IO]
, [Last IO] - [Avg IO] AS [IO Deviation]
, CASE WHEN [Avg IO] = 0
THEN 0
ELSE ([Last IO]- [Avg IO]) * 100 / [Avg IO]
END AS [% IO Deviation]
, [Individual Query]
, [Parent Query]
, [DatabaseName]
INTO #SlowQueriesByIO
FROM #SlowQueries
ORDER BY [% Time Deviation] DESC
SELECT TOP 100
[Runs]
, [Avg time]
, [Last time]
, [Time Deviation]
, [% Time Deviation]
, [Last IO]
, [Avg IO]
, [IO Deviation]
, [% IO Deviation]
, [Impedance] = [% Time Deviation] - [% IO Deviation]
, [Individual Query]
, [Parent Query]
, [DatabaseName]
FROM #SlowQueriesByIO
WHERE [% Time Deviation] - [% IO Deviation] > 20
ORDER BY [Impedance] DESC
DROP TABLE #SlowQueries
DROP TABLE #SlowQueriesByIO

24. Finding unused stored procedures

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT s.name, s.type_desc
FROM sys.procedures s
LEFT OUTER JOIN sys.dm_exec_procedure_stats d
ON s.object_id = d.object_id
WHERE d.object_id IS NULL
ORDER BY s.name

25. Which queries run over a given time period

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT sql_handle, plan_handle, total_elapsed_time
, total_worker_time, total_logical_reads, total_logical_writes
, total_clr_time, execution_count, statement_start_offset
, statement_end_offset
INTO #PreWorkSnapShot
FROM sys.dm_exec_query_stats
WAITFOR DELAY '00:05:00'
SELECT sql_handle, plan_handle, total_elapsed_time
, total_worker_time, total_logical_reads, total_logical_writes
, total_clr_time, execution_count, statement_start_offset
, statement_end_offset
INTO #PostWorkSnapShot
FROM sys.dm_exec_query_stats
SELECT
p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0) AS [Duration]
, p2.total_worker_time - ISNULL(p1.total_worker_time, 0) AS [Time on CPU]
, (p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0)) -
(p2.total_worker_time - ISNULL(p1.total_worker_time, 0))
AS [Time blocked]
, p2.total_logical_reads - ISNULL(p1.total_logical_reads, 0) AS [Reads]
, p2.total_logical_writes - ISNULL(p1.total_logical_writes, 0)
AS [Writes]
, p2.total_clr_time - ISNULL(p1.total_clr_time, 0) AS [CLR time]
, p2.execution_count - ISNULL(p1.execution_count, 0) AS [Executions]
, SUBSTRING (qt.text,p2.statement_start_offset/2 + 1,
((CASE WHEN p2.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE p2.statement_end_offset
END - p2.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
, qp.query_plan
FROM #PreWorkSnapShot p1
RIGHT OUTER JOIN
#PostWorkSnapShot p2 ON p2.sql_handle =
ISNULL(p1.sql_handle, p2.sql_handle)
AND p2.plan_handle = ISNULL(p1.plan_handle, p2.plan_handle)
AND p2.statement_start_offset =
ISNULL(p1.statement_start_offset, p2.statement_start_offset)
AND p2.statement_end_offset =
ISNULL(p1.statement_end_offset, p2.statement_end_offset)
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(p2.plan_handle) qp
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
AND qt.text NOT LIKE '--ThisRoutineIdentifier99%'
ORDER BY [Duration] DESC
DROP TABLE #PreWorkSnapShot
DROP TABLE #PostWorkSnapShot

26. Amalgamated DMV snapshots

--ThisRoutineIdentifier
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
index_group_handle, index_handle
, avg_total_user_cost, avg_user_impact, user_seeks, user_scans
INTO #PreWorkMissingIndexes
FROM sys.dm_db_missing_index_groups g
INNER JOIN sys.dm_db_missing_index_group_stats s
ON s.group_handle = g.index_group_handle
SELECT
sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset
INTO #PreWorkQuerySnapShot
FROM sys.dm_exec_query_stats
SELECT
[object_name], [counter_name], [instance_name]
, [cntr_value], [cntr_type]
INTO #PreWorkOSSnapShot
FROM sys.dm_os_performance_counters
SELECT
wait_type, waiting_tasks_count
, wait_time_ms, max_wait_time_ms, signal_wait_time_ms
INTO #PreWorkWaitStats
FROM sys.dm_os_wait_stats
WAITFOR DELAY '00:05:00'
SELECT wait_type, waiting_tasks_count, wait_time_ms
, max_wait_time_ms, signal_wait_time_ms
INTO #PostWorkWaitStats
FROM sys.dm_os_wait_stats
SELECT [object_name], [counter_name], [instance_name]
, [cntr_value], [cntr_type]
INTO #PostWorkOSSnapShot
FROM sys.dm_os_performance_counters
SELECT sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset
INTO #PostWorkQuerySnapShot
FROM sys.dm_exec_query_stats
SELECT index_group_handle, index_handle
, avg_total_user_cost, avg_user_impact, user_seeks, user_scans
INTO #PostWorkMissingIndexes
FROM sys.dm_db_missing_index_groups g
INNER JOIN sys.dm_db_missing_index_group_stats s
ON s.group_handle = g.index_group_handle
SELECT
p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0) AS [Duration]
, p2.total_worker_time - ISNULL(p1.total_worker_time, 0) AS [Time on CPU]
, (p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0)) -
(p2.total_worker_time - ISNULL(p1.total_worker_time, 0))
AS [Time blocked]
, p2.total_logical_reads - ISNULL(p1.total_logical_reads, 0) AS [Reads]
, p2.total_logical_writes - ISNULL(p1.total_logical_writes, 0)
AS [Writes]
, p2.total_clr_time - ISNULL(p1.total_clr_time, 0) AS [CLR time]
, p2.execution_count - ISNULL(p1.execution_count, 0) AS [Executions]
, SUBSTRING (qt.text,p2.statement_start_offset/2 + 1,
((CASE WHEN p2.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE p2.statement_end_offset
END - p2.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
FROM #PreWorkQuerySnapShot p1
RIGHT OUTER JOIN
#PostWorkQuerySnapShot p2 ON p2.sql_handle =
ISNULL(p1.sql_handle, p2.sql_handle)
AND p2.plan_handle = ISNULL(p1.plan_handle, p2.plan_handle)
AND p2.statement_start_offset =
ISNULL(p1.statement_start_offset, p2.statement_start_offset)
AND p2.statement_end_offset =
ISNULL(p1.statement_end_offset, p2.statement_end_offset)
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) as qt
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
AND qt.text NOT LIKE '--ThisRoutineIdentifier%'
SELECT
p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) AS wait_time_ms
, p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0)
AS signal_wait_time_ms
, ((p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0))
- (p2.signal_wait_time_ms
- ISNULL(p1.signal_wait_time_ms, 0))) AS RealWait
, p2.wait_type
FROM #PreWorkWaitStats p1
RIGHT OUTER JOIN
#PostWorkWaitStats p2 ON p2.wait_type = ISNULL(p1.wait_type, p2.wait_type)
WHERE p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) > 0
AND p2.wait_type NOT LIKE '%SLEEP%'
AND p2.wait_type != 'WAITFOR'
ORDER BY RealWait DESC
SELECT
ROUND((p2.avg_total_user_cost - ISNULL(p1.avg_total_user_cost, 0))
* (p2.avg_user_impact - ISNULL(p1.avg_user_impact, 0)) *
((p2.user_seeks - ISNULL(p1.user_seeks, 0))
+ (p2.user_scans - ISNULL(p1.user_scans, 0))),0)
AS [Total Cost]
, p2.avg_total_user_cost - ISNULL(p1.avg_total_user_cost, 0)
AS avg_total_user_cost
, p2.avg_user_impact - ISNULL(p1.avg_user_impact, 0) AS avg_user_impact
, p2.user_seeks - ISNULL(p1.user_seeks, 0) AS user_seeks
, p2.user_scans - ISNULL(p1.user_scans, 0) AS user_scans
, d.statement AS TableName
, d.equality_columns
, d.inequality_columns
, d.included_columns
FROM #PreWorkMissingIndexes p1
RIGHT OUTER JOIN
#PostWorkMissingIndexes p2 ON p2.index_group_handle =
ISNULL(p1.index_group_handle, p2.index_group_handle)
AND p2.index_handle =
ISNULL(p1.index_handle, p2.index_handle)
INNER JOIN sys.dm_db_missing_index_details d
ON p2.index_handle = d.index_handle
WHERE p2.avg_total_user_cost - ISNULL(p1.avg_total_user_cost, 0) > 0
OR p2.avg_user_impact - ISNULL(p1.avg_user_impact, 0) > 0
OR p2.user_seeks - ISNULL(p1.user_seeks, 0) > 0
OR p2.user_scans - ISNULL(p1.user_scans, 0) > 0
ORDER BY [Total Cost] DESC
SELECT
p2.object_name, p2.counter_name, p2.instance_name
, ISNULL(p1.cntr_value, 0) AS InitialValue
, p2.cntr_value AS FinalValue
, p2.cntr_value - ISNULL(p1.cntr_value, 0) AS Change
, (p2.cntr_value - ISNULL(p1.cntr_value, 0)) * 100 / p1.cntr_value
AS [% Change]
FROM #PreWorkOSSnapShot p1
RIGHT OUTER JOIN
#PostWorkOSSnapShot p2 ON p2.object_name =
ISNULL(p1.object_name, p2.object_name)
AND p2.counter_name = ISNULL(p1.counter_name, p2.counter_name)
AND p2.instance_name = ISNULL(p1.instance_name, p2.instance_name)
WHERE p2.cntr_value - ISNULL(p1.cntr_value, 0) > 0
AND ISNULL(p1.cntr_value, 0) != 0
ORDER BY [% Change] DESC, Change DESC
DROP TABLE #PreWorkQuerySnapShot
DROP TABLE #PostWorkQuerySnapShot
DROP TABLE #PostWorkWaitStats
DROP TABLE #PreWorkWaitStats
DROP TABLE #PreWorkOSSnapShot
DROP TABLE #PostWorkOSSnapShot
DROP TABLE #PreWorkMissingIndexes
DROP TABLE #PostWorkMissingIndexes

27. What queries are running now

SELECT
es.session_id, es.host_name, es.login_name
, er.status, DB_NAME(database_id) AS DatabaseName
, SUBSTRING (qt.text,(er.statement_start_offset/2) + 1,
((CASE WHEN er.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE er.statement_end_offset
END - er.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, es.program_name, er.start_time, qp.query_plan
, er.wait_type, er.total_elapsed_time, er.cpu_time, er.logical_reads
, er.blocking_session_id, er.open_transaction_count, er.last_wait_type
, er.percent_complete
FROM sys.dm_exec_requests AS er
INNER JOIN sys.dm_exec_sessions AS es ON es.session_id = er.session_id
CROSS APPLY sys.dm_exec_sql_text( er.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) qp
WHERE es.is_user_process=1
AND es.session_Id NOT IN (@@SPID)
ORDER BY es.session_id

28. Determining your most-recompiled queries

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
qs.plan_generation_num
, qs.total_elapsed_time
, qs.execution_count
, SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query], DB_NAME(qt.dbid) AS DatabaseName
, qs.creation_time
, qs.last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY plan_generation_num DESC

Operating system DMVs

29. The SQL script shown here identifies the top 20 causes of waiting on your server instance

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
wait_type, wait_time_ms, signal_wait_time_ms
, wait_time_ms - signal_wait_time_ms AS RealWait
, CONVERT(DECIMAL(12,2), wait_time_ms * 100.0 / SUM(wait_time_ms) OVER())
AS [% Waiting]
, CONVERT(DECIMAL(12,2), (wait_time_ms - signal_wait_time_ms) * 100.0
/ SUM(wait_time_ms) OVER()) AS [% RealWait]
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE '%SLEEP%'
AND wait_type != 'WAITFOR'
ORDER BY wait_time_ms DESC

30. This script identifies the waits that occur over a given 10-minute period, ordered by RealWait

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT wait_type, waiting_tasks_count
, wait_time_ms, max_wait_time_ms, signal_wait_time_ms
INTO #PreWorkWaitStats
FROM sys.dm_os_wait_stats
WAITFOR DELAY '00:10:00'
SELECT wait_type, waiting_tasks_count
, wait_time_ms, max_wait_time_ms, signal_wait_time_ms
INTO #PostWorkWaitStats
FROM sys.dm_os_wait_stats
SELECT
p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) AS wait_time_ms
, p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0)
AS signal_wait_time_ms
, ((p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0))
- (p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0)))
AS RealWait
, p2.wait_type
FROM #PreWorkWaitStats p1
RIGHT OUTER JOIN
#PostWorkWaitStats p2 ON p2.wait_type = ISNULL(p1.wait_type, p2.wait_type)
WHERE p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) > 0
AND p2.wait_type NOT LIKE '%SLEEP%'
AND p2.wait_type != 'WAITFOR'
ORDER BY RealWait DESC
DROP TABLE #PostWorkWaitStats
DROP TABLE #PreWorkWaitStats

31. Why your queries are waiting

--ThisRoutineIdentifier
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset
INTO #PreWorkQuerySnapShot
FROM sys.dm_exec_query_stats
SELECT
wait_type, waiting_tasks_count
, wait_time_ms, max_wait_time_ms, signal_wait_time_ms
INTO #PreWorkWaitStats
FROM sys.dm_os_wait_stats
WAITFOR DELAY '00:05:00'
SELECT
wait_type, waiting_tasks_count, wait_time_ms
, max_wait_time_ms, signal_wait_time_ms
INTO #PostWorkWaitStats
FROM sys.dm_os_wait_stats
SELECT
sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset
INTO #PostWorkQuerySnapShot
FROM sys.dm_exec_query_stats
SELECT
p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) AS wait_time_ms
, p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0)
AS signal_wait_time_ms
, ((p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0))
- (p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0)))
AS RealWait
, p2.wait_type
FROM #PreWorkWaitStats p1
RIGHT OUTER JOIN
#PostWorkWaitStats p2 ON p2.wait_type = ISNULL(p1.wait_type, p2.wait_type)
WHERE p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) > 0
AND p2.wait_type NOT LIKE '%SLEEP%'
AND p2.wait_type != 'WAITFOR'
ORDER BY RealWait DESC
SELECT
p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0) AS [Duration]
, p2.total_worker_time - ISNULL(p1.total_worker_time, 0) AS [Time on CPU]
, (p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0)) -
(p2.total_worker_time - ISNULL(p1.total_worker_time, 0))
AS [Time blocked]
, p2.total_logical_reads - ISNULL(p1.total_logical_reads, 0) AS [Reads]
, p2.total_logical_writes - ISNULL(p1.total_logical_writes, 0)
AS [Writes]
, p2.total_clr_time - ISNULL(p1.total_clr_time, 0) AS [CLR time]
, p2.execution_count - ISNULL(p1.execution_count, 0) AS [Executions]
, SUBSTRING (qt.text,p2.statement_start_offset/2 + 1,
((CASE WHEN p2.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE p2.statement_end_offset
END - p2.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
FROM #PreWorkQuerySnapShot p1
RIGHT OUTER JOIN
#PostWorkQuerySnapShot p2 ON p2.sql_handle =
ISNULL(p1.sql_handle, p2.sql_handle)
AND p2.plan_handle = ISNULL(p1.plan_handle, p2.plan_handle)
AND p2.statement_start_offset =
ISNULL(p1.statement_start_offset, p2.statement_start_offset)
AND p2.statement_end_offset =
ISNULL(p1.statement_end_offset, p2.statement_end_offset)
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) as qt
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
AND qt.text NOT LIKE '--ThisRoutineIdentifier%'
ORDER BY [Time blocked] DESC
DROP TABLE #PostWorkWaitStats
DROP TABLE #PreWorkWaitStats
DROP TABLE #PostWorkQuerySnapShot
DROP TABLE #PreWorkQuerySnapShot

32. What’s blocking my SQL query?

SELECT
Blocking.session_id as BlockingSessionId
, Sess.login_name AS BlockingUser
, BlockingSQL.text AS BlockingSQL
, Waits.wait_type WhyBlocked
, Blocked.session_id AS BlockedSessionId
, USER_NAME(Blocked.user_id) AS BlockedUser
, BlockedSQL.text AS BlockedSQL
, DB_NAME(Blocked.database_id) AS DatabaseName
FROM sys.dm_exec_connections AS Blocking
INNER JOIN sys.dm_exec_requests AS Blocked
ON Blocking.session_id = Blocked.blocking_session_id
INNER JOIN sys.dm_os_waiting_tasks AS Waits
ON Blocked.session_id = Waits.session_id
RIGHT OUTER JOIN sys.dm_exec_sessions Sess
ON Blocking.session_id = sess.session_id
CROSS APPLY sys.dm_exec_sql_text(Blocking.most_recent_sql_handle)
AS BlockingSQL
CROSS APPLY sys.dm_exec_sql_text(Blocked.sql_handle) AS BlockedSQL
ORDER BY BlockingSessionId, BlockedSessionId

33. How the performance counters change over a given time interval

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
[object_name], [counter_name], [instance_name]
, [cntr_value], [cntr_type]
INTO #PreWorkOSSnapShot
FROM sys.dm_os_performance_counters
WAITFOR DELAY '00:05:00'
SELECT
[object_name], [counter_name], [instance_name]
, [cntr_value], [cntr_type]
INTO #PostWorkOSSnapShot
FROM sys.dm_os_performance_counters
SELECT
p2.object_name, p2.counter_name, p2.instance_name
, ISNULL(p1.cntr_value, 0) AS InitialValue
, p2.cntr_value AS FinalValue
, p2.cntr_value - ISNULL(p1.cntr_value, 0) AS Change
, (p2.cntr_value - ISNULL(p1.cntr_value, 0)) * 100 / p1.cntr_value
AS [% Change]
FROM #PreWorkOSSnapShot p1
RIGHT OUTER JOIN
#PostWorkOSSnapShot p2 ON p2.object_name =
ISNULL(p1.object_name, p2.object_name)
AND p2.counter_name = ISNULL(p1.counter_name, p2.counter_name)
AND p2.instance_name = ISNULL(p1.instance_name, p2.instance_name)
WHERE p2.cntr_value - ISNULL(p1.cntr_value, 0) > 0
AND ISNULL(p1.cntr_value, 0) != 0
ORDER BY [% Change] DESC, Change DESC
DROP TABLE #PreWorkOSSnapShot
DROP TABLE #PostWorkOSSnapShot

34. Changes in performance counters and wait states

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
[object_name], [counter_name], [instance_name]
, [cntr_value], [cntr_type]
INTO #PreWorkOSSnapShot
FROM sys.dm_os_performance_counters
SELECT
wait_type, waiting_tasks_count
, wait_time_ms, max_wait_time_ms, signal_wait_time_ms
INTO #PreWorkWaitStats
FROM sys.dm_os_wait_stats
WAITFOR DELAY '00:05:00'
SELECT
wait_type, waiting_tasks_count, wait_time_ms
, max_wait_time_ms, signal_wait_time_ms
INTO #PostWorkWaitStats
FROM sys.dm_os_wait_stats
SELECT
[object_name], [counter_name], [instance_name]
, [cntr_value], [cntr_type]
INTO #PostWorkOSSnapShot
FROM sys.dm_os_performance_counters
SELECT
p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) AS wait_time_ms
, p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0)
AS signal_wait_time_ms
((p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0)) -
(p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0)))
AS RealWait
, p2.wait_type
FROM #PreWorkWaitStats p1
RIGHT OUTER JOIN
#PostWorkWaitStats p2 ON p2.wait_type = ISNULL(p1.wait_type, p2.wait_type)
WHERE p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) > 0
AND p2.wait_type NOT LIKE '%SLEEP%'
AND p2.wait_type != 'WAITFOR'
ORDER BY RealWait DESC
SELECT
p2.object_name, p2.counter_name, p2.instance_name
, ISNULL(p1.cntr_value, 0) AS InitialValue
, p2.cntr_value AS FinalValue
, p2.cntr_value - ISNULL(p1.cntr_value, 0) AS Change
, (p2.cntr_value - ISNULL(p1.cntr_value, 0)) * 100 / p1.cntr_value
AS [% Change]
FROM #PreWorkOSSnapShot p1
RIGHT OUTER JOIN
#PostWorkOSSnapShot p2 ON p2.object_name =ISNULL(p1.object_name, p2.object_name)
AND p2.counter_name = ISNULL(p1.counter_name, p2.counter_name)
AND p2.instance_name = ISNULL(p1.instance_name, p2.instance_name)
WHERE p2.cntr_value - ISNULL(p1.cntr_value, 0) > 0
AND ISNULL(p1.cntr_value, 0) != 0
ORDER BY [% Change] DESC, Change DESC
DROP TABLE #PostWorkWaitStats
DROP TABLE #PreWorkWaitStats
DROP TABLE #PreWorkOSSnapShot
DROP TABLE #PostWorkOSSnapShot

35. Changes in queries, performance counters, and wait states

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT sql_handle
    ,plan_handle
    ,total_elapsed_time
    ,total_worker_time
    ,total_logical_reads
    ,total_logical_writes
    ,total_clr_time
    ,execution_count
    ,statement_start_offset
    ,statement_end_offset
INTO #PreWorkQuerySnapShot
FROM sys.dm_exec_query_stats

SELECT [object_name]
    ,[counter_name]
    ,[instance_name]
    ,[cntr_value]
    ,[cntr_type]
INTO #PreWorkOSSnapShot
FROM sys.dm_os_performance_counters

SELECT wait_type
    ,waiting_tasks_count
    ,wait_time_ms
    ,max_wait_time_ms
    ,signal_wait_time_ms
INTO #PreWorkWaitStats
FROM sys.dm_os_wait_stats

WAITFOR DELAY '00:05:00'

SELECT wait_type
    ,waiting_tasks_count
    ,wait_time_ms
    ,max_wait_time_ms
    ,signal_wait_time_ms
INTO #PostWorkWaitStats
FROM sys.dm_os_wait_stats

SELECT [object_name]
    ,[counter_name]
    ,[instance_name]
    ,[cntr_value]
    ,[cntr_type]
INTO #PostWorkOSSnapShot
FROM sys.dm_os_performance_counters

SELECT sql_handle
    ,plan_handle
    ,total_elapsed_time
    ,total_worker_time
    ,total_logical_reads
    ,total_logical_writes
    ,total_clr_time
    ,execution_count
    ,statement_start_offset
    ,statement_end_offset
INTO #PostWorkQuerySnapShot
FROM sys.dm_exec_query_stats

SELECT p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0) AS [Duration]
    ,p2.total_worker_time - ISNULL(p1.total_worker_time, 0) AS [Time on CPU]
    ,(p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0)) - (p2.total_worker_time - ISNULL(p1.total_worker_time, 0)) AS [Time blocked]
    ,p2.total_logical_reads - ISNULL(p1.total_logical_reads, 0) AS [Reads]
    ,p2.total_logical_writes - ISNULL(p1.total_logical_writes, 0) AS [Writes]
    ,p2.total_clr_time - ISNULL(p1.total_clr_time, 0) AS [CLR time]
    ,p2.execution_count - ISNULL(p1.execution_count, 0) AS [Executions]
    ,SUBSTRING(qt.TEXT, p2.statement_start_offset / 2 + 1, (
            (
                CASE 
                    WHEN p2.statement_end_offset = - 1
                        THEN LEN(CONVERT(NVARCHAR(MAX), qt.TEXT)) * 2
                    ELSE p2.statement_end_offset
                    END - p2.statement_start_offset
                ) / 2
            ) + 1) AS [Individual Query]
    ,qt.TEXT AS [Parent Query]
    ,DB_NAME(qt.dbid) AS DatabaseName
FROM #PreWorkQuerySnapShot p1
RIGHT OUTER JOIN #PostWorkQuerySnapShot p2 ON p2.sql_handle = ISNULL(p1.sql_handle, p2.sql_handle)
    AND p2.plan_handle = ISNULL(p1.plan_handle, p2.plan_handle)
    AND p2.statement_start_offset = ISNULL(p1.statement_start_offset, p2.statement_start_offset)
    AND p2.statement_end_offset = ISNULL(p1.statement_end_offset, p2.statement_end_offset)
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) AS qt
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
    AND qt.TEXT NOT LIKE '--ThisRoutineIdentifier%'
ORDER BY [Duration] DESC

SELECT p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) AS wait_time_ms
    ,p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0) AS signal_wait_time_ms
    ,((p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0))
        - (p2.signal_wait_time_ms - ISNULL(p1.signal_wait_time_ms, 0))) AS RealWait
    ,p2.wait_type
FROM #PreWorkWaitStats p1
RIGHT OUTER JOIN #PostWorkWaitStats p2 ON p2.wait_type = ISNULL(p1.wait_type, p2.wait_type)
WHERE p2.wait_time_ms - ISNULL(p1.wait_time_ms, 0) > 0
    AND p2.wait_type NOT LIKE '%SLEEP%'
    AND p2.wait_type != 'WAITFOR'
ORDER BY RealWait DESC

SELECT p2.object_name
    ,p2.counter_name
    ,p2.instance_name
    ,ISNULL(p1.cntr_value, 0) AS InitialValue
    ,p2.cntr_value AS FinalValue
    ,p2.cntr_value - ISNULL(p1.cntr_value, 0) AS Change
    ,(p2.cntr_value - ISNULL(p1.cntr_value, 0)) * 100 / p1.cntr_value AS [% Change]
FROM #PreWorkOSSnapShot p1
RIGHT OUTER JOIN #PostWorkOSSnapShot p2 ON p2.object_name = ISNULL(p1.object_name, p2.object_name)
    AND p2.counter_name = ISNULL(p1.counter_name, p2.counter_name)
    AND p2.instance_name = ISNULL(p1.instance_name, p2.instance_name)
WHERE p2.cntr_value - ISNULL(p1.cntr_value, 0) > 0
    AND ISNULL(p1.cntr_value, 0) != 0
ORDER BY [% Change] DESC
    ,Change DESC

DROP TABLE #PreWorkQuerySnapShot
DROP TABLE #PostWorkQuerySnapShot
DROP TABLE #PostWorkWaitStats
DROP TABLE #PreWorkWaitStats
DROP TABLE #PreWorkOSSnapShot
DROP TABLE #PostWorkOSSnapShot

Resolving transaction issues

36. Observing the current locks

SELECT DB_NAME(resource_database_id) AS DatabaseName, request_session_id
, resource_type, request_status, request_mode
FROM sys.dm_tran_locks
WHERE request_session_id !=@@spid
ORDER BY request_session_id

37. Information contained in sessions, connections, and requests

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT *
FROM sys.dm_exec_sessions s
LEFT OUTER JOIN sys.dm_exec_connections c
ON s.session_id = c.session_id
LEFT OUTER JOIN sys.dm_exec_requests r
ON c.connection_id = r.connection_id
WHERE s.session_id > 50

38. How to discover which locks are currently held

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT DB_NAME(resource_database_id) AS DatabaseName
, request_session_id
, resource_type
, CASE
WHEN resource_type = 'OBJECT'
THEN OBJECT_NAME(resource_associated_entity_id)
WHEN resource_type IN ('KEY', 'PAGE', 'RID')
THEN (SELECT OBJECT_NAME(OBJECT_ID)
FROM sys.partitions p
WHERE p.hobt_id = l.resource_associated_entity_id)
END AS resource_type_name
, request_status
, request_mode
FROM sys.dm_tran_locks l
WHERE request_session_id !=@@spid
ORDER BY request_session_id

39. How to identify contended resources

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
tl1.resource_type,
DB_NAME(tl1.resource_database_id) AS DatabaseName,
tl1.resource_associated_entity_id,
tl1.request_session_id,
tl1.request_mode,
tl1.request_status
, CASE
WHEN tl1.resource_type = 'OBJECT'
THEN OBJECT_NAME(tl1.resource_associated_entity_id)
WHEN tl1.resource_type IN ('KEY', 'PAGE', 'RID')
THEN (SELECT OBJECT_NAME(OBJECT_ID)
FROM sys.partitions s
WHERE s.hobt_id = tl1.resource_associated_entity_id)
END AS resource_type_name
FROM sys.dm_tran_locks as tl1
INNER JOIN sys.dm_tran_locks as tl2
ON tl1.resource_associated_entity_id = tl2.resource_associated_entity_id
AND tl1.request_status <> tl2.request_status
AND (tl1.resource_description = tl2.resource_description
OR (tl1.resource_description IS NULL
AND tl2.resource_description IS NULL))
ORDER BY tl1.resource_associated_entity_id, tl1.request_status

40. How to identify contended resources, including SQL query details

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
tl1.resource_type
, DB_NAME(tl1.resource_database_id) AS DatabaseName
, tl1.resource_associated_entity_id
, tl1.request_session_id
, tl1.request_mode
, tl1.request_status
, CASE
WHEN tl1.resource_type = 'OBJECT'
THEN OBJECT_NAME(tl1.resource_associated_entity_id)
WHEN tl1.resource_type IN ('KEY', 'PAGE', 'RID')
THEN (SELECT OBJECT_NAME(OBJECT_ID)
FROM sys.partitions s
WHERE s.hobt_id = tl1.resource_associated_entity_id)
END AS resource_type_name
, t.text AS [Parent Query]
, SUBSTRING (t.text,(r.statement_start_offset/2) + 1,
((CASE WHEN r.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), t.text)) * 2
ELSE r.statement_end_offset
END - r.statement_start_offset)/2) + 1) AS [Individual Query]
FROM sys.dm_tran_locks as tl1
INNER JOIN sys.dm_tran_locks as tl2
ON tl1.resource_associated_entity_id = tl2.resource_associated_entity_id
AND tl1.request_status <> tl2.request_status
AND (tl1.resource_description = tl2.resource_description
OR (tl1.resource_description IS NULL
AND tl2.resource_description IS NULL))
INNER JOIN sys.dm_exec_connections c
ON tl1.request_session_id = c.most_recent_session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
LEFT OUTER JOIN sys.dm_exec_requests r ON c.connection_id = r.connection_id
ORDER BY tl1.resource_associated_entity_id, tl1.request_status

41. How to find an idle session with an open transaction

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT es.session_id, es.login_name, es.host_name, est.text
, cn.last_read, cn.last_write, es.program_name
FROM sys.dm_exec_sessions es
INNER JOIN sys.dm_tran_session_transactions st
ON es.session_id = st.session_id
INNER JOIN sys.dm_exec_connections cn
ON es.session_id = cn.session_id
CROSS APPLY sys.dm_exec_sql_text(cn.most_recent_sql_handle) est
LEFT OUTER JOIN sys.dm_exec_requests er
ON st.session_id = er.session_id
WHERE er.session_id IS NULL

42. Amount of space (total, used, and free) in tempdb

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT SUM(user_object_reserved_page_count
+ internal_object_reserved_page_count
+ version_store_reserved_page_count
+ mixed_extent_page_count
+ unallocated_extent_page_count) * (8.0/1024.0)
AS [TotalSizeOfTempDB(MB)]
, SUM(user_object_reserved_page_count
+ internal_object_reserved_page_count
+ version_store_reserved_page_count
+ mixed_extent_page_count) * (8.0/1024.0)
AS [UsedSpace (MB)]
, SUM(unallocated_extent_page_count * (8.0/1024.0))
AS [FreeSpace (MB)]
FROM sys.dm_db_file_space_usage

43. Total amount of space (data, log, and log used) by database

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT instance_name
, counter_name
, cntr_value / 1024.0 AS [Size(MB)]
FROM sys.dm_os_performance_counters
WHERE object_name = 'SQLServer:Databases'
AND counter_name IN (
'Data File(s) Size (KB)'
, 'Log File(s) Size (KB)'
, 'Log File(s) Used Size (KB)')
ORDER BY instance_name, counter_name

44. Tempdb total space usage by object type

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
SUM (user_object_reserved_page_count) * (8.0/1024.0)
AS [User Objects (MB)],
SUM (internal_object_reserved_page_count) * (8.0/1024.0)
AS [Internal Objects (MB)],
SUM (version_store_reserved_page_count) * (8.0/1024.0)
AS [Version Store (MB)],
SUM (mixed_extent_page_count)* (8.0/1024.0)
AS [Mixed Extent (MB)],
SUM (unallocated_extent_page_count)* (8.0/1024.0)
AS [Unallocated (MB)]
FROM sys.dm_db_file_space_usage

45. tempdb space usage by session

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT es.session_id
, ec.connection_id
, es.login_name
, es.host_name
, st.text
, su.user_objects_alloc_page_count
, su.user_objects_dealloc_page_count
, su.internal_objects_alloc_page_count
, su.internal_objects_dealloc_page_count
, ec.last_read
, ec.last_write
, es.program_name
FROM sys.dm_db_session_space_usage su
INNER JOIN sys.dm_exec_sessions es
ON su.session_id = es.session_id
LEFT OUTER JOIN sys.dm_exec_connections ec
ON su.session_id = ec.most_recent_session_id
OUTER APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) st
WHERE su.session_id > 50

46. Space used and reclaimed in tempdb for completed batches

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT CAST(SUM(su.user_objects_alloc_page_count
+ su.internal_objects_alloc_page_count) * (8.0/1024.0)
AS DECIMAL(20,3)) AS [SpaceUsed(MB)]
, CAST(SUM(su.user_objects_alloc_page_count
- su.user_objects_dealloc_page_count
+ su.internal_objects_alloc_page_count
- su.internal_objects_dealloc_page_count
* (8.0/1024.0) AS DECIMAL(20,3)) AS [SpaceStillUsed(MB)]
, su.session_id
, ec.connection_id
, es.host_name
, st.text AS [LastQuery]
, ec.last_read
, ec.last_write
, es.program_name
FROM sys.dm_db_session_space_usage su
INNER JOIN sys.dm_exec_sessions es ON su.session_id = es.session_id
LEFT OUTER JOIN sys.dm_exec_connections ec
ON su.session_id = ec.most_recent_session_id
OUTER APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) st
WHERE su.session_id > 50
GROUP BY su.session_id, ec.connection_id, es.login_name, es.host_name
, st.text, ec.last_read, ec.last_write, es.program_name
ORDER BY [SpaceStillUsed(MB)] DESC

47. Space used by running SQL queries

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT es.session_id
, ec.connection_id
, es.login_name
, es.host_name
, st.text
, tu.user_objects_alloc_page_count
, tu.user_objects_dealloc_page_count
, tu.internal_objects_alloc_page_count
, tu.internal_objects_dealloc_page_count
, ec.last_read
, ec.last_write
, es.program_name
FROM sys.dm_db_task_space_usage tu
INNER JOIN sys.dm_exec_sessions es ON tu.session_id = es.session_id
LEFT OUTER JOIN sys.dm_exec_connections ec
ON tu.session_id = ec.most_recent_session_id
OUTER APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) st
WHERE tu.session_id > 50

48. Space used and not reclaimed by active SQL queries

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT SUM(ts.user_objects_alloc_page_count
+ ts.internal_objects_alloc_page_count)
* (8.0/1024.0) AS [SpaceUsed(MB)]
, SUM(ts.user_objects_alloc_page_count
- ts.user_objects_dealloc_page_count
+ ts.internal_objects_alloc_page_count
- ts.internal_objects_dealloc_page_count
* (8.0/1024.0) AS [SpaceStillUsed(MB)]
, ts.session_id
, ec.connection_id
, es.login_name
, es.host_name
, st.text AS [Parent Query]
, SUBSTRING (st.text,(er.statement_start_offset/2) + 1,
((CASE WHEN er.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), st.text)) * 2
ELSE er.statement_end_offset
END - er.statement_start_offset)/2) + 1) AS [Current Query]
, ec.last_read
, ec.last_write
, es.program_name
FROM sys.dm_db_task_space_usage ts
INNER JOIN sys.dm_exec_sessions es ON ts.session_id = es.session_id
LEFT OUTER JOIN sys.dm_exec_connections ec
ON ts.session_id = ec.most_recent_session_id
OUTER APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) st
LEFT OUTER JOIN sys.dm_exec_requests er ON ts.session_id = er.session_id
WHERE ts.session_id > 50
GROUP BY ts.session_id, ec.connection_id, es.login_name, es.host_name
, st.text, ec.last_read, ec.last_write, es.program_name
, SUBSTRING (st.text,(er.statement_start_offset/2) + 1,
((CASE WHEN er.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), st.text)) * 2
ELSE er.statement_end_offset
END - er.statement_start_offset)/2) + 1)
ORDER BY [SpaceStillUsed(MB)] DESC

 

49. Indexes under row-locking pressure

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
x.name AS SchemaName
, OBJECT_NAME(s.object_id) AS TableName
, i.name AS IndexName
, s.row_lock_wait_in_ms
, s.row_lock_wait_count
FROM sys.dm_db_index_operational_stats(db_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE s.row_lock_wait_in_ms > 0
AND o.is_ms_shipped = 0
ORDER BY s.row_lock_wait_in_ms DESC

50. Indexes with the most lock escalations

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
x.name AS SchemaName
, OBJECT_NAME (s.object_id) AS TableName
, i.name AS IndexName
, s.index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(db_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE s.index_lock_promotion_count > 0
AND o.is_ms_shipped = 0
ORDER BY s.index_lock_promotion_count DESC

51. Indexes with the most unsuccessful lock escalations

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
x.name AS SchemaName
, OBJECT_NAME (s.object_id) AS TableName
, i.name AS IndexName
, s.index_lock_promotion_attempt_count - s.index_lock_promotion_count
AS UnsuccessfulIndexLockPromotions
FROM sys.dm_db_index_operational_stats(db_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE (s.index_lock_promotion_attempt_count - index_lock_promotion_count)>0
AND o.is_ms_shipped = 0
ORDER BY UnsuccessfulIndexLockPromotions DESC

52. Indexes with the most page splits

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
x.name AS SchemaName
, object_name(s.object_id) AS TableName
, i.name AS IndexName
, s.leaf_allocation_count
, s.nonleaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE s.leaf_allocation_count > 0
AND o.is_ms_shipped = 0
ORDER BY s.leaf_allocation_count DESC

53. Indexes with the most latch contention

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
x.name AS SchemaName
, OBJECT_NAME(s.object_id) AS TableName
, i.name AS IndexName
, s.page_latch_wait_in_ms
, s.page_latch_wait_count
FROM sys.dm_db_index_operational_stats(db_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE s.page_latch_wait_in_ms > 0
AND o.is_ms_shipped = 0
ORDER BY s.page_latch_wait_in_ms DESC

54. Indexes under row-locking pressure over a given period, with the queries that ran

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT x.NAME AS SchemaName
    ,OBJECT_NAME(s.object_id) AS TableName
    ,i.NAME AS IndexName
    ,s.row_lock_wait_in_ms
INTO #PreWorkIndexCount
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
    AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE s.row_lock_wait_in_ms > 0
    AND o.is_ms_shipped = 0

SELECT sql_handle
    ,plan_handle
    ,total_elapsed_time
    ,total_worker_time
    ,total_logical_reads
    ,total_logical_writes
    ,total_clr_time
    ,execution_count
    ,statement_start_offset
    ,statement_end_offset
INTO #PreWorkQuerySnapShot
FROM sys.dm_exec_query_stats

WAITFOR DELAY '01:00:00'

SELECT x.NAME AS SchemaName
    ,OBJECT_NAME(s.object_id) AS TableName
    ,i.NAME AS IndexName
    ,s.row_lock_wait_in_ms
INTO #PostWorkIndexCount
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
    AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE s.row_lock_wait_in_ms > 0
    AND o.is_ms_shipped = 0

SELECT sql_handle
    ,plan_handle
    ,total_elapsed_time
    ,total_worker_time
    ,total_logical_reads
    ,total_logical_writes
    ,total_clr_time
    ,execution_count
    ,statement_start_offset
    ,statement_end_offset
INTO #PostWorkQuerySnapShot
FROM sys.dm_exec_query_stats

SELECT p2.SchemaName
    ,p2.TableName
    ,p2.IndexName
    ,p2.row_lock_wait_in_ms - ISNULL(p1.row_lock_wait_in_ms, 0) AS RowLockWaitTimeDelta_ms
FROM #PreWorkIndexCount p1
RIGHT OUTER JOIN #PostWorkIndexCount p2 ON p2.SchemaName = ISNULL(p1.SchemaName, p2.SchemaName)
    AND p2.TableName = ISNULL(p1.TableName, p2.TableName)
    AND p2.IndexName = ISNULL(p1.IndexName, p2.IndexName)
WHERE p2.row_lock_wait_in_ms - ISNULL(p1.row_lock_wait_in_ms, 0) > 0
ORDER BY RowLockWaitTimeDelta_ms DESC

SELECT p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0) AS [Duration]
    ,p2.total_worker_time - ISNULL(p1.total_worker_time, 0) AS [Time on CPU]
    ,(p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0)) - (p2.total_worker_time - ISNULL(p1.total_worker_time, 0)) AS [Time blocked]
    ,p2.total_logical_reads - ISNULL(p1.total_logical_reads, 0) AS [Reads]
    ,p2.total_logical_writes - ISNULL(p1.total_logical_writes, 0) AS [Writes]
    ,p2.total_clr_time - ISNULL(p1.total_clr_time, 0) AS [CLR time]
    ,p2.execution_count - ISNULL(p1.execution_count, 0) AS [Executions]
    ,SUBSTRING(qt.TEXT, p2.statement_start_offset / 2 + 1, (
            (
                CASE 
                    WHEN p2.statement_end_offset = - 1
                        THEN LEN(CONVERT(NVARCHAR(MAX), qt.TEXT)) * 2
                    ELSE p2.statement_end_offset
                    END - p2.statement_start_offset
                ) / 2
            ) + 1) AS [Individual Query]
    ,qt.TEXT AS [Parent Query]
    ,DB_NAME(qt.dbid) AS DatabaseName
FROM #PreWorkQuerySnapShot p1
RIGHT OUTER JOIN #PostWorkQuerySnapShot p2 ON p2.sql_handle = ISNULL(p1.sql_handle, p2.sql_handle)
    AND p2.plan_handle = ISNULL(p1.plan_handle, p2.plan_handle)
    AND p2.statement_start_offset = ISNULL(p1.statement_start_offset, p2.statement_start_offset)
    AND p2.statement_end_offset = ISNULL(p1.statement_end_offset, p2.statement_end_offset)
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) AS qt
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
    AND qt.TEXT NOT LIKE '--ThisRoutineIdentifier%'
ORDER BY [Duration] DESC

DROP TABLE #PreWorkIndexCount
DROP TABLE #PostWorkIndexCount
DROP TABLE #PreWorkQuerySnapShot
DROP TABLE #PostWorkQuerySnapShot

55. Determining how many rows are inserted/deleted/updated/selected

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset
INTO #PreWorkQuerySnapShot
FROM sys.dm_exec_query_stats
SELECT x.name AS SchemaName
, OBJECT_NAME (s.object_id) AS TableName
, i.name AS IndexName
, s.leaf_delete_count
, s.leaf_ghost_count
, s.leaf_insert_count
, s.leaf_update_count
, s.range_scan_count
, s.singleton_lookup_count
INTO #PreWorkIndexCount
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
WAITFOR DELAY '01:00:00'
SELECT
sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset
INTO #PostWorkQuerySnapShot
FROM sys.dm_exec_query_stats
SELECT x.name AS SchemaName
, OBJECT_NAME (s.object_id) AS TableName
, i.name AS IndexName
, s.leaf_delete_count
, s.leaf_ghost_count
, s.leaf_insert_count
, s.leaf_update_count
, s.range_scan_count
, s.singleton_lookup_count
INTO #PostWorkIndexCount
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) s
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.indexes i ON s.index_id = i.index_id
AND i.object_id = o.object_id
INNER JOIN sys.schemas x ON x.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
SELECT
p2.SchemaName
, p2.TableName
, p2.IndexName
, p2.leaf_delete_count - ISNULL(p1.leaf_delete_count, 0)
AS leaf_delete_countDelta
, p2.leaf_ghost_count - ISNULL(p1.leaf_ghost_count, 0)
AS leaf_ghost_countDelta
, p2.leaf_insert_count - ISNULL(p1.leaf_insert_count, 0)
AS leaf_insert_countDelta
, p2.leaf_update_count - ISNULL(p1.leaf_update_count, 0)
AS leaf_update_countDelta
, p2.range_scan_count - ISNULL(p1.range_scan_count, 0)
AS range_scan_countDelta
, p2.singleton_lookup_count - ISNULL(p1.singleton_lookup_count, 0)
AS singleton_lookup_countDelta
FROM #PreWorkIndexCount p1
RIGHT OUTER JOIN
#PostWorkIndexCount p2 ON p2.SchemaName =
ISNULL(p1.SchemaName, p2.SchemaName)
AND p2.TableName = ISNULL(p1.TableName, p2.TableName)
AND p2.IndexName = ISNULL(p1.IndexName, p2.IndexName)
WHERE p2.leaf_delete_count - ISNULL(p1.leaf_delete_count, 0) > 0
OR p2.leaf_ghost_count - ISNULL(p1.leaf_ghost_count, 0) > 0
OR p2.leaf_insert_count - ISNULL(p1.leaf_insert_count, 0) > 0
OR p2.leaf_update_count - ISNULL(p1.leaf_update_count, 0) > 0
OR p2.range_scan_count - ISNULL(p1.range_scan_count, 0) > 0
OR p2.singleton_lookup_count - ISNULL(p1.singleton_lookup_count, 0) > 0
ORDER BY leaf_delete_countDelta DESC
SELECT
p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0) AS [Duration]
, p2.total_worker_time - ISNULL(p1.total_worker_time, 0) AS [Time on CPU]
, (p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0)) -
(p2.total_worker_time - ISNULL(p1.total_worker_time, 0))
AS [Time blocked]
, p2.total_logical_reads - ISNULL(p1.total_logical_reads, 0) AS [Reads]
, p2.total_logical_writes - ISNULL(p1.total_logical_writes, 0)
AS [Writes]
, p2.total_clr_time - ISNULL(p1.total_clr_time, 0) AS [CLR time]
, p2.execution_count - ISNULL(p1.execution_count, 0) AS [Executions]
, SUBSTRING (qt.text,p2.statement_start_offset/2 + 1,
((CASE WHEN p2.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE p2.statement_end_offset
END - p2.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
FROM #PreWorkQuerySnapShot p1
RIGHT OUTER JOIN
#PostWorkQuerySnapShot p2 ON p2.sql_handle =
ISNULL(p1.sql_handle, p2.sql_handle)
AND p2.plan_handle = ISNULL(p1.plan_handle, p2.plan_handle)
AND p2.statement_start_offset =
ISNULL(p1.statement_start_offset, p2.statement_start_offset)
AND p2.statement_end_offset =
ISNULL(p1.statement_end_offset, p2.statement_end_offset)
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) as qt
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
ORDER BY [Duration] DESC
DROP TABLE #PreWorkIndexCount
DROP TABLE #PostWorkIndexCount
DROP TABLE #PreWorkQuerySnapShot
DROP TABLE #PostWorkQuerySnapShot

56. Rebuilding and reorganizing fragmented indexes

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
CREATE TABLE #FragmentedIndexes(
DatabaseName SYSNAME
, SchemaName SYSNAME
, TableName SYSNAME
, IndexName SYSNAME
, [Fragmentation%] FLOAT)
INSERT INTO #FragmentedIndexes
SELECT
DB_NAME(DB_ID()) AS DatabaseName
, ss.name AS SchemaName
, OBJECT_NAME (s.object_id) AS TableName
, i.name AS IndexName
, s.avg_fragmentation_in_percent AS [Fragmentation%]
FROM sys.dm_db_index_physical_stats(db_id(),NULL, NULL, NULL, 'SAMPLED') s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
AND s.index_id = i.index_id
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.schemas ss ON ss.[schema_id] = o.[schema_id]
WHERE s.database_id = DB_ID()
AND i.index_id != 0
AND s.record_count > 0
AND o.is_ms_shipped = 0
DECLARE @RebuildIndexesSQL NVARCHAR(MAX)
SET @RebuildIndexesSQL = ''
SELECT
@RebuildIndexesSQL = @RebuildIndexesSQL +
CASE
WHEN [Fragmentation%] > 30
THEN CHAR(10) + 'ALTER INDEX ' + QUOTENAME(IndexName) + ' ON '
+ QUOTENAME(SchemaName) + '.'
+ QUOTENAME(TableName) + ' REBUILD;'
WHEN [Fragmentation%] > 10
THEN CHAR(10) + 'ALTER INDEX ' + QUOTENAME(IndexName) + ' ON '
+ QUOTENAME(SchemaName) + '.'
+ QUOTENAME(TableName) + ' REORGANIZE;'
END
FROM #FragmentedIndexes
WHERE [Fragmentation%] > 10
DECLARE @StartOffset INT
DECLARE @Length INT
SET @StartOffset = 0
SET @Length = 4000
WHILE (@StartOffset < LEN(@RebuildIndexesSQL))
BEGIN
PRINT SUBSTRING(@RebuildIndexesSQL, @StartOffset, @Length)
SET @StartOffset = @StartOffset + @Length
END
PRINT SUBSTRING(@RebuildIndexesSQL, @StartOffset, @Length)
EXECUTE sp_executesql @RebuildIndexesSQL
DROP TABLE #FragmentedIndexes

 

57. Rebuild/reorganize for all databases on a given server

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
CREATE TABLE #FragmentedIndexes(
DatabaseName SYSNAME
, SchemaName SYSNAME
, TableName SYSNAME
, IndexName SYSNAME
, [Fragmentation%] FLOAT)
EXEC sp_MSForEachDB 'USE [?]; INSERT INTO #FragmentedIndexes SELECT DB_NAME(DB_ID()) AS DatabaseName , ss.name AS SchemaName , OBJECT_NAME (s.object_id) AS TableName , i.name AS IndexName , s.avg_fragmentation_in_percent AS [Fragmentation%] FROM sys.dm_db_index_physical_stats(db_id(),NULL, NULL, NULL, ''SAMPLED'') s INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id INNER JOIN sys.objects o ON s.object_id = o.object_id INNER JOIN sys.schemas ss ON ss.[schema_id] = o.[schema_id] WHERE s.database_id = DB_ID() AND i.index_id != 0 AND s.record_count > 0 AND o.is_ms_shipped = 0 ;'
DECLARE @RebuildIndexesSQL NVARCHAR(MAX)
SET @RebuildIndexesSQL = ''
SELECT
@RebuildIndexesSQL = @RebuildIndexesSQL +
CASE
WHEN [Fragmentation%] > 30
THEN CHAR(10) + 'ALTER INDEX ' + QUOTENAME(IndexName) + ' ON '
+ QUOTENAME(DatabaseName) + '.'+ QUOTENAME(SchemaName) + '.'
+ QUOTENAME(TableName) + ' REBUILD;'
WHEN [Fragmentation%] > 10
THEN CHAR(10) + 'ALTER INDEX ' + QUOTENAME(IndexName) + ' ON '
+ QUOTENAME(DatabaseName) + '.'+ QUOTENAME(SchemaName) + '.'
+ QUOTENAME(TableName) + ' REORGANIZE;'
END
FROM #FragmentedIndexes
WHERE [Fragmentation%] > 10
DECLARE @StartOffset INT
DECLARE @Length INT
SET @StartOffset = 0
SET @Length = 4000
WHILE (@StartOffset < LEN(@RebuildIndexesSQL))
BEGIN
PRINT SUBSTRING(@RebuildIndexesSQL, @StartOffset, @Length)
SET @StartOffset = @StartOffset + @Length
END
PRINT SUBSTRING(@RebuildIndexesSQL, @StartOffset, @Length)
EXECUTE sp_executesql @RebuildIndexesSQL
DROP TABLE #FragmentedIndexes

58. Intelligently update statistics—simple version

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
ss.name AS SchemaName
, st.name AS TableName
, si.name AS IndexName
, si.type_desc AS IndexType
, STATS_DATE(si.object_id,si.index_id) AS StatsLastTaken
, ssi.rowcnt
, ssi.rowmodctr
INTO #IndexUsage
FROM sys.indexes si
INNER JOIN sys.sysindexes ssi ON si.object_id = ssi.id
AND si.name = ssi.name
INNER JOIN sys.tables st ON st.[object_id] = si.[object_id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE st.is_ms_shipped = 0
AND si.index_id != 0
AND ssi.rowcnt > 100
AND ssi.rowmodctr > 0
DECLARE @UpdateStatisticsSQL NVARCHAR(MAX)
SET @UpdateStatisticsSQL = ''
SELECT
@UpdateStatisticsSQL = @UpdateStatisticsSQL
+ CHAR(10) + 'UPDATE STATISTICS '
+ QUOTENAME(SchemaName) + '.' + QUOTENAME(TableName)
+ ' ' + QUOTENAME(IndexName) + ' WITH SAMPLE '
+ CASE
WHEN rowcnt < 500000 THEN '100 PERCENT'
WHEN rowcnt < 1000000 THEN '50 PERCENT'
WHEN rowcnt < 5000000 THEN '25 PERCENT'
WHEN rowcnt < 10000000 THEN '10 PERCENT'
WHEN rowcnt < 50000000 THEN '2 PERCENT'
WHEN rowcnt < 100000000 THEN '1 PERCENT'
ELSE '3000000 ROWS '
END
+ '-- ' + CAST(rowcnt AS VARCHAR(22)) + ' rows'
FROM #IndexUsage
DECLARE @StartOffset INT
DECLARE @Length INT
SET @StartOffset = 0
SET @Length = 4000
WHILE (@StartOffset < LEN(@UpdateStatisticsSQL))
BEGIN
PRINT SUBSTRING(@UpdateStatisticsSQL, @StartOffset, @Length)
SET @StartOffset = @StartOffset + @Length
END
PRINT SUBSTRING(@UpdateStatisticsSQL, @StartOffset, @Length)
EXECUTE sp_executesql @UpdateStatisticsSQL
DROP TABLE #IndexUsage

59. Estimating when a job will finish

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT r.percent_complete
, DATEDIFF(MINUTE, start_time, GETDATE()) AS Age
, DATEADD(MINUTE, DATEDIFF(MINUTE, start_time, GETDATE()) /
percent_complete * 100, start_time) AS EstimatedEndTime
, t.Text AS ParentQuery
, SUBSTRING (t.text,(r.statement_start_offset/2) + 1,
((CASE WHEN r.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), t.text)) * 2
ELSE r.statement_end_offset
END - r.statement_start_offset)/2) + 1) AS IndividualQuery
, start_time
, DB_NAME(Database_Id) AS DatabaseName
, Status
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(sql_handle) t
WHERE session_id > 50
AND percent_complete > 0
ORDER BY percent_complete DESC

60. Who’s doing what and when?

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
CREATE TABLE dbo.WhatsGoingOnHistory(
[Runtime] [DateTime],
[session_id] [smallint] NOT NULL,
[login_name] [varchar](128) NOT NULL,
[host_name] [varchar](128) NULL,
[DBName] [varchar](128) NULL,
[Individual Query] [varchar](max) NULL,
[Parent Query] [varchar](200) NULL,
[status] [varchar](30) NULL,
[start_time] [datetime] NULL,
[wait_type] [varchar](60) NULL,
[program_name] [varchar](128) NULL
)
GO
CREATE UNIQUE NONCLUSTERED INDEX
[NONCLST_WhatsGoingOnHistory] ON [dbo].[WhatsGoingOnHistory]
([Runtime] ASC, [session_id] ASC)
GO
INSERT INTO dbo.WhatsGoingOnHistory
SELECT
GETDATE()
, s.session_id
, s.login_name
, s.host_name
, DB_NAME(r.database_id) AS DBName
, SUBSTRING (t.text,(r.statement_start_offset/2) + 1,
((CASE WHEN r.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), t.text)) * 2
ELSE r.statement_end_offset
END - r.statement_start_offset)/2) + 1) AS [Individual Query]
, SUBSTRING(text, 1, 200) AS [Parent Query]
, r.status
, r.start_time
, r.wait_type
, s.program_name
FROM sys.dm_exec_sessions s
INNER JOIN sys.dm_exec_connections c ON s.session_id = c.session_id
INNER JOIN sys.dm_exec_requests r ON c.connection_id = r.connection_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.session_id > 50
AND r.session_id != @@spid
WAITFOR DELAY '00:01:00'
GO 1440 -- 60 * 24 (one day)

61. Determining where your query spends its time

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset
INTO #PreWorkQuerySnapShot
FROM sys.dm_exec_query_stats
EXEC MO.PNLYearToDate_v01iws
@pControlOrgIds = '537'
, @pCOBStart = '27 may 2009'
, @pCOBEnd = '27 may 2009'
SELECT
sql_handle, plan_handle, total_elapsed_time, total_worker_time
, total_logical_reads, total_logical_writes, total_clr_time
, execution_count, statement_start_offset, statement_end_offset,
last_execution_time
INTO #PostWorkQuerySnapShot
FROM sys.dm_exec_query_stats
SELECT
p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0) AS [Duration]
, p2.total_worker_time - ISNULL(p1.total_worker_time, 0) AS [Time on CPU]
, (p2.total_elapsed_time - ISNULL(p1.total_elapsed_time, 0)) -
(p2.total_worker_time - ISNULL(p1.total_worker_time, 0))
AS [Time waiting]
, p2.total_logical_reads - ISNULL(p1.total_logical_reads, 0) AS [Reads]
, p2.total_logical_writes - ISNULL(p1.total_logical_writes, 0)
AS [Writes]
, p2.total_clr_time - ISNULL(p1.total_clr_time, 0) AS [CLR time]
, p2.execution_count - ISNULL(p1.execution_count, 0) AS [Executions]
, p2.last_execution_time
, SUBSTRING (qt.text,p2.statement_start_offset/2 + 1,
((CASE WHEN p2.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE p2.statement_end_offset
END - p2.statement_start_offset)/2) + 1) AS [Individual Query]
, qt.text AS [Parent Query]
, DB_NAME(qt.dbid) AS DatabaseName
FROM #PreWorkQuerySnapShot p1
RIGHT OUTER JOIN
#PostWorkQuerySnapShot p2
ON p2.sql_handle = ISNULL(p1.sql_handle, p2.sql_handle)
AND p2.plan_handle = ISNULL(p1.plan_handle, p2.plan_handle)
AND p2.statement_start_offset =
ISNULL(p1.statement_start_offset, p2.statement_start_offset)
AND p2.statement_end_offset =
ISNULL(p1.statement_end_offset, p2.statement_end_offset)
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) as qt
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
AND qt.text LIKE '%PNLYearToDate_v01iws %'
ORDER BY [Parent Query], p2.statement_start_offset
DROP TABLE #PreWorkQuerySnapShot
DROP TABLE #PostWorkQuerySnapShot

62. Memory used per database

SET TRAN ISOLATION LEVEL READ UNCOMMITTED
SELECT
ISNULL(DB_NAME(database_id), 'ResourceDb') AS DatabaseName
, CAST(COUNT(row_count) * 8.0 / (1024.0) AS DECIMAL(28,2))
AS [Size (MB)]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY DatabaseName

63. Memory used by objects in the current database

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT
OBJECT_NAME(p.[object_id]) AS [TableName]
, (COUNT(*) * 8) / 1024 AS [Buffer size(MB)]
, ISNULL(i.name, '-- HEAP --') AS ObjectName
, COUNT(*) AS NumberOf8KPages
FROM sys.allocation_units AS a
INNER JOIN sys.dm_os_buffer_descriptors AS b
ON a.allocation_unit_id = b.allocation_unit_id
INNER JOIN sys.partitions AS p
INNER JOIN sys.indexes i ON p.index_id = i.index_id
AND p.[object_id] = i.[object_id]
ON a.container_id = p.hobt_id
WHERE b.database_id = DB_ID()
AND p.[object_id] > 100
GROUP BY p.[object_id], i.name
ORDER BY NumberOf8KPages DESC

64. I/O stalls at the database level

SET TRAN ISOLATION LEVEL READ UNCOMMITTED
SELECT DB_NAME(database_id) AS [DatabaseName]
, SUM(CAST(io_stall / 1000.0 AS DECIMAL(20,2))) AS [IO stall (secs)]
, SUM(CAST(num_of_bytes_read / 1024.0 / 1024.0 AS DECIMAL(20,2)))
AS [IO read (MB)]
, SUM(CAST(num_of_bytes_written / 1024.0 / 1024.0 AS DECIMAL(20,2)))
AS [IO written (MB)]
, SUM(CAST((num_of_bytes_read + num_of_bytes_written)
/ 1024.0 / 1024.0 AS DECIMAL(20,2))) AS [TotalIO (MB)]
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id
ORDER BY [IO stall (secs)] DESC

65. I/O waits at the file level

SET TRAN ISOLATION LEVEL READ UNCOMMITTED
SELECT DB_NAME(database_id) AS [DatabaseName]
, file_id
, SUM(CAST(io_stall / 1000.0 AS DECIMAL(20,2))) AS [IO stall (secs)]
, SUM(CAST(num_of_bytes_read / 1024.0 / 1024.0 AS DECIMAL(20,2)))
AS [IO read (MB)]
, SUM(CAST(num_of_bytes_written / 1024.0 / 1024.0 AS DECIMAL(20,2)))
AS [IO written (MB)], SUM(CAST((num_of_bytes_read + num_of_bytes_written)
/ 1024.0 / 1024.0 AS DECIMAL(20,2))) AS [TotalIO (MB)]
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id, file_id
ORDER BY [IO stall (secs)] DESC

66. Average read/write times per file, per database

SET TRAN ISOLATION LEVEL READ UNCOMMITTED
SELECT DB_NAME(database_id) AS DatabaseName
, file_id
, io_stall_read_ms / num_of_reads AS 'Average read time'
, io_stall_write_ms / num_of_writes AS 'Average write time'
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
WHERE num_of_reads > 0 and num_of_writes > 0
ORDER BY DatabaseName

tempdb shrink error: File ID of database was skipped because the file does not have enough free space to reclaim

Posted by Sagar Patil

This message appears when DBCC SHRINKFILE / DBCC SHRINKDATABASE cannot reclaim space from a tempdb file because sessions still hold allocations in it. The following query lists the sessions consuming the most tempdb space so the offending workload can be identified:

SELECT TOP 10
session_id,
database_id,
(user_objects_alloc_page_count + internal_objects_alloc_page_count) / 128.0
AS tempdb_usage_MB
FROM sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;
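
For reference, a shrink attempt that can trigger this message looks like the sketch below. The logical file name tempdev is assumed (it is the default name of the tempdb data file); check sys.master_files if the file has been renamed.

USE tempdb;
GO
-- Attempt to shrink the tempdb data file to 1024 MB
DBCC SHRINKFILE (tempdev, 1024);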

SQL Server logs, changing level of logging

Posted by Sagar Patil

To view SQL Server error log from SQL Server Enterprise Manager
– Expand a server group, and then expand a server.
– Expand Management, and then expand SQL Server Logs.
– Click the SQL Server Log to view it. Error log information appears in the details pane

One can also execute “exec xp_readerrorlog”

SQL server logs are generally located at “C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG”.

The log files are rolled over on each SQL Server instance restart, or when the DBCC ERRORLOG statement is run. By default SQL Server retains a maximum of six old log files; this can be changed by right-clicking SQL Server Logs and selecting Configure.
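
As a quick sketch, the error log can also be read and cycled from a query window. The extra parameters of xp_readerrorlog (log type, search strings) vary between versions, so only the basic form is shown here.

-- Read the current error log (0 = current, 1..6 = archived logs)
EXEC xp_readerrorlog 0

-- Start a new error log file without restarting the instance
DBCC ERRORLOG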

How to add detailed logging

Edit NSService.exe.config File

The default installation folder is C:\Program Files\Microsoft SQL Server\90\NotificationServices\n.n.nnn\bin.
Open the NSservice.exe.config file.

<?xml version="1.0" encoding="UTF-8"?>
<!--
The default logging levels for all components is Warning

Off = 0 < Error = 1 < Warning = 2 < Info = 3 < Verbose = 4

Change the values of the value attribute to change the logging
level.

Setting the logging level enables all log events which are less
than or equal to the log level setting
-->
<configuration>
<system.diagnostics>
<switches>
<add name="LogAdministrative" value="2"/>
<add name="LogService" value="2"/>
<add name="LogEventProvider" value="2"/>
<add name="LogEventCollector" value="2"/>
<add name="LogGenerator" value="2"/>
<add name="LogDistributor" value="2"/>
<add name="LogVacuumer" value="2"/>
<add name="LogOther" value="2"/>
<add name="LogPerformanceMonitor" value="2"/>
</switches>
</system.diagnostics>
</configuration>

Initially, each logging option is set to 2, which turns on logging for error and warning messages only.
To apply the logging changes, save the file and restart the instance of Notification Services.

 
Element name            Logging category
LogAdministrative       SQL Server Management Studio and nscontrol utility events
LogService              NS$instanceName Microsoft Windows service events
LogEventProvider        Event provider events
LogEventCollector       EventCollector object events
LogGenerator            Generator events
LogDistributor          Distributor events
LogVacuumer             Vacuumer events
LogPerformanceMonitor   Performance monitor events
LogOther                Events for all other components, such as the standard content formatter and delivery protocols

Logging Level Values: Logging levels determine which types of events are written to the Application log. You can control the logging level by setting the value attribute to one of the following values (an example follows the list).

0 Off
1 Error messages only
2 Error and warning messages
3 Error, warning, and informational messages
4 Verbose logging, which writes all messages to the Application log
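
For example, to have the generator component also write informational messages, its switch can be raised to 3 while the other switches stay at their defaults. A sketch of the single line to change in NSservice.exe.config:

<add name="LogGenerator" value="3"/>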

How to truncate SQL Server Transaction Log

Posted by Sagar Patil

Step 1.  Find Size of Database Log files

SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) like '%AdventureWorks%'
and name like '%Log%'
order by SizeMB desc

Step 2.  See if databases have active transactions. You won’t be able to truncate the log easily while there are active transactions on the database.

select name,log_reuse_wait_desc from sys.databases

name    log_reuse_wait_desc
master    NOTHING
tempdb    ACTIVE_TRANSACTION
model    NOTHING

Step 3. Locate active transaction details. If needed, kill the sessions returned by DBCC OPENTRAN (see the example after the output below).

DBCC OPENTRAN

Transaction information for database ‘Test’.

Oldest active transaction:
SPID (server process ID): 79
UID (user ID) : -1
Name          : BULK INSERT
LSN           : (63361:51900:6)
Start time    : Aug 16 2013  7:42:53:783AM
SID           : 0x978700b4443f8e489fa13f97b3f375fc
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
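
If that transaction cannot be completed or rolled back from the application, the session can be terminated; killing it rolls back its open transaction. Using the SPID from the sample output above:

KILL 79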

Step 4.  T-SQL to shrink the transaction log for all databases. To target specific databases, alter the NOT IN condition. The script only PRINTs the commands so they can be reviewed before being run; sample generated output is shown after the script.

DECLARE @dbname VARCHAR(100)
DECLARE @dbid INT
DECLARE curDatabases CURSOR FOR
SELECT name, database_id
FROM sys.databases
WHERE name NOT IN ('master', 'model', 'msdb')
OPEN curDatabases
FETCH NEXT FROM curDatabases INTO
@dbname,
@dbid
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN
DECLARE @dbfilename VARCHAR(100)
PRINT 'USE MASTER'
PRINT 'GO'
PRINT 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE'
PRINT 'GO'
PRINT 'USE [' + @dbname + ']'
PRINT 'GO'
Select @dbfilename = name
FROM sys.master_files
WHERE type = 1
AND database_id = @dbid
PRINT 'DBCC SHRINKFILE (' + @dbfilename + ', 1024)'
PRINT 'GO'
PRINT ''
END
FETCH NEXT FROM curDatabases INTO
@dbname,
@dbid
END
CLOSE curDatabases
DEALLOCATE curDatabases
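
The cursor above only PRINTs the commands. For a hypothetical database named Test with a log file named Test_log, the generated output would look roughly like this and can be executed once reviewed:

USE MASTER
GO
ALTER DATABASE [Test] SET RECOVERY SIMPLE
GO
USE [Test]
GO
DBCC SHRINKFILE (Test_log, 1024)
GO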

SQL Server Permissions of Fixed Database Roles

Posted by Sagar Patil

Fixed database roles can be mapped to the more detailed permissions that are included in SQL Server.

The following table describes the mapping of the fixed database roles to permissions.

Fixed database role   Database-level permission   Server-level permission

db_accessadmin        Granted: ALTER ANY USER, CREATE SCHEMA; Granted with GRANT option: CONNECT.   Granted: VIEW ANY DATABASE
db_backupoperator     Granted: BACKUP DATABASE, BACKUP LOG, CHECKPOINT.   Granted: VIEW ANY DATABASE
db_datareader         Granted: SELECT.   Granted: VIEW ANY DATABASE
db_datawriter         Granted: DELETE, INSERT, UPDATE.   Granted: VIEW ANY DATABASE
db_ddladmin           Granted: ALTER ANY ASSEMBLY, ALTER ANY ASYMMETRIC KEY, ALTER ANY CERTIFICATE, ALTER ANY CONTRACT, ALTER ANY DATABASE DDL TRIGGER, ALTER ANY DATABASE EVENT NOTIFICATION, ALTER ANY DATASPACE, ALTER ANY FULLTEXT CATALOG, ALTER ANY MESSAGE TYPE, ALTER ANY REMOTE SERVICE BINDING, ALTER ANY ROUTE, ALTER ANY SCHEMA, ALTER ANY SERVICE, ALTER ANY SYMMETRIC KEY, CHECKPOINT, CREATE AGGREGATE, CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE QUEUE, CREATE RULE, CREATE SYNONYM, CREATE TABLE, CREATE TYPE, CREATE VIEW, CREATE XML SCHEMA COLLECTION, REFERENCES.   Granted: VIEW ANY DATABASE
db_denydatareader     Denied: SELECT.   Granted: VIEW ANY DATABASE
db_denydatawriter     Denied: DELETE, INSERT, UPDATE.
db_owner              Granted with GRANT option: CONTROL.   Granted: VIEW ANY DATABASE
db_securityadmin      Granted: ALTER ANY APPLICATION ROLE, ALTER ANY ROLE, CREATE SCHEMA, VIEW DEFINITION.   Granted: VIEW ANY DATABASE
dbm_monitor           Granted: VIEW most recent status in Database Mirroring Monitor.   Granted: VIEW ANY DATABASE

Note: The dbm_monitor fixed database role is created in the msdb database when the first database is registered in Database Mirroring Monitor. The new dbm_monitor role has no members until a system administrator assigns users to the role.

Fixed database roles are not equivalent to their database-level permissions. For example, the db_owner fixed database role has the CONTROL DATABASE permission, but granting the CONTROL DATABASE permission does not make a user a member of the db_owner fixed database role. Members of the db_owner fixed database role are identified as the dbo user in their databases, but users with the CONTROL DATABASE permission are not.
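
A minimal sketch of the difference (the user name ReportingUser is a placeholder): membership is granted through the role, while granting the underlying permission directly leaves the user outside the role.

-- Make ReportingUser a member of the db_datareader fixed database role
EXEC sp_addrolemember 'db_datareader', 'ReportingUser'

-- Granting the equivalent permission directly does NOT add the user to the role
GRANT SELECT TO ReportingUser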

Installing, Upgrading PERL Packages

Posted by Sagar Patil

perl GetMail_.pl returned:
Can't locate Email/MIME.pm in @INC (@INC contains: C:/Perl64/site/lib C:/Perl64/lib .) at GetMail_.pl line 6.
BEGIN failed--compilation aborted at GetMail_.pl line 6.

PPM is the package management utility for ActivePerl. It simplifies the task of locating, installing, upgrading and removing Perl packages. The PPM client accesses PPM repositories (collections of packages) on the internet or on a local network. It is also used to update previously installed packages with the latest versions and to remove unused packages from your system. PPM is installed automatically with ActivePerl
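
For example, the missing Email::MIME module from the error above can be installed from the command line; the exact package name may vary by repository, so search first if the install fails.

ppm search Email-MIME
ppm install Email-MIME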

To launch PPM’s graphical user interface, run ppm without any command line arguments: ppm

Search for Packages you wish to install

Once the required packages are selected, click the RUN icon to install them.

SQL Server Error Messages

Posted by Sagar Patil

SQL Server comes with pre-defined error messages that can be raised when a particular condition occurs. Built-in error messages are stored in the sysmessages table of the master database. Depending on the severity level, error messages can be returned to the offending connection or could be recorded in the SQL Server error log. The severity level determines the type and seriousness of the problem that caused the error. The following table summarizes various error severity levels:

Error Severity Level Explanation
0 These are messages informing you of the completion status of a statement you executed, such as: “new role added”. Level 0 also includes informational warning messages such as: “Warning: Null value is eliminated by an aggregate or other SET operation.” Messages of level 0 are returned as information, not errors.
10 Informational message that is usually caused by incorrect information entered by the user. For example: “No rows were updated or deleted.”
11 through 16 These are errors that are caused by users and can be corrected by users. For example:

  • “User name ‘JohnDoe’ not found.”
  • “Cannot create an index on ‘authors21’, because this table does not exist in database ‘pubs’.”
  • “The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.”
  • “CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID 1. Most significant primary key is ‘121’.”
17 This severity indicates insufficient resources for performing a requested operation. For example, SQL Server might have run out of locks, as in the following: “Sort failed. Out of space or locks in database ‘pubs’.” Be sure to adjust SQL Server configuration accordingly to avoid such errors in the future.
18 Non-fatal internal error has occurred. This means that SQL Server encountered errors in the internal code, but the connection is maintained. For example: “The server encountered a stack overflow during compile time.” or “Transaction manager has canceled the distributed transaction.” If your users continuously encounter such errors and you cannot resolve the problem you should ensure that you have applied the latest SQL Server service pack. If that does not help, contact Microsoft’s technical support.
19 This severity level indicates lack of resources, but not due to SQL Server configuration. Some of these are fairly harmless, for instance: “The log file for database ‘test’ is full. Back up the transaction log for the database to free up some log space.” This simply means that you need to backup the log or increase the size of the transaction log files. Other errors of severity 19 could be quite serious, for instance: “There is no room to hold the buffer resource lock %S_BUF in SDES %S_SDES. Contact Technical Support.” If you encounter such errors, contact Microsoft’s technical support ASAP. The Transact-SQL batch that caused error with severity 19 will terminate, but the connection will remain open. Error levels with severity 19 or higher are written to the SQL Server error log automatically.
20 Fatal error on the current connection. This means the session that encountered the error will log the error and then will be disconnected from SQL Server. For example: “Row compare failure.” or “Sort cannot be reconciled with transaction log.” Be sure to look up the resolution for such errors at support.microsoft.com – many of these are well documented. Severity level of 20 usually does not mean that database is damaged.
21 An error has occurred which affects all databases on the current instance of SQL Server. For example: “Error writing audit trace. SQL Server is shutting down.” or “Could not open tempdb. Cannot continue.” A severity level of 21 usually does not mean any database is damaged. You might have to review the contents of system tables and the configuration options to resolve errors of this severity.
22 Not encountered very often, this severity level is usually associated with hardware (disk or cache) errors. Level 22 indicates that table integrity is suspect. For example: “Could not locate row in sysobjects for system catalog ‘%.*ls’ in database ‘XYZ’. This system catalog should exist in all databases. Run DBCC CHECKTABLE on sysobjects in this database.” You should run DBCC CHECKTABLE on a particular table or all tables in the database. The safest bet is to execute DBCC CHECKDB to examine the integrity of all tables. Executing DBCC CHECKDB with one of the REPAIR options can usually fix the problem. If the problem is related to index pages, drop and recreate the affected index. You might also have to restore the database from backup.
23 Severity of 23 indicates that the integrity of the entire database is affected and the database will be marked suspect. For example: “Possible schema corruption. Run DBCC CHECKCATALOG.” These types of errors are usually caused by hardware issues. More than likely you will have to restore the database from backup. Run DBCC CHECKDB after restoring to ensure that the database is not corrupt.
24 Severity of 24 usually spells hardware failure; the error will look similar to the following: “I/O error %ls detected during %S_MSG at offset %#016I64x in file ‘%ls’.” You might have to reload the database from backup. Be sure to execute DBCC CHECKDB to check database consistency first. You might also wish to contact your hardware vendor.
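
The built-in messages and their severities can be browsed directly from the sysmessages table mentioned above; a minimal sketch:

SELECT error, severity, description
FROM master.dbo.sysmessages
WHERE severity >= 17
AND msglangid = 1033 -- English messages
ORDER BY severity, error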

SQL server SQLDIAG Utility

Posted by Sagar Patil

SQLDIAG.exe executable can be found in the SQL Server installation directory under the BINN folder. SQLDIAG records the SQL Server configuration, contents of the error logs (current, as well as archived logs), and Windows configuration including registry keys. If SQL Server is running when SQLDIAG is executed, it will record the output of the following routines:

  1. sp_configure
  2. sp_who
  3. sp_lock
  4. sp_helpdb
  5. xp_msver
  6. sp_helpextendedproc
  7. SELECT * FROM sysprocesses
  8. DBCC INPUTBUFFER for all active sessions
  9. SQLDIAG will also find the roots of any blocking issues
  10. Last 100 queries and exceptions

If SQL Server isn’t running, then SQLDIAG won’t collect SQL Server connection information and INPUTBUFFERs. SQLDIAG output will include the following information about Windows:

  1. Operating system report
  2. Hard drive report
  3. Memory report
  4. Processor report
  5. Services report
  6. Program groups report
  7. Startup programs report
  8. Environment variables report
  9. Software environment report
  10. Network connections report
  11. IRQ’s report
  12. Problem devices report

Generating the output of SQLDIAG can be helpful during disaster recovery since it contains a report of the Windows and SQL Server configuration. Note that you must run the SQLDIAG utility directly on the server; you cannot run it on a remote client. The full syntax of SQLDIAG is as follows:

>──sqldiag─┬───────────────────┬─┬───────────────────────────────────┬──>
           ├─ -? ──────────────┤ └─┬──────────────┬─┬──────────────┬─┘
           └─ -I instance_name─┘   └─ -U login_id─┘ ├─ -P password─┤
                                                    └─ -E ─────────┘

>─┬─────────────────┬─┬──────┬─┬──────┬─┬──────┬────────────────────────>
  └─ -O output_file─┘ └─ -X ─┘ └─ -M ─┘ └─ -C ─┘

Parameters are summarized in the following table:

Parameter Explanation
-? Returns SQLDIAG syntax
-I Instance name. If not specified SQLDIAG will attempt connecting to the default instance
-U SQL Server login or Windows login used for connecting
-P Password of the login specified with -U
-E This option advises SQLDIAG to use trusted connections; the password is not provided. -E is mutually exclusive with -P
-O The name of the output file. Default name is SQLDIAG.txt. The file is stored in the SQL Server installation directory in LOG folder
-X Advises SQLDIAG to exclude error logs from the output. This is useful if the error logs are too large to be included. You can read error logs directly from ERRORLOG files.
-M Advises SQLDIAG to include DBCC STACKDUMP in its output
-C Advises SQLDIAG to include cluster information in the output

For example, the following command generates the SQLDIAG report on the default instance of SQL Server:

sqldiag
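
A more targeted run, using the parameters above (the instance name and output path are placeholders), collects the report for a named instance over a trusted connection and excludes the error logs:

sqldiag -E -I MyInstance -O C:\Temp\sqldiag_report.txt -X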