SqlDev PowerShell 102: Working with Data and Variables

In my previous post, SqlDev PowerShell 101: Getting Started with PowerShell, I walked through how to configure and use PowerShell to run basic commands. This post will cover working with data and variables in PowerShell. Specifically, I’m going to demonstrate how to connect to a database (i.e. SQL Server) using Invoke-Sqlcmd, retrieve data, store the data in a variable, and work with that variable.

Note: Invoke-Sqlcmd does require the use of the sqlps module. If you don’t have it installed yet, check here for more information.
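
If PowerShell doesn't recognize the cmdlet, the module may simply not be loaded in your session. On SQL Server 2012 and later, you can load it explicitly (older versions shipped this functionality as a snap-in instead):

Import-Module sqlps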

From a new PowerShell window, write the following command:

Invoke-Sqlcmd -Query "SELECT DB_NAME();"

Tip: You can auto-complete “Invoke-Sqlcmd” after typing “Invoke-S” and hitting TAB on your keyboard.

What’s happening here? Well, because no server instance was specified, the cmdlet connects to the default SQL Server instance on the local computer and runs the specified query. “Column1” is the placeholder name PowerShell assigns because no column name was specified; SQL Server Management Studio (SSMS) flags the same omission as “(No column name)”. You can fix this by aliasing the column:

Invoke-Sqlcmd -Query "SELECT DB_NAME() AS MyDatabaseName;"

But what if you don’t have a local database instance, or you want to connect to another database? No problem. Just add the -ServerInstance parameter:

Invoke-Sqlcmd -Query "SELECT DB_NAME() AS MyDatabaseName;" -ServerInstance "."

“.” refers to the local, default SQL Server instance. If you want to connect to a named or remote instance, just replace this value with your server name.
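
For example, connecting to a hypothetical named instance would look like this (replace the server and instance names with your own):

Invoke-Sqlcmd -Query "SELECT DB_NAME() AS MyDatabaseName;" -ServerInstance "MyServer\MyInstance"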

Tip: Hit the “Up” arrow to recall the previous command, then just edit it.

So that’s all pretty easy so far, but honestly, if all I needed to do was run a simple query, I’d do it in SSMS. No, the real power of PowerShell is the .NET Framework. So let’s see this in action by assigning our results to a variable. A variable is defined using the dollar sign ($) and a name; defining a type is optional. Let’s do that now.

$DbName = Invoke-Sqlcmd -Query "SELECT DB_NAME() AS MyDatabaseName;" -ServerInstance "."

To view the value stored in the variable, append the variable’s name to the previous command, separated by a semicolon (;).

$DbName = Invoke-Sqlcmd -Query "SELECT DB_NAME() AS MyDatabaseName;" -ServerInstance "."; $DbName

Alternatively, you can simply type the variable’s name and press Enter.

$DbName

Now let’s run a query that returns multiple columns:

$DbDetails = Invoke-Sqlcmd -Query "SELECT database_id, name, state_desc, recovery_model_desc FROM sys.databases WHERE name = 'master';" -ServerInstance "."; $DbDetails

We didn’t define a data type, so let’s see what data type PowerShell assigned for us. To do this, we’ll use the GetType method:

$DbDetails.GetType()

You can see that the variable is defined as a system object; specifically, it’s a data row. That’s important, because it determines how we’ll interact with the variable. For instance, what if we wanted to reference the value stored in the variable, not the variable as an object? You can do this by referencing the column stored in the object directly:

$DbDetails.name

Now let’s run the same query, but remove the database criteria from the WHERE statement:

$DbDetailsAll = Invoke-Sqlcmd -Query "SELECT database_id, name, state_desc, recovery_model_desc FROM sys.databases;" -ServerInstance "."; $DbDetailsAll

Your results will vary, of course, but the important thing is that you should have more than one row returned.

Now let’s return just the database names again:

$DbDetailsAll.name

Hopefully you’re starting to see how retrieving an array of values and accessing just a single attribute could be useful.

Now let’s iterate through our list of databases and perform an action; in this example, we’ll retrieve the count of tables in each database. To do this, we’ll need to use a Foreach loop.

Foreach ($DatabaseRow In $DbDetailsAll) {$TableCount = Invoke-Sqlcmd -Query "SELECT COUNT(*) AS TableCount FROM sys.tables;" -ServerInstance "." -Database $DatabaseRow.name; $TableCount }

This is the biggest statement we’ve written yet, so let’s take a minute to break it down.

We start the loop by calling Foreach and specifying how to iterate through the array. As each row in the array comes into focus, it’s stored in the $DatabaseRow variable we’ve just defined. Everything inside the curly brackets ({}) executes once for each row in the array: we connect to the local instance, set the database context to the name currently stored in $DatabaseRow (via the -Database parameter), run the query, and store the results in a new variable, $TableCount, which is then output to the screen.
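
By the way, the one-liner above is easier to read when spread across multiple lines; this is purely cosmetic, and the code is identical:

Foreach ($DatabaseRow In $DbDetailsAll)
{
    # Run the query once per database, using -Database to switch context
    $TableCount = Invoke-Sqlcmd -Query "SELECT COUNT(*) AS TableCount FROM sys.tables;" -ServerInstance "." -Database $DatabaseRow.name;
    $TableCount
}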

It’s actually a pretty short piece of code, given all that it’s doing. But it’s difficult to tell which TableCount maps to which database, so let’s add that information to our output:

Foreach ($DatabaseRow In $DbDetailsAll) {$TableCount = Invoke-Sqlcmd -Query "SELECT '$DatabaseRow.name' AS DatabaseName, COUNT(*) AS TableCount FROM sys.tables;" -ServerInstance "." -Database $DatabaseRow.name; $TableCount }

Whoa, that’s not what we want! Inside a double-quoted string, PowerShell expands the variable $DatabaseRow but treats “.name” as literal text, so we get the object’s type name instead of the value. To actually retrieve the value, we’re going to store the database name in a string variable first, then reference that newly-defined variable.

Foreach ($DatabaseRow In $DbDetailsAll) {$DatabaseName = $DatabaseRow.name; $TableCount = Invoke-Sqlcmd -Query "SELECT '$DatabaseName' AS DatabaseName, COUNT(*) AS TableCount FROM sys.tables;" -ServerInstance "." -Database $DatabaseRow.name; $TableCount }

Voila! Now we have a list of databases on a server and the associated count of tables in each database.
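
Note: the intermediate variable isn’t strictly necessary. PowerShell’s subexpression operator, $(), evaluates the property access inside the double-quoted string and works just as well:

Foreach ($DatabaseRow In $DbDetailsAll) {$TableCount = Invoke-Sqlcmd -Query "SELECT '$($DatabaseRow.name)' AS DatabaseName, COUNT(*) AS TableCount FROM sys.tables;" -ServerInstance "." -Database $DatabaseRow.name; $TableCount }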

If you’re wondering why you’d ever actually use this PowerShell script, well… you probably wouldn’t. But it’s important to have these building blocks before going into more advanced concepts. In my next post, we’ll be creating a config table of SQL Server instances, connecting to each server, retrieving details about missing indexes, and writing those results out to disk. That should be a little more useful. ;)

Until then… happy scripting! :)

Invoke-SqlStmt – PowerShell for SQL Server & Teradata Queries

I’ve been using PowerShell quite a bit over the last few months, and I finally feel I’m at the point where I can start sharing some of what I’ve learned.

I started a blog post that walks through using PowerShell for database development and automation, and I quickly realized that a lot of what I want to show depends on the first function I wrote, Invoke-SqlStmt. And by “wrote,” I mean “blatantly borrowed from Chad Miller’s Invoke-SqlCmd2 script.” ;) Thank you, Chad! The main difference is that my function supports connecting to either Teradata or SQL Server, depending on the parameters passed.

One thing worth mentioning is that I spent a lot of time researching all of the different ways to connect to SQL Server. In the end, I went with ADO.NET, as a lot of what I will be doing is working with data sets (i.e. running a query against ServerA and writing the results to ServerB).
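
For instance, here’s a minimal sketch of that ServerA-to-ServerB pattern, pairing the function below with ADO.NET’s SqlBulkCopy; all of the server, database, and table names are placeholders:

$ds = Invoke-SqlStmt -C MSSQL -S "ServerA" -D "SourceDb" -Q "SELECT * FROM dbo.SourceTable;" -AS DataSet

# Bulk-load the retrieved DataTable into the target server
$bcp = New-Object System.Data.SqlClient.SqlBulkCopy("Server=ServerB;Database=TargetDb;Integrated Security=True;")
$bcp.DestinationTableName = "dbo.TargetTable"
$bcp.WriteToServer($ds.Tables[0])
$bcp.Close()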

Invoke-SqlStmt

##################################################################################################
#
#   .NAME
#       Invoke-SqlStmt.ps1
#
#   .SYNOPSIS
#       Executes a SQL statement against either a Teradata or SQL Server source using ADO.NET
#
#   .INPUT
#       C - Connection Type; Required; MSSQL or TDATA
#       S - Server Name; Required; server name for any valid SQL Server or Teradata instance
#       D - Database Name; Optional; initial database context for SQL Server connections
#       Q - Query; Optional; either Q or F is required
#       F - Filename; Optional; either Q or F is required
#       U - Username; Optional; if not supplied, Integrated mode is assumed
#       P - Password; Optional; if not supplied, Integrated mode is assumed
#       T - Connection Timeout; Optional; defaults to 600s but can be overridden
#       AS - Optional; can specify whether DataSet, DataTable, or DataRow will be returned
#
#   .OUTPUTS
#       System.Data.(DataSet|DataTable|DataRow)
#
#   .CREDIT
#       Modified from Chad Miller's Invoke-SqlCmd2 script:
#       http://gallery.technet.microsoft.com/scriptcenter/7985b7ef-ed89-4dfd-b02a-433cc4e30894#content
#
#  --------------------------------------------------------------------------------------------
#
#   .EXAMPLES
#
#   Invoke-SqlStmt -C MSSQL -S "(local)" -Q "SELECT @@SERVERNAME;"
#     This example connects to the default instance of the Database Engine on the local computer and runs a basic T-SQL query.
#
#   Invoke-SqlStmt -C MSSQL -S "(local)" -F "C:\myScript.sql" | Out-File -filePath "C:\myOutput.txt"
#     This example reads a file containing T-SQL statements, runs the file, and writes the output to another file.
#
#   Invoke-SqlStmt  -C MSSQL -S "(local)" -Q "PRINT 'hello world'" -Verbose
#     This example uses the PowerShell -Verbose parameter to return the message output of the PRINT command.
#
#   Invoke-SqlStmt  -C TDATA -S "tdprod01" -Q "SELECT * FROM DBC.Tables;"
#     This example connects to a Teradata system (the server name is a placeholder) and executes a query.
#
#  --------------------------------------------------------------------------------------------
#
#   AUTHOR:     Michelle Ufford
#   CREATED:    2012-12-12
#
#  CHANGE HISTORY:
#  --------------------------------------------------------------------------------------------
#  ChangeDate/Version   Author                  Description
#  --------------------------------------------------------------------------------------------
#  20121212.0           Michelle Ufford         Initial Creation
#  20121218.1           Michelle Ufford         Tweaks
##################################################################################################
 
function Invoke-SqlStmt
{
    [CmdletBinding()]
    param(
    [Parameter(Position=0, Mandatory=$false)] [ValidateSet("MSSQL", "TDATA")] [string]$C="MSSQL",
    [Parameter(Position=1, Mandatory=$true)]  [string]$S,
    [Parameter(Position=2, Mandatory=$false)] [string]$D,
    [Parameter(Position=3, Mandatory=$false)] [string]$Q,
    [Parameter(Position=4, Mandatory=$false)] [ValidateScript({test-path $_})] [string]$F,
    [Parameter(Position=5, Mandatory=$false)] [string]$U,
    [Parameter(Position=6, Mandatory=$false)] [string]$P,
    [Parameter(Position=8, Mandatory=$false)] [Int32]$T=600,
    [Parameter(Position=9, Mandatory=$false)] [ValidateSet("DataSet", "DataTable", "DataRow")] [string]$AS="DataRow"
    )
 
    # Check for input files
    if ($F)
    {
        $filePath = $(resolve-path $F).path
        $Q =  [System.IO.File]::ReadAllText("$filePath")
    }
 
    # ----------------------
    # Teradata Connections
    # ----------------------
    if ($C -eq "TDATA")
    {
        $ConnectionString = "Data Source={0}; User Id={1}; Password={2}; Connection Pooling Timeout={3};" -f $S, $U, $P, $T
 
        Add-Type -AssemblyName System.Data
        $Factory = [System.Data.Common.DbProviderFactories]::GetFactory("Teradata.Client.Provider")
        $Connection = $Factory.CreateConnection()
        $Connection.ConnectionString = $ConnectionString
        $Connection.Open()
        $Command = $Connection.CreateCommand()
        $Command.CommandText = $Q
        $DataSet = New-Object System.Data.DataSet
        $DataAdapter = $Factory.CreateDataAdapter()
        $DataAdapter.SelectCommand = $Command
        $DataAdapter.SelectCommand.CommandTimeout = $T
        [void]$DataAdapter.Fill($DataSet)  # Fill() returns a row count; [void] suppresses it
        $Connection.Close()
 
        # get a table contained in the DataSet, http://technet.microsoft.com/en-us/library/dd347701.aspx
        switch ($As)
        {
              'DataSet'   { Write-Output ($DataSet) }
              'DataTable' { Write-Output ($DataSet.Tables) }
              'DataRow'   { Write-Output ($DataSet.Tables[0]) }
        }
    }
    else
    # ------------------------
    # SQL Server Connections
    # ------------------------
    {
        $Connection = New-Object System.Data.SqlClient.SQLConnection
 
        if ($U)
        { $ConnectionString = "Server={0};Database={1};User ID={2};Password={3};Trusted_Connection=False;Connect Timeout={4}" -f $S,$D,$U,$P,$T }
        else
        { $ConnectionString = "Server={0};Database={1};Integrated Security=True;Connect Timeout={2}" -f $S,$D,$CT }
 
        $Connection.ConnectionString=$ConnectionString
 
        #Following EventHandler is used for PRINT and RAISERROR T-SQL statements. Executed when -Verbose parameter specified by caller
        if ($PSBoundParameters.Verbose)
        {
            $Connection.FireInfoMessageEventOnUserErrors=$true
            $Handler = [System.Data.SqlClient.SqlInfoMessageEventHandler] {Write-Verbose "$($_)"}
            $Connection.add_InfoMessage($Handler)
        }
 
        $Connection.Open()
        $Command = New-Object System.Data.SqlClient.SqlCommand
        $Command.Connection = $Connection
        $Command.CommandText = $Q
        $Command.CommandTimeout = $T
        $DataSet = New-Object System.Data.DataSet
        $DataAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
        $DataAdapter.SelectCommand = $Command
        [void]$DataAdapter.Fill($DataSet)  # Fill() returns a row count; [void] suppresses it
        $Connection.Close()
 
        switch ($As)
        {
            'DataSet'   { Write-Output ($DataSet) }
            'DataTable' { Write-Output ($DataSet.Tables) }
            'DataRow'   { Write-Output ($DataSet.Tables[0]) }
        }
    }
}

I plan to have a follow-up post up shortly that will walk you through how to use this script step-by-step.

TVP Permissions

Here’s just a short post to feed the search engine gerbils. Today, I created a stored procedure that accepted a list of IDs and filtered them. The most efficient way to handle this was using table-valued parameters (TVPs). My code looks similar to this:

USE AdventureWorks2012;
GO
 
/* Create our new table type */
IF NOT EXISTS(SELECT * FROM sys.types WHERE name = 'CustomerList')
BEGIN
    CREATE TYPE dbo.CustomerList AS TABLE 
    ( 
          CustomerID INT
        , ID         INT
    );
END;
 
IF OBJECTPROPERTY(OBJECT_ID('dbo.CustomerTerritoryFilterGet_sp')
, N'IsProcedure') IS NOT NULL
    DROP PROCEDURE dbo.CustomerTerritoryFilterGet_sp;
GO
 
/* Create a proc to test with */
CREATE PROCEDURE dbo.CustomerTerritoryFilterGet_sp
      @Customers        CustomerList READONLY
    , @TerritoryFilter  INT
 
AS
BEGIN
 
    SELECT DISTINCT sc.CustomerID, c.ID
    FROM Sales.Customer AS sc
    JOIN @Customers AS c
        ON sc.CustomerID = c.CustomerID
    WHERE sc.TerritoryID = @TerritoryFilter;
 
    RETURN 0;
END
GO
 
/* Test our stored procedure */
DECLARE @myList CustomerList;
INSERT INTO @myList 
VALUES (1, 10), (2, 20), (3, 30), (4, 40), (5, 50), (6, 60), (7, 70);
 
EXECUTE dbo.CustomerTerritoryFilterGet_sp
      @Customers        = @myList
    , @TerritoryFilter  = 1;
GO
 
GRANT EXECUTE ON dbo.CustomerTerritoryFilterGet_sp To myApp;
GO

This would typically be sufficient for most stored procedures. However, my app dev was getting the following error:

The EXECUTE permission was denied on the object 'customerList', database 'AdventureWorks2012', schema 'dbo'.

Wait, we need to explicitly grant permissions to the new data type? Okay, that’s easy enough… let’s just run a quick GRANT statement:

GRANT EXECUTE ON dbo.CustomerList To myApp;
Msg 15151, Level 16, State 1, Line 1
Cannot find the object 'CustomerList', because it does not exist or you do not have permission.

Okay, now I’m confused… what am I missing? It turns out you need to use a slightly different syntax to grant permissions on a user-defined type. This syntax, which explicitly tells SQL Server that we’re granting permissions on the TYPE securable class, works just fine:

GRANT EXECUTE ON TYPE::dbo.CustomerList To myApp;
 
/* Time to clean up! */
-- DROP PROC dbo.CustomerTerritoryFilterGet_sp;
-- DROP TYPE dbo.CustomerList;
Command(s) completed successfully.

You can find the full syntax here on Books Online:
http://msdn.microsoft.com/en-US/library/ms174346(v=SQL.90).aspx

Are You Approaching Your Partition Range Limits?

In my post last week, How To Estimate Data Utilization, I said that it may be my last post for a while. Well… apparently I lied. :)

For those of you who use table partitioning, you know that you need to define a partition function and scheme prior to applying partitioning to an index. Personally, I tend to build the function a couple of years out, ending the ranges at the close of a calendar year. Now, if I failed to expand a partition range at the end of the year, then come January 1st, all of my data would be written to the same partition. Not the end of the world, no, but it causes all kinds of nasty performance and maintenance issues. Thus, as part of my end-of-year / maternity-leave preparations, I’m in the process of examining all partition functions to identify those that need to have their partition ranges expanded. For those interested, here’s the script I used:

IF OBJECT_ID('tempdb..#Results') IS NOT NULL
    DROP TABLE #Results;
 
CREATE TABLE #Results
(
      databaseName  NVARCHAR(128)
    , schemaName    NVARCHAR(128)
    , functionName  NVARCHAR(128)
    , data_space_id INT
    , maxRangeValue SQL_VARIANT
)
 
/* Grab results for each database and store in our temp table.  
   And no, I don't *need* to select from sys.indexes and perform 
   left joins, but I'm overly cautious and want to make sure 
   I'm not accidentally missing any databases. :) */
 
--EXECUTE master.dbo.sp_msforeachdb
EXECUTE sp_foreachdb 'USE ?;
INSERT INTO #Results
SELECT DB_NAME() AS databaseName
    , sps.name AS schemaName
    , spf.name AS functionName
    , sps.data_space_id 
    , MAX(prv.value) AS maxRangeValue
FROM sys.indexes AS i
LEFT JOIN sys.partition_schemes AS sps WITH (NOLOCK)
    ON i.data_space_id = sps.data_space_id
LEFT JOIN sys.partition_functions AS spf WITH (NOLOCK)
    ON sps.function_id = spf.function_id
LEFT JOIN sys.partition_range_values AS prv WITH (NOLOCK)
    ON spf.function_id = prv.function_id
GROUP BY sps.name
    , spf.name
    , sps.data_space_id;';
/*  
    sp_foreachdb was written by SQL MVP Aaron Bertrand and can be downloaded 
    at http://www.mssqltips.com/sqlservertip/2201/making-a-more-reliable-and-flexible-spmsforeachdb/
    Alternatively, you can also use sys.sp_MSforeachdb
*/
 
/* Make sure we're not missing any major databases */
SELECT * FROM sys.databases WHERE name NOT IN (SELECT databaseName FROM #Results);
 
/* Retrieve our results */
SELECT * 
FROM #Results
WHERE schemaName IS NOT NULL
ORDER BY maxRangeValue;

Example Results:

databaseName        schemaName                      functionName                          data_space_id   maxRangeValue
------------------- ------------------------------- ------------------------------------- --------------- -------------------------
HistoricalMart      dailyRangeDate_ps               dailyRangeDate_pf                     65609           2011-12-31 00:00:00.000
AdventureWorks      yearlyRangeSmallDateTime_ps     yearlyRangeSmallDateTime_pf           65605           2012-01-01 00:00:00.000
dbaTools            monthlyRangeDateTime_ps         monthlyRangeDateTime_pf               65604           2012-12-01 00:00:00.000
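
Once a function shows up in these results, extending it is just a matter of a SPLIT. Here’s a minimal sketch against the monthly function above; I’m assuming the new partition should land on the PRIMARY filegroup, so adjust the NEXT USED clause for your environment:

ALTER PARTITION SCHEME monthlyRangeDateTime_ps
NEXT USED [PRIMARY];

ALTER PARTITION FUNCTION monthlyRangeDateTime_pf()
SPLIT RANGE ('2013-01-01 00:00:00.000');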

How To Estimate Data Utilization

Recently, on a conference call presenting data growth rates and database capacity projections, I had a top-line executive ask, “But how much of that data are we actually using today?” The question was met with silence; unless you have rigorous auditing in place (and kudos to you if you do), it’s a difficult question to answer. It raises the question: is there some way to glean this information from SQL Server? I think the answer is “yes,” if you make some assumptions and understand what you’re looking at.

SQL Server collects stats about every time an index is used and how it is used (i.e. whether a user performed a seek or a scan against the index). It also provides a DMV to view these stats: sys.dm_db_index_usage_stats.
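
If you’ve never explored this DMV, a quick query against the current database will give you a feel for what it tracks:

SELECT [object_id]
    , index_id
    , user_seeks
    , user_scans
    , user_lookups
    , last_user_seek
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();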

This DMV provides a wealth of great information, but to answer our question of “What data is actually being used?”, we have to refine our criteria. Are we talking in terms of table counts or data size? I’d argue that data size is more important than table counts; one unqueried million-row table is more wasteful than a hundred ten-row tables.

Also, are we looking at indexes or content? From a database perspective, I’m more interested in indexes: how much space are we wasting on unused indexes? To identify this, I need to look at the activity on each individual index.

From a business perspective, I would be more interested in content (i.e. tables): how much business information is being stored that no one is even looking at? To answer this question, I need to roll up all index usage to see if *any* of the indexes on a table were used. Since both were of interest to me, I decided to write queries to answer both questions.

Lastly, we need to understand the flaws with this data. Chiefly, I cannot tell whether a user requested one row from a million-row table, or if [s]he needed all of the data in the table. This is a pretty important issue, especially with large historical data stores, and it’s where I have to make the biggest assumption: if even one person looked at one row in the table, I count all pages in the table as having been accessed.

Now, you may make different decisions than I did above, and that’s fine… each environment and project has different needs. But these assumptions are very important to understanding the output of the query below:

USE master;
GO
 
/* 
    This will give you an approximation of how much data is being utilized on a server.
    Since the usage stats are only valid as of the last server restart, we should start off
    with an idea of how long the stats have been accruing.
*/
 
/* Find out when the server was last rebooted */
-- 2008
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
-- 2005
SELECT create_date FROM sys.databases WHERE name = 'tempdb';
 
 
/* Create a temporary table to hold our data, since we're going to iterate through databases */
IF OBJECT_ID('tempdb..#Results') IS NOT NULL
    DROP TABLE #Results;
 
CREATE TABLE #Results
(
      databaseName  NVARCHAR(128)
    , tableName     NVARCHAR(128)
    , indexID       INT
    , records       BIGINT
    , activity      BIGINT
    , totalPages    BIGINT
);
 
/*  
    sp_foreachdb was written by SQL MVP Aaron Bertrand and can be downloaded 
    at http://www.mssqltips.com/sqlservertip/2201/making-a-more-reliable-and-flexible-spmsforeachdb/
    Alternatively, you can also use sys.sp_MSforeachdb
*/
--EXECUTE master.dbo.sp_foreachdb
EXECUTE sys.sp_MSforeachdb
'   USE ?; 
 
    -- You can glean a lot of information about historical data usage from partitions
    -- but for now, we will just roll up any partitions we may have
    WITH myCTE AS
    (
        SELECT p.[object_id] AS objectID
            , p.index_id
            , SUM(p.[rows]) AS records
            , SUM(au.total_pages) AS totalPages
        FROM sys.partitions AS p WITH (NOLOCK)
        JOIN sys.allocation_units AS au WITH (NOLOCK)
            ON p.hobt_id = au.container_id
        GROUP BY p.[object_id] 
            , p.index_id
    )
 
    -- Grab all tables and join to our usage stats DMV
    INSERT INTO #Results
    SELECT DB_NAME() AS databaseName
        , t.name
        , x.index_id
        , MAX(x.records) AS records
        , ISNULL(SUM(us.user_lookups + us.user_scans + us.user_seeks), 0) AS activity
        , SUM(x.totalPages) AS totalPages
    FROM sys.tables AS t WITH (NOLOCK)
    JOIN myCTE AS x
        ON t.[object_id] = x.objectID
    LEFT JOIN sys.dm_db_index_usage_stats AS us WITH (NOLOCK)
        ON us.[object_id] = x.objectID
        AND us.index_id = x.index_id
        AND us.database_id = DB_ID()
    GROUP BY t.name
    , x.index_id;'
 
/* Because we're looping through databases, make sure we're not missing any major ones */
SELECT * FROM sys.databases WHERE name NOT IN (SELECT databaseName FROM #Results);
 
/* Retrieve actual % data utilization, which is performed at the index level */
SELECT databaseName
    , SUM(queriedPages) AS TotalQueriedPages
    , SUM(totalPages) AS TotalPages
    , CAST(SUM(queriedPages) AS FLOAT) / NULLIF(SUM(totalPages), 0) AS '%DataUtil' -- NULLIF prevents divide-by-zero
FROM (
    SELECT databaseName
        , tableName
        , indexID
        , CASE -- If we have any activity at all on an index, count it as activity
            WHEN activity = 0 THEN 0.0
            ELSE totalPages
          END AS queriedPages
        , totalPages
    FROM #Results
    WHERE databaseName NOT IN ('master', 'tempdb', 'msdb', 'model')
) x
GROUP BY databaseName
ORDER BY databaseName;
 
/* Retrieve % content utilization, which is performed at the table level */
SELECT databaseName
    , SUM(queriedPages) AS TotalQueriedPages
    , SUM(totalPages) AS TotalPages
    , CAST(SUM(queriedPages) AS FLOAT) / NULLIF(SUM(totalPages), 0) AS '%ContentUtil' -- NULLIF prevents divide-by-zero
FROM (
    SELECT databaseName
        , tableName
        , MAX(records) AS records
        , CASE WHEN SUM(activity) > 0 THEN SUM(totalPages) ELSE 0 END AS queriedPages
        , SUM(totalPages) AS totalPages
    FROM #Results
    WHERE databaseName NOT IN ('master', 'tempdb', 'msdb', 'model')
    GROUP BY databaseName
        , tableName
) x
GROUP BY databaseName
ORDER BY databaseName;

Results:

databaseName               TotalQueriedPages   TotalPages           %DataUtil
-------------------------- ------------------- -------------------- ----------------------
Database1 		   127618701           130607247            0.969619893356378
Database2 		   567188              1614958              0.351209133612143
Database3 		   34269036            34579469             0.991022620966216
Database4 		   137970594           170733391            0.803399928206158
Database5 		   74632930            101543575            0.66909214627557
Database6 		   55809933            72884205             0.765734157938039
Database7 		   560810026           620609815            0.902175272517656
 
databaseName               TotalQueriedPages   TotalPages           %ContentUtil
-------------------------- ------------------- -------------------- ----------------------
Database1 		   127763715           130607247            0.970721679051682
Database2 		   571125              1614958              0.353646967908763
Database3 		   34269036            34579469             0.991022620966216
Database4 		   137970921           170733391            0.803399928206158
Database5 		   96144726            101543575            0.861947682777784
Database6 		   72269666            72884205             0.991568146820268
Database7 		   620525938           620609815            0.998240279711804

The first result set examines the utilization of indexes, and the second result set examines the utilization of data at the content (table) level. For example, if we look at Database6, we’ll see that we are only utilizing 77% of our indexes, but we’re looking at 99% of our table data. So this is a good indicator that we have some unused indexes to clean up in that database.
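
As a next step, the #Results table built by the script above makes it easy to list those cleanup candidates; this rough sketch returns the indexes with no recorded activity, largest first:

SELECT databaseName
    , tableName
    , indexID
    , totalPages
FROM #Results
WHERE activity = 0
    AND databaseName NOT IN ('master', 'tempdb', 'msdb', 'model')
ORDER BY totalPages DESC;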

Know a better way to answer this question using SQL Server DMVs? Please leave me a comment so I can learn from your experience. :)

In unrelated news, this may be my last blog post for a little while. I’m due with my second child a week from today and expect all of my free time to be consumed by him for a little while. That and, quite frankly, I do not trust myself near a computer, especially a database, in such a sleep-deprived state. :)

Index Defrag Script, v4.1

It’s been quite some time since my last index defrag script update. A big part of the reason is that I wanted to implement many of the suggestions I’ve received, but I just haven’t had the time. I still have those changes planned, but I’m not sure quite when I’ll get to them. Rather than continue to wait for a major release, I’m releasing a small update to my defrag script that takes care of the most common complaints I receive.

Change Log:

  • Bug fix for databases containing spaces or special characters
  • Support for case-sensitive databases
  • Re-executable CREATE script (for those who want to re-run the whole script)
  • Comma-delimited list of databases is now supported for the @database parameter

Feature List:

  • Defrag a single database, a list of databases, or all databases (@database)
  • Time Limitations: stop defragging after the specified amount of time has elapsed (@timeLimit). Please note: it will not kill a defrag that is currently in progress, even if it exceeds the time limit.
  • Optional stop-and-resume functionality: pick up where your defrag last left off without having to rescan sys.dm_db_index_physical_stats. (@forceRescan)
  • Defrag scheduling: choose which days to defrag certain indexes, or exclude certain indexes altogether, by using the dbo.dba_indexDefragExclusion table (see the example after this list).
  • Defrag priority: choose whether to defrag indexes in ascending or descending order by range_scan_count (default), fragmentation, or page_count.
  • Current partition exclusion: choose whether or not to exclude the right-most populated partition from the defrag process, common for sliding-window tables (@excludeMaxPartition)
  • Commands-only mode: Choose to just log the current defrag status and print the defrag commands, rather than executing them, by using @executeSQL.
  • … and tons more! Please read the parameter list and notes section for details of all the options available.
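
For example, a scheduling entry might look like the following sketch; the ID and name values are hypothetical (pull the real ones from sys.databases, sys.objects, and sys.indexes), and the exclusionMask of 65 (1 + 64) skips the index on Sundays and Saturdays:

INSERT INTO dbo.dba_indexDefragExclusion
    (databaseID, databaseName, objectID, objectName, indexID, indexName, exclusionMask)
VALUES
    (5, 'AdventureWorks', 1509580416, 'SalesOrderDetail', 1, 'PK_SalesOrderDetail', 65);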

FAQ:

I often receive the same questions about this script, so allow me to answer them here:

“I keep running the script, but my index is still fragmented. Why?”
This is most likely a very small index. Here’s what Microsoft has to say:

“In general, fragmentation on small indexes is often not controllable. The pages of small indexes are stored on mixed extents. Mixed extents are shared by up to eight objects, so the fragmentation in a small index might not be reduced after reorganizing or rebuilding the index. For more information about mixed extents, see Understanding Pages and Extents.”

“What database should I create it in?” or “Can I create this in the MASTER database?”
It’s up to you where you create it. You could technically create it in the MASTER database, but I recommend creating a utility database for your DBA administrative tasks.

“Can I run this against a SharePoint database?”
Yes, you can.

“What are the minimum requirements to run this script?” or “Will this run on SQL Server 2000 instances?”
You need to be on SQL Server 2005 SP2 or higher.

Special thanks to Richard Yanger for his assistance with beta testing. :)

You can download a text file of this script here: dba_indexDefrag_sp_v41

/*** Scroll down to see important notes, disclaimers, and licensing information ***/
 
/* Let's create our parsing function... */
IF EXISTS ( SELECT  [object_id]
            FROM    sys.objects
            WHERE   name = 'dba_parseString_udf' )
    DROP FUNCTION dbo.dba_parseString_udf;
GO
 
CREATE FUNCTION dbo.dba_parseString_udf
(
          @stringToParse VARCHAR(8000)  
        , @delimiter     CHAR(1)
)
RETURNS @parsedString TABLE (stringValue VARCHAR(128))
AS
/*********************************************************************************
    Name:       dba_parseString_udf
 
    Author:     Michelle Ufford, http://sqlfool.com
 
    Purpose:    This function parses string input using a variable delimiter.
 
    Notes:      Two common delimiter values are space (' ') and comma (',')
 
    Date        Initials    Description
    ----------------------------------------------------------------------------
    2011-05-20  MFU         Initial Release
*********************************************************************************
Usage: 		
    SELECT *
    FROM dba_parseString_udf(<string>, <delimiter>);
 
Test Cases:
 
    1.  multiple strings separated by space
        SELECT * FROM dbo.dba_parseString_udf('  aaa  bbb  ccc ', ' ');
 
    2.  multiple strings separated by comma
        SELECT * FROM dbo.dba_parseString_udf(',aaa,bbb,,,ccc,', ',');
*********************************************************************************/
BEGIN
 
    /* Declare variables */
    DECLARE @trimmedString  VARCHAR(8000);
 
    /* We need to trim our string input in case the user entered extra spaces */
    SET @trimmedString = LTRIM(RTRIM(@stringToParse));
 
    /* Let's create a recursive CTE to break down our string for us */
    WITH parseCTE (StartPos, EndPos)
    AS
    (
        SELECT 1 AS StartPos
            , CHARINDEX(@delimiter, @trimmedString + @delimiter) AS EndPos
        UNION ALL
        SELECT EndPos + 1 AS StartPos
            , CHARINDEX(@delimiter, @trimmedString + @delimiter , EndPos + 1) AS EndPos
        FROM parseCTE
        WHERE CHARINDEX(@delimiter, @trimmedString + @delimiter, EndPos + 1) <> 0
    )
 
    /* Let's take the results and stick it in a table */  
    INSERT INTO @parsedString
    SELECT SUBSTRING(@trimmedString, StartPos, EndPos - StartPos)
    FROM parseCTE
    WHERE LEN(LTRIM(RTRIM(SUBSTRING(@trimmedString, StartPos, EndPos - StartPos)))) > 0
    OPTION (MaxRecursion 8000);
 
    RETURN;   
END
GO
 
/* First, we need to take care of schema updates, in case you have a legacy 
   version of the script installed */
DECLARE @indexDefragLog_rename      VARCHAR(128)
  , @indexDefragExclusion_rename    VARCHAR(128)
  , @indexDefragStatus_rename       VARCHAR(128);
 
SELECT  @indexDefragLog_rename = 'dba_indexDefragLog_obsolete_' + CONVERT(VARCHAR(10), GETDATE(), 112)
      , @indexDefragExclusion_rename = 'dba_indexDefragExclusion_obsolete_' + CONVERT(VARCHAR(10), GETDATE(), 112);
 
IF EXISTS ( SELECT  [object_id]
            FROM    sys.indexes
            WHERE   name = 'PK_indexDefragLog' ) 
    EXECUTE sp_rename dba_indexDefragLog, @indexDefragLog_rename;
 
IF EXISTS ( SELECT  [object_id]
            FROM    sys.indexes
            WHERE   name = 'PK_indexDefragExclusion' ) 
    EXECUTE sp_rename dba_indexDefragExclusion, @indexDefragExclusion_rename;
 
IF NOT EXISTS ( SELECT  [object_id]
                FROM    sys.indexes
                WHERE   name = 'PK_indexDefragLog_v40' )
BEGIN
 
    CREATE TABLE dbo.dba_indexDefragLog
    (
         indexDefrag_id     INT IDENTITY(1, 1)  NOT NULL
       , databaseID         INT                 NOT NULL
       , databaseName       NVARCHAR(128)       NOT NULL
       , objectID           INT                 NOT NULL
       , objectName         NVARCHAR(128)       NOT NULL
       , indexID            INT                 NOT NULL
       , indexName          NVARCHAR(128)       NOT NULL
       , partitionNumber    SMALLINT            NOT NULL
       , fragmentation      FLOAT               NOT NULL
       , page_count         INT                 NOT NULL
       , dateTimeStart      DATETIME            NOT NULL
       , dateTimeEnd        DATETIME            NULL
       , durationSeconds    INT                 NULL
       , sqlStatement       VARCHAR(4000)       NULL
       , errorMessage       VARCHAR(1000)       NULL 
 
        CONSTRAINT PK_indexDefragLog_v40 
            PRIMARY KEY CLUSTERED (indexDefrag_id)
    );
 
    PRINT 'dba_indexDefragLog Table Created';
 
END
 
IF NOT EXISTS ( SELECT  [object_id]
                FROM    sys.indexes
                WHERE   name = 'PK_indexDefragExclusion_v40' )
BEGIN
 
    CREATE TABLE dbo.dba_indexDefragExclusion
    (
         databaseID         INT             NOT NULL
       , databaseName       NVARCHAR(128)   NOT NULL
       , objectID           INT             NOT NULL
       , objectName         NVARCHAR(128)   NOT NULL
       , indexID            INT             NOT NULL
       , indexName          NVARCHAR(128)   NOT NULL
       , exclusionMask      INT             NOT NULL
            /* 1=Sunday, 2=Monday, 4=Tuesday, 8=Wednesday, 16=Thursday, 32=Friday, 64=Saturday */
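            /* e.g. 1 + 64 = 65 excludes an index on Sundays and Saturdays */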
 
         CONSTRAINT PK_indexDefragExclusion_v40 
            PRIMARY KEY CLUSTERED (databaseID, objectID, indexID)
    );
 
    PRINT 'dba_indexDefragExclusion Table Created';
 
END
 
IF NOT EXISTS ( SELECT  [object_id]
                FROM    sys.indexes
                WHERE   name = 'PK_indexDefragStatus_v40' )
BEGIN
 
    CREATE TABLE dbo.dba_indexDefragStatus
    (
         databaseID         INT             NOT NULL
       , databaseName       NVARCHAR(128)   NOT NULL
       , objectID           INT             NOT NULL
       , indexID            INT             NOT NULL
       , partitionNumber    SMALLINT        NOT NULL
       , fragmentation      FLOAT           NOT NULL
       , page_count         INT             NOT NULL
       , range_scan_count   BIGINT          NOT NULL
       , schemaName         NVARCHAR(128)   NULL
       , objectName         NVARCHAR(128)   NULL
       , indexName          NVARCHAR(128)   NULL
       , scanDate           DATETIME        NOT NULL
       , defragDate         DATETIME        NULL
       , printStatus        BIT DEFAULT (0) NOT NULL
       , exclusionMask      INT DEFAULT (0) NOT NULL
 
        CONSTRAINT PK_indexDefragStatus_v40 
            PRIMARY KEY CLUSTERED (databaseID, objectID, indexID, partitionNumber)
    );
 
    PRINT 'dba_indexDefragStatus Table Created';
 
END;
 
IF OBJECTPROPERTY(OBJECT_ID('dbo.dba_indexDefrag_sp'), N'IsProcedure') = 1 
    BEGIN
        DROP PROCEDURE dbo.dba_indexDefrag_sp;
        PRINT 'Procedure dba_indexDefrag_sp dropped';
    END;
GO
 
CREATE PROCEDURE dbo.dba_indexDefrag_sp
 
    /* Declare Parameters */
    @minFragmentation       FLOAT               = 10.0  
        /* in percent, will not defrag if fragmentation less than specified */
  , @rebuildThreshold       FLOAT               = 30.0  
        /* in percent, greater than @rebuildThreshold will result in rebuild instead of reorg */
  , @executeSQL             BIT                 = 1     
        /* 1 = execute; 0 = print command only */
  , @defragOrderColumn      NVARCHAR(20)        = 'range_scan_count'
        /* Valid options are: range_scan_count, fragmentation, page_count */
  , @defragSortOrder        NVARCHAR(4)         = 'DESC'
        /* Valid options are: ASC, DESC */
  , @timeLimit              INT                 = 720 /* defaulted to 12 hours */
        /* Optional time limitation; expressed in minutes */
  , @database               VARCHAR(128)        = NULL
        /* Option to specify one or more database names, separated by commas; NULL will return all */
  , @tableName              VARCHAR(4000)       = NULL  -- databaseName.schema.tableName
        /* Option to specify a table name; null will return all */
  , @forceRescan            BIT                 = 0
        /* Whether or not to force a rescan of indexes; 1 = force, 0 = use existing scan, if available */
  , @scanMode               VARCHAR(10)         = N'LIMITED'
        /* Options are LIMITED, SAMPLED, and DETAILED */
  , @minPageCount           INT                 = 8 
        /*  MS recommends > 1 extent (8 pages) */
  , @maxPageCount           INT                 = NULL
        /* NULL = no limit */
  , @excludeMaxPartition    BIT                 = 0
        /* 1 = exclude right-most populated partition; 0 = do not exclude; see notes for caveats */
  , @onlineRebuild          BIT                 = 1     
        /* 1 = online rebuild; 0 = offline rebuild; only in Enterprise */
  , @sortInTempDB           BIT                 = 1
        /* 1 = perform sort operation in TempDB; 0 = perform sort operation in the index's database */
  , @maxDopRestriction      TINYINT             = NULL
        /* Option to restrict the number of processors for the operation; only in Enterprise */
  , @printCommands          BIT                 = 0     
        /* 1 = print commands; 0 = do not print commands */
  , @printFragmentation     BIT                 = 0
        /* 1 = print fragmentation prior to defrag; 
           0 = do not print */
  , @defragDelay            CHAR(8)             = '00:00:05'
        /* time to wait between defrag commands */
  , @debugMode              BIT                 = 0
        /* display some useful comments to help determine if/WHERE issues occur */
AS /*********************************************************************************
    Name:       dba_indexDefrag_sp
 
    Author:     Michelle Ufford, http://sqlfool.com
 
    Purpose:    Defrags one or more indexes for one or more databases
 
    Notes:
 
    CAUTION: TRANSACTION LOG SIZE SHOULD BE MONITORED CLOSELY WHEN DEFRAGMENTING.
             DO NOT RUN UNATTENDED ON LARGE DATABASES DURING BUSINESS HOURS.
 
      @minFragmentation     defaulted to 10%, will not defrag if fragmentation 
                            is less than that
 
      @rebuildThreshold     defaulted to 30% as recommended by Microsoft in BOL;
                            greater than 30% will result in rebuild instead
 
      @executeSQL           1 = execute the SQL generated by this proc; 
                            0 = print command only
 
      @defragOrderColumn    Defines how to prioritize the order of defrags.  Only
                            used if @executeSQL = 1.  
                            Valid options are: 
                            range_scan_count = count of range and table scans on the
                                               index; in general, this is what benefits 
                                               the most from defragmentation
                            fragmentation    = amount of fragmentation in the index;
                                               the higher the number, the worse it is
                            page_count       = number of pages in the index; affects
                                               how long it takes to defrag an index
 
      @defragSortOrder      The sort order of the ORDER BY clause.
                            Valid options are ASC (ascending) or DESC (descending).
 
      @timeLimit            Optional, limits how much time can be spent performing 
                            index defrags; expressed in minutes.
 
                            NOTE: The time limit is checked BEFORE an index defrag
                                  is begun, thus a long index defrag can exceed the
                                  time limitation.
 
      @database             Optional, specify specific database name to defrag;
                            If not specified, all non-system databases will
                            be defragged.
 
      @tableName            Specify if you only want to defrag indexes for a 
                            specific table, format = databaseName.schema.tableName;
                            if not specified, all tables will be defragged.
 
      @forceRescan          Whether or not to force a rescan of indexes.  If set
                            to 0, a rescan will not occur until all indexes have
                            been defragged.  This can span multiple executions.
                            1 = force a rescan
                            0 = use previous scan, if there are indexes left to defrag
 
      @scanMode             Specifies which scan mode to use to determine
                            fragmentation levels.  Options are:
                            LIMITED - scans the parent level; quickest mode,
                                      recommended for most cases.
                            SAMPLED - samples 1% of all data pages; if less than
                                      10k pages, performs a DETAILED scan.
                            DETAILED - scans all data pages.  Use great care with
                                        this mode, as it can cause performance issues.
 
      @minPageCount         Specifies how many pages must exist in an index in order 
                            to be considered for a defrag.  Defaulted to 8 pages, as 
                            Microsoft recommends only defragging indexes with more 
                            than 1 extent (8 pages).  
 
                            NOTE: The @minPageCount will restrict the indexes that
                            are stored in dba_indexDefragStatus table.
 
      @maxPageCount         Specifies the maximum number of pages that can exist in 
                            an index and still be considered for a defrag.  Useful
                            for scheduling small indexes during business hours and
                            large indexes for non-business hours.
 
                            NOTE: The @maxPageCount will restrict the indexes that
                            are defragged during the current operation; it will not
                            prevent indexes from being stored in the 
                            dba_indexDefragStatus table.  This way, a single scan
                            can support multiple page count thresholds.
 
      @excludeMaxPartition  If an index is partitioned, this option specifies whether
                            to exclude the right-most populated partition.  Typically,
                            this is the partition that is currently being written to in
                            a sliding-window scenario.  Enabling this feature may reduce
                            contention.  This may not be applicable in other types of 
                            partitioning scenarios.  Non-partitioned indexes are 
                            unaffected by this option.
                            1 = exclude right-most populated partition
                            0 = do not exclude
 
      @onlineRebuild        1 = online rebuild; 
                            0 = offline rebuild
 
      @sortInTempDB         Specifies whether to defrag the index in TEMPDB or in the
                            database the index belongs to.  Enabling this option may
                            result in faster defrags and prevent database file size 
                            inflation.
                            1 = perform sort operation in TempDB
                            0 = perform sort operation in the index's database 
 
      @maxDopRestriction    Option to specify a processor limit for index rebuilds
 
      @printCommands        1 = print commands to screen; 
                            0 = do not print commands
 
      @printFragmentation   1 = print fragmentation to screen;
                            0 = do not print fragmentation
 
      @defragDelay          Time to wait between defrag commands; gives the
                            server a little time to catch up 
 
      @debugMode            1 = display debug comments; helps with troubleshooting
                            0 = do not display debug comments
 
    Called by:  SQL Agent Job or DBA
 
    ----------------------------------------------------------------------------
    DISCLAIMER: 
    This code and information are provided "AS IS" without warranty of any kind,
    either expressed or implied, including but not limited to the implied 
    warranties of merchantability and/or fitness for a particular purpose.
    ----------------------------------------------------------------------------
    LICENSE: 
    This index defrag script is free to download and use for personal, educational, 
    and internal corporate purposes, provided that this header is preserved. 
    Redistribution or sale of this index defrag script, in whole or in part, is 
    prohibited without the author's express written consent.
    ----------------------------------------------------------------------------
    Date        Initials	Version Description
    ----------------------------------------------------------------------------
    2007-12-18  MFU         1.0     Initial Release
    2008-10-17  MFU         1.1     Added @defragDelay, CIX_temp_indexDefragList
    2008-11-17  MFU         1.2     Added page_count to log table
                                    , added @printFragmentation option
    2009-03-17  MFU         2.0     Provided support for centralized execution
                                    , consolidated Enterprise & Standard versions
                                    , added @debugMode, @maxDopRestriction
                                    , modified LOB and partition logic  
    2009-06-18  MFU         3.0     Fixed bug in LOB logic, added @scanMode option
                                    , added support for stat rebuilds (@rebuildStats)
                                    , support model and msdb defrag
                                    , added columns to the dba_indexDefragLog table
                                    , modified logging to show "in progress" defrags
                                    , added defrag exclusion list (scheduling)
    2009-08-28  MFU         3.1     Fixed read_only bug for database lists
    2010-04-20  MFU         4.0     Added time limit option
                                    , added static table with rescan logic
                                    , added parameters for page count & SORT_IN_TEMPDB
                                    , added try/catch logic and additional debug options
                                    , added options for defrag prioritization
                                    , fixed bug for indexes with allow_page_lock = off
                                    , added option to exclude right-most partition
                                    , removed @rebuildStats option
                                    , refer to http://sqlfool.com for full release notes
    2011-04-28  MFU         4.1     Bug fixes for databases requiring []
                                    , cleaned up the create table section
                                    , updated syntax for case-sensitive databases
                                    , comma-delimited list for @database now supported
*********************************************************************************
    Example of how to call this script:
 
        EXECUTE dbo.dba_indexDefrag_sp
              @executeSQL           = 1
            , @printCommands        = 1
            , @debugMode            = 1
            , @printFragmentation   = 1
            , @forceRescan          = 1
            , @maxDopRestriction    = 1
            , @minPageCount         = 8
            , @maxPageCount         = NULL
            , @minFragmentation     = 1
            , @rebuildThreshold     = 30
            , @defragDelay          = '00:00:05'
            , @defragOrderColumn    = 'page_count'
            , @defragSortOrder      = 'DESC'
            , @excludeMaxPartition  = 1
            , @timeLimit            = NULL
            , @database             = 'sandbox,sandbox_caseSensitive';
*********************************************************************************/																
SET NOCOUNT ON;
SET XACT_ABORT ON;
SET QUOTED_IDENTIFIER ON;
 
BEGIN
 
    BEGIN TRY
 
        /* Just a little validation... */
        IF @minFragmentation IS NULL 
            OR @minFragmentation NOT BETWEEN 0.00 AND 100.0
                SET @minFragmentation = 10.0;
 
        IF @rebuildThreshold IS NULL
            OR @rebuildThreshold NOT BETWEEN 0.00 AND 100.0
                SET @rebuildThreshold = 30.0;
 
        IF @defragDelay NOT LIKE '00:[0-5][0-9]:[0-5][0-9]'
            SET @defragDelay = '00:00:05';
 
        IF @defragOrderColumn IS NULL
            OR @defragOrderColumn NOT IN ('range_scan_count', 'fragmentation', 'page_count')
                SET @defragOrderColumn = 'range_scan_count';
 
        IF @defragSortOrder IS NULL
            OR @defragSortOrder NOT IN ('ASC', 'DESC')
                SET @defragSortOrder = 'DESC';
 
        IF @scanMode NOT IN ('LIMITED', 'SAMPLED', 'DETAILED')
            SET @scanMode = 'LIMITED';
 
        IF @debugMode IS NULL
            SET @debugMode = 0;
 
        IF @forceRescan IS NULL
            SET @forceRescan = 0;
 
        IF @sortInTempDB IS NULL
            SET @sortInTempDB = 1;
 
 
        IF @debugMode = 1 RAISERROR('Undusting the cogs and starting up...', 0, 42) WITH NOWAIT;
 
        /* Declare our variables */
        DECLARE   @objectID                 INT
                , @databaseID               INT
                , @databaseName             NVARCHAR(128)
                , @indexID                  INT
                , @partitionCount           BIGINT
                , @schemaName               NVARCHAR(128)
                , @objectName               NVARCHAR(128)
                , @indexName                NVARCHAR(128)
                , @partitionNumber          SMALLINT
                , @fragmentation            FLOAT
                , @pageCount                INT
                , @sqlCommand               NVARCHAR(4000)
                , @rebuildCommand           NVARCHAR(200)
                , @datetimestart            DATETIME
                , @dateTimeEnd              DATETIME
                , @containsLOB              BIT
                , @editionCheck             BIT
                , @debugMessage             NVARCHAR(4000)
                , @updateSQL                NVARCHAR(4000)
                , @partitionSQL             NVARCHAR(4000)
                , @partitionSQL_Param       NVARCHAR(1000)
                , @LOB_SQL                  NVARCHAR(4000)
                , @LOB_SQL_Param            NVARCHAR(1000)
                , @indexDefrag_id           INT
                , @startdatetime            DATETIME
                , @enddatetime              DATETIME
                , @getIndexSQL              NVARCHAR(4000)
                , @getIndexSQL_Param        NVARCHAR(4000)
                , @allowPageLockSQL         NVARCHAR(4000)
                , @allowPageLockSQL_Param   NVARCHAR(4000)
                , @allowPageLocks           INT
                , @excludeMaxPartitionSQL   NVARCHAR(4000);
 
        /* Initialize our variables */
        SELECT @startdatetime = GETDATE()
            , @enddatetime = DATEADD(minute, @timeLimit, GETDATE());
 
        /* Create our temporary tables */
        CREATE TABLE #databaseList
        (
              databaseID        INT
            , databaseName      VARCHAR(128)
            , scanStatus        BIT
        );
 
        CREATE TABLE #processor 
        (
              [index]           INT
            , Name              VARCHAR(128)
            , Internal_Value    INT
            , Character_Value   INT
        );
 
        CREATE TABLE #maxPartitionList
        (
              databaseID        INT
            , objectID          INT
            , indexID           INT
            , maxPartition      INT
        );
 
        IF @debugMode = 1 RAISERROR('Beginning validation...', 0, 42) WITH NOWAIT;
 
        /* Make sure we're not exceeding the number of processors we have available */
        INSERT INTO #processor
        EXECUTE xp_msver 'ProcessorCount';
 
        IF @maxDopRestriction IS NOT NULL AND @maxDopRestriction > (SELECT Internal_Value FROM #processor)
            SELECT @maxDopRestriction = Internal_Value
            FROM #processor;
 
        /* Check our server version; 1804890536 = Enterprise, 610778273 = Enterprise Evaluation, -2117995310 = Developer */
        IF (SELECT ServerProperty('EditionID')) IN (1804890536, 610778273, -2117995310) 
            SET @editionCheck = 1 -- supports online rebuilds
        ELSE
            SET @editionCheck = 0; -- does not support online rebuilds
 
        /* Output the parameters we're working with */
        IF @debugMode = 1 
        BEGIN
 
            SELECT @debugMessage = 'Your SELECTed parameters are... 
            Defrag indexes WITH fragmentation greater than ' + CAST(@minFragmentation AS VARCHAR(10)) + ';
            REBUILD indexes WITH fragmentation greater than ' + CAST(@rebuildThreshold AS VARCHAR(10)) + ';
            You' + CASE WHEN @executeSQL = 1 THEN ' DO' ELSE ' DO NOT' END + ' want the commands to be executed automatically; 
            You want to defrag indexes in ' + @defragSortOrder + ' order of the ' + UPPER(@defragOrderColumn) + ' value;
            You have' + CASE WHEN @timeLimit IS NULL THEN ' NOT specified a time limit;' ELSE ' specified a time limit of ' 
                + CAST(@timeLimit AS VARCHAR(10)) + ' minutes;' END + '
            ' + CASE WHEN @database IS NULL THEN 'ALL databases' ELSE 'The ' + @database + ' database(s)' END + ' will be defragged;
            ' + CASE WHEN @tableName IS NULL THEN 'ALL tables' ELSE 'The ' + @tableName + ' TABLE' END + ' will be defragged;
            We' + CASE WHEN EXISTS(SELECT Top 1 * FROM dbo.dba_indexDefragStatus WHERE defragDate IS NULL)
                AND @forceRescan <> 1 THEN ' WILL NOT' ELSE ' WILL' END + ' be rescanning indexes;
            The scan will be performed in ' + @scanMode + ' mode;
            You want to limit defrags to indexes with' + CASE WHEN @maxPageCount IS NULL THEN ' more than ' 
                + CAST(@minPageCount AS VARCHAR(10)) ELSE
                ' BETWEEN ' + CAST(@minPageCount AS VARCHAR(10))
                + ' AND ' + CAST(@maxPageCount AS VARCHAR(10)) END + ' pages;
            Indexes will be defragged' + CASE WHEN @editionCheck = 0 OR @onlineRebuild = 0 THEN ' OFFLINE;' ELSE ' ONLINE;' END + '
            Indexes will be sorted in' + CASE WHEN @sortInTempDB = 0 THEN ' the DATABASE;' ELSE ' TEMPDB;' END + '
            Defrag operations will utilize ' + CASE WHEN @editionCheck = 0 OR @maxDopRestriction IS NULL 
                THEN 'system defaults for processors;' 
                ELSE CAST(@maxDopRestriction AS VARCHAR(2)) + ' processors;' END + '
            You' + CASE WHEN @printCommands = 1 THEN ' DO' ELSE ' DO NOT' END + ' want to PRINT the ALTER INDEX commands; 
            You' + CASE WHEN @printFragmentation = 1 THEN ' DO' ELSE ' DO NOT' END + ' want to OUTPUT fragmentation levels; 
            You want to wait ' + @defragDelay + ' (hh:mm:ss) BETWEEN defragging indexes;
            You want to run in' + CASE WHEN @debugMode = 1 THEN ' DEBUG' ELSE ' SILENT' END + ' mode.';
 
            RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
        END;
 
        IF @debugMode = 1 RAISERROR('Grabbing a list of our databases...', 0, 42) WITH NOWAIT;
 
        /* Retrieve the list of databases to investigate */
        /* If @database is NULL, it means we want to defrag *all* databases */
        IF @database IS NULL
        BEGIN
 
            INSERT INTO #databaseList
            SELECT database_id
                , name
                , 0 -- not scanned yet for fragmentation
            FROM sys.databases
            WHERE [name] NOT IN ('master', 'tempdb') -- exclude system databases
                AND [state] = 0 -- state must be ONLINE
                AND is_read_only = 0;  -- cannot be read_only
 
        END;
        ELSE
        /* Otherwise, we're going to just defrag our list of databases */
        BEGIN
 
            INSERT INTO #databaseList
            SELECT database_id
                , name
                , 0 -- not scanned yet for fragmentation
            FROM sys.databases AS d
            JOIN dbo.dba_parseString_udf(@database, ',') AS x
                ON d.name = x.stringValue
            WHERE [name] NOT IN ('master', 'tempdb') -- exclude system databases
                AND [state] = 0 -- state must be ONLINE
                AND is_read_only = 0;  -- cannot be read_only
 
        END; 
 
        /* Check to see IF we have indexes in need of defrag; otherwise, re-scan the database(s) */
        IF NOT EXISTS(SELECT Top 1 * FROM dbo.dba_indexDefragStatus WHERE defragDate IS NULL)
            OR @forceRescan = 1
        BEGIN
 
            /* Truncate our list of indexes to prepare for a new scan */
            TRUNCATE TABLE dbo.dba_indexDefragStatus;
 
            IF @debugMode = 1 RAISERROR('Looping through our list of databases and checking for fragmentation...', 0, 42) WITH NOWAIT;
 
            /* Loop through our list of databases */
            WHILE (SELECT COUNT(*) FROM #databaseList WHERE scanStatus = 0) > 0
            BEGIN
 
                SELECT Top 1 @databaseID = databaseID
                FROM #databaseList
                WHERE scanStatus = 0;
 
                SELECT @debugMessage = '  working on ' + DB_NAME(@databaseID) + '...';
 
                IF @debugMode = 1
                    RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
               /* Determine which indexes to defrag using our user-defined parameters */
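                /* Note: @scanMode is passed straight to sys.dm_db_index_physical_stats below;
                   LIMITED is the fastest (and least detailed) option, while SAMPLED and
                   DETAILED read progressively more pages for more accurate results. */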
                INSERT INTO dbo.dba_indexDefragStatus
                (
                      databaseID
                    , databaseName
                    , objectID
                    , indexID
                    , partitionNumber
                    , fragmentation
                    , page_count
                    , range_scan_count
                    , scanDate
                )
                SELECT
                      ps.database_id AS 'databaseID'
                    , QUOTENAME(DB_NAME(ps.database_id)) AS 'databaseName'
                    , ps.[object_id] AS 'objectID'
                    , ps.index_id AS 'indexID'
                    , ps.partition_number AS 'partitionNumber'
                    , SUM(ps.avg_fragmentation_in_percent) AS 'fragmentation'
                    , SUM(ps.page_count) AS 'page_count'
                    , os.range_scan_count
                    , GETDATE() AS 'scanDate'
                FROM sys.dm_db_index_physical_stats(@databaseID, OBJECT_ID(@tableName), NULL , NULL, @scanMode) AS ps
                JOIN sys.dm_db_index_operational_stats(@databaseID, OBJECT_ID(@tableName), NULL , NULL) AS os
                    ON ps.database_id = os.database_id
                    AND ps.[object_id] = os.[object_id]
                    AND ps.index_id = os.index_id
                    AND ps.partition_number = os.partition_number
                WHERE avg_fragmentation_in_percent >= @minFragmentation 
                    AND ps.index_id > 0 -- ignore heaps
                    AND ps.page_count > @minPageCount 
                    AND ps.index_level = 0 -- leaf-level nodes only, supports @scanMode
                GROUP BY ps.database_id 
                    , QUOTENAME(DB_NAME(ps.database_id)) 
                    , ps.[object_id]
                    , ps.index_id 
                    , ps.partition_number 
                    , os.range_scan_count
                OPTION (MAXDOP 2);
 
                /* Do we want to exclude right-most populated partition of our partitioned indexes? */
                IF @excludeMaxPartition = 1
                BEGIN
 
                    SET @excludeMaxPartitionSQL = '
                        SELECT ' + CAST(@databaseID AS VARCHAR(10)) + ' AS [databaseID]
                            , [object_id]
                            , index_id
                            , MAX(partition_number) AS [maxPartition]
                        FROM [' + DB_NAME(@databaseID) + '].sys.partitions
                        WHERE partition_number > 1
                            AND [rows] > 0
                        GROUP BY object_id
                            , index_id;';
 
                    INSERT INTO #maxPartitionList
                    EXECUTE sp_executesql @excludeMaxPartitionSQL;
 
                END;
 
                /* Keep track of which databases have already been scanned */
                UPDATE #databaseList
                SET scanStatus = 1
                WHERE databaseID = @databaseID;
 
            END
 
            /* We don't want to defrag the right-most populated partition, so
               delete any records for partitioned indexes where partition = MAX(partition) */
            IF @excludeMaxPartition = 1
            BEGIN
 
                DELETE ids
                FROM dbo.dba_indexDefragStatus AS ids
                JOIN #maxPartitionList AS mpl
                    ON ids.databaseID = mpl.databaseID
                    AND ids.objectID = mpl.objectID
                    AND ids.indexID = mpl.indexID
                    AND ids.partitionNumber = mpl.maxPartition;
 
            END;
 
            /* Update our exclusion mask for any index that has a restriction ON the days it can be defragged */
            UPDATE ids
            SET ids.exclusionMask = ide.exclusionMask
            FROM dbo.dba_indexDefragStatus AS ids
            JOIN dbo.dba_indexDefragExclusion AS ide
                ON ids.databaseID = ide.databaseID
                AND ids.objectID = ide.objectID
                AND ids.indexID = ide.indexID;
 
        END
 
        SELECT @debugMessage = 'Looping through our list... there are ' + CAST(COUNT(*) AS VARCHAR(10)) + ' indexes to defrag!'
        FROM dbo.dba_indexDefragStatus
        WHERE defragDate IS NULL
            AND page_count BETWEEN @minPageCount AND ISNULL(@maxPageCount, page_count);
 
        IF @debugMode = 1 RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
        /* Begin our loop for defragging */
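        /* Note: exclusionMask is a bitmask of weekdays, where bit 0 = weekday 1, bit 1 = weekday 2,
           and so on; which day of the week maps to weekday 1 depends on your DATEFIRST setting.
           With the U.S. English default (DATEFIRST 7), a mask of 65 (bits 0 and 6) would
           exclude an index from defrags on Sunday and Saturday. */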
        WHILE (SELECT COUNT(*) 
               FROM dbo.dba_indexDefragStatus 
               WHERE (
                           (@executeSQL = 1 AND defragDate IS NULL) 
                        OR (@executeSQL = 0 AND defragDate IS NULL AND printStatus = 0)
                     )
                AND exclusionMask & POWER(2, DATEPART(weekday, GETDATE())-1) = 0
                AND page_count BETWEEN @minPageCount AND ISNULL(@maxPageCount, page_count)) > 0
        BEGIN
 
            /* Check to see IF we need to exit our loop because of our time limit */        
            IF ISNULL(@enddatetime, GETDATE()) < GETDATE()
            BEGIN
                RAISERROR('Our time limit has been exceeded!', 11, 42) WITH NOWAIT;
            END;
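            /* Note: severity 11 is an error, not an informational message, so control
               transfers to our CATCH block; that's how we exit the loop. */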
 
            IF @debugMode = 1 RAISERROR('  Picking an index to beat into shape...', 0, 42) WITH NOWAIT;
 
            /* Grab the index with the highest priority, based on the values submitted; 
               Look at the exclusion mask to ensure it can be defragged today */
            SET @getIndexSQL = N'
            SELECT TOP 1 
                  @objectID_Out         = objectID
                , @indexID_Out          = indexID
                , @databaseID_Out       = databaseID
                , @databaseName_Out     = databaseName
                , @fragmentation_Out    = fragmentation
                , @partitionNumber_Out  = partitionNumber
                , @pageCount_Out        = page_count
            FROM dbo.dba_indexDefragStatus
            WHERE defragDate IS NULL ' 
                + CASE WHEN @executeSQL = 0 THEN 'AND printStatus = 0' ELSE '' END + '
                AND exclusionMask & Power(2, DatePart(weekday, GETDATE())-1) = 0
                AND page_count BETWEEN @p_minPageCount AND ISNULL(@p_maxPageCount, page_count)
            ORDER BY ' + @defragOrderColumn + ' ' + @defragSortOrder;
 
            SET @getIndexSQL_Param = N'@objectID_Out        INT OUTPUT
                                     , @indexID_Out         INT OUTPUT
                                     , @databaseID_Out      INT OUTPUT
                                     , @databaseName_Out    NVARCHAR(128) OUTPUT
                                     , @fragmentation_Out   INT OUTPUT
                                     , @partitionNumber_Out INT OUTPUT
                                     , @pageCount_Out       INT OUTPUT
                                     , @p_minPageCount      INT
                                     , @p_maxPageCount      INT';
 
            EXECUTE sp_executesql @getIndexSQL
                , @getIndexSQL_Param
                , @p_minPageCount       = @minPageCount
                , @p_maxPageCount       = @maxPageCount
                , @objectID_Out         = @objectID         OUTPUT
                , @indexID_Out          = @indexID          OUTPUT
                , @databaseID_Out       = @databaseID       OUTPUT
                , @databaseName_Out     = @databaseName     OUTPUT
                , @fragmentation_Out    = @fragmentation    OUTPUT
                , @partitionNumber_Out  = @partitionNumber  OUTPUT
                , @pageCount_Out        = @pageCount        OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Looking up the specifics for our index...', 0, 42) WITH NOWAIT;
 
            /* Look up index information */
            SELECT @updateSQL = N'UPDATE ids
                SET schemaName = QUOTENAME(s.name)
                    , objectName = QUOTENAME(o.name)
                    , indexName = QUOTENAME(i.name)
                FROM dbo.dba_indexDefragStatus AS ids
                INNER JOIN ' + @databaseName + '.sys.objects AS o
                    ON ids.objectID = o.[object_id]
                INNER JOIN ' + @databaseName + '.sys.indexes AS i
                    ON o.[object_id] = i.[object_id]
                    AND ids.indexID = i.index_id
                INNER JOIN ' + @databaseName + '.sys.schemas AS s
                    ON o.schema_id = s.schema_id
                WHERE o.[object_id] = ' + CAST(@objectID AS VARCHAR(10)) + '
                    AND i.index_id = ' + CAST(@indexID AS VARCHAR(10)) + '
                    AND i.type > 0
                    AND ids.databaseID = ' + CAST(@databaseID AS VARCHAR(10));
 
            EXECUTE sp_executesql @updateSQL;
 
            /* Grab our object names */
            SELECT @objectName  = objectName
                , @schemaName   = schemaName
                , @indexName    = indexName
            FROM dbo.dba_indexDefragStatus
            WHERE objectID = @objectID
                AND indexID = @indexID
                AND databaseID = @databaseID;
 
            IF @debugMode = 1 RAISERROR('  Grabbing the partition COUNT...', 0, 42) WITH NOWAIT;
 
            /* Determine if the index is partitioned */
            SELECT @partitionSQL = 'SELECT @partitionCount_OUT = COUNT(*)
                                        FROM ' + @databaseName + '.sys.partitions
                                        WHERE object_id = ' + CAST(@objectID AS VARCHAR(10)) + '
                                            AND index_id = ' + CAST(@indexID AS VARCHAR(10)) + ';'
                , @partitionSQL_Param = '@partitionCount_OUT INT OUTPUT';
 
            EXECUTE sp_executesql @partitionSQL, @partitionSQL_Param, @partitionCount_OUT = @partitionCount OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Seeing IF there are any LOBs to be handled...', 0, 42) WITH NOWAIT;
 
            /* Determine if the table contains LOBs */
            SELECT @LOB_SQL = ' SELECT @containsLOB_OUT = COUNT(*)
                                FROM ' + @databaseName + '.sys.columns WITH (NoLock) 
                                WHERE [object_id] = ' + CAST(@objectID AS VARCHAR(10)) + '
                                   AND (system_type_id IN (34, 35, 99)
                                            OR max_length = -1);'
                                /*  system_type_id --> 34 = IMAGE, 35 = TEXT, 99 = NTEXT
                                    max_length = -1 --> VARBINARY(MAX), VARCHAR(MAX), NVARCHAR(MAX), XML */
                    , @LOB_SQL_Param = '@containsLOB_OUT INT OUTPUT';
 
            EXECUTE sp_executesql @LOB_SQL, @LOB_SQL_Param, @containsLOB_OUT = @containsLOB OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Checking for indexes that do NOT allow page locks...', 0, 42) WITH NOWAIT;
 
            /* Determine if page locks are allowed; for those indexes, we need to always REBUILD */
            SELECT @allowPageLockSQL = 'SELECT @allowPageLocks_OUT = COUNT(*)
                                        FROM ' + @databaseName + '.sys.indexes
                                        WHERE object_id = ' + CAST(@objectID AS VARCHAR(10)) + '
                                            AND index_id = ' + CAST(@indexID AS VARCHAR(10)) + '
                                            AND Allow_Page_Locks = 0;'
                , @allowPageLockSQL_Param = '@allowPageLocks_OUT INT OUTPUT';
 
            EXECUTE sp_executesql @allowPageLockSQL, @allowPageLockSQL_Param, @allowPageLocks_OUT = @allowPageLocks OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Building our SQL statements...', 0, 42) WITH NOWAIT;
 
            /* IF there's not a lot of fragmentation, or if we have a LOB, we should REORGANIZE */
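            /* Remember: @allowPageLocks holds a COUNT of indexes that disallow page locks,
               so a value of 0 means page locks ARE allowed, which REORGANIZE requires */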
            IF (@fragmentation < @rebuildThreshold OR @containsLOB >= 1 OR @partitionCount > 1)
                AND @allowPageLocks = 0
            BEGIN
 
                SET @sqlCommand = N'ALTER INDEX ' + @indexName + N' ON ' + @databaseName + N'.' 
                                    + @schemaName + N'.' + @objectName + N' REORGANIZE';
 
                /* If our index is partitioned, we should always REORGANIZE */
                IF @partitionCount > 1
                    SET @sqlCommand = @sqlCommand + N' PARTITION = ' 
                                    + CAST(@partitionNumber AS NVARCHAR(10));
 
            END
            /* If the index is heavily fragmented and doesn't contain any partitions or LOBs, 
               or if the index does not allow page locks, REBUILD it */
            ELSE IF (@fragmentation >= @rebuildThreshold OR @allowPageLocks <> 0)
                AND ISNULL(@containsLOB, 0) != 1 AND @partitionCount <= 1
            BEGIN
 
                /* Set online REBUILD options; requires Enterprise Edition */
                IF @onlineRebuild = 1 AND @editionCheck = 1 
                    SET @rebuildCommand = N' REBUILD WITH (ONLINE = ON';
                ELSE
                    SET @rebuildCommand = N' REBUILD WITH (ONLINE = Off';
 
                /* Set sort operation preferences */
                IF @sortInTempDB = 1 
                    SET @rebuildCommand = @rebuildCommand + N', SORT_IN_TEMPDB = ON';
                ELSE
                    SET @rebuildCommand = @rebuildCommand + N', SORT_IN_TEMPDB = Off';
 
                /* Set processor restriction options; requires Enterprise Edition */
                IF @maxDopRestriction IS NOT NULL AND @editionCheck = 1
                    SET @rebuildCommand = @rebuildCommand + N', MAXDOP = ' + CAST(@maxDopRestriction AS VARCHAR(2)) + N')';
                ELSE
                    SET @rebuildCommand = @rebuildCommand + N')';
 
                SET @sqlCommand = N'ALTER INDEX ' + @indexName + N' ON ' + @databaseName + N'.'
                                + @schemaName + N'.' + @objectName + @rebuildCommand;
 
            END
            ELSE
                /* Print an error message if any indexes happen to not meet the criteria above */
                IF @printCommands = 1 OR @debugMode = 1
                    RAISERROR('We are unable to defrag this index.', 0, 42) WITH NOWAIT;
 
            /* Are we executing the SQL?  IF so, do it */
            IF @executeSQL = 1
            BEGIN
 
                SET @debugMessage = 'Executing: ' + @sqlCommand;
 
                /* Print the commands we're executing if specified to do so */
                IF @printCommands = 1 OR @debugMode = 1
                    RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
                /* Grab the time for logging purposes */
                SET @datetimestart  = GETDATE();
 
                /* Log our actions */
                INSERT INTO dbo.dba_indexDefragLog
                (
                      databaseID
                    , databaseName
                    , objectID
                    , objectName
                    , indexID
                    , indexName
                    , partitionNumber
                    , fragmentation
                    , page_count
                    , DATETIMEStart
                    , sqlStatement
                )
                SELECT
                      @databaseID
                    , @databaseName
                    , @objectID
                    , @objectName
                    , @indexID
                    , @indexName
                    , @partitionNumber
                    , @fragmentation
                    , @pageCount
                    , @datetimestart
                    , @sqlCommand;
 
                SET @indexDefrag_id = SCOPE_IDENTITY();
 
                /* Wrap our execution attempt in a TRY/CATCH and log any errors that occur */
                BEGIN TRY
 
                    /* Execute our defrag! */
                    EXECUTE sp_executesql @sqlCommand;
                    SET @dateTimeEnd = GETDATE();
 
                    /* Update our log with our completion time */
                    UPDATE dbo.dba_indexDefragLog
                    SET dateTimeEnd = @dateTimeEnd
                        , durationSeconds = DATEDIFF(second, @datetimestart, @dateTimeEnd)
                    WHERE indexDefrag_id = @indexDefrag_id;
 
                END TRY
                BEGIN CATCH
 
                    /* Update our log with our error message */
                    UPDATE dbo.dba_indexDefragLog
                    SET dateTimeEnd = GETDATE()
                        , durationSeconds = -1
                        , errorMessage = ERROR_MESSAGE()
                    WHERE indexDefrag_id = @indexDefrag_id;
 
                    IF @debugMode = 1 
                        RAISERROR('  An error has occurred executing this command! Please review the dba_indexDefragLog table for details.'
                            , 0, 42) WITH NOWAIT;
 
                END CATCH
 
                /* Just a little breather for the server */
                WAITFOR DELAY @defragDelay;
 
                UPDATE dbo.dba_indexDefragStatus
                SET defragDate = GETDATE()
                    , printStatus = 1
                WHERE databaseID       = @databaseID
                  AND objectID         = @objectID
                  AND indexID          = @indexID
                  AND partitionNumber  = @partitionNumber;
 
            END
            ELSE
            /* Looks like we're not executing, just printing the commands */
            BEGIN
                IF @debugMode = 1 RAISERROR('  Printing SQL statements...', 0, 42) WITH NOWAIT;
 
                IF @printCommands = 1 OR @debugMode = 1 
                    PRINT ISNULL(@sqlCommand, 'error!');
 
                UPDATE dbo.dba_indexDefragStatus
                SET printStatus = 1
                WHERE databaseID       = @databaseID
                  AND objectID         = @objectID
                  AND indexID          = @indexID
                  AND partitionNumber  = @partitionNumber;
            END
 
        END
 
        /* Do we want to output our fragmentation results? */
        IF @printFragmentation = 1
        BEGIN
 
            IF @debugMode = 1 RAISERROR('  Displaying a summary of our action...', 0, 42) WITH NOWAIT;
 
            SELECT databaseID
                , databaseName
                , objectID
                , objectName
                , indexID
                , indexName
                , partitionNumber
                , fragmentation
                , page_count
                , range_scan_count
            FROM dbo.dba_indexDefragStatus
            WHERE defragDate >= @startdatetime
            ORDER BY defragDate;
 
        END;
 
    END TRY
    BEGIN CATCH
 
        SET @debugMessage = ERROR_MESSAGE() + ' (Line Number: ' + CAST(ERROR_LINE() AS VARCHAR(10)) + ')';
        PRINT @debugMessage;
 
    END CATCH;
 
    /* When everything is said and done, make sure to get rid of our temp table */
    DROP TABLE #databaseList;
    DROP TABLE #processor;
    DROP TABLE #maxPartitionList;
 
    IF @debugMode = 1 RAISERROR('DONE!  Thank you for taking care of your indexes!  :)', 0, 42) WITH NOWAIT;
 
    SET NOCOUNT OFF;
    RETURN 0;
END
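
Once the procedure is created, executing it is straightforward. As a quick sketch, assuming the procedure was created as dbo.dba_indexDefrag_sp (check the CREATE PROCEDURE statement at the top of the script for the actual name you used), the following performs a dry run that prints the ALTER INDEX commands without executing them:

EXECUTE dbo.dba_indexDefrag_sp
      @executeSQL         = 0  -- print only, don't execute
    , @printCommands      = 1  -- show the ALTER INDEX statements
    , @printFragmentation = 1  -- output the fragmentation summary
    , @debugMode          = 1; -- verbose progress messages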

T-SQL Script for Estimating Compression Savings

A couple of weeks ago, I was working on a Microsoft PDW proof-of-concept (POC) and had to measure compression ratios. In order to do this, I fired up SSMS and wrote a little script. The script iterates through all tables in a database and runs the sp_estimate_data_compression_savings stored procedure against each one. Note that this will only work on SQL Server 2008 and higher running Enterprise Edition.

If you’re not familiar with this stored procedure, it will estimate what effect PAGE or ROW compression will have on your table/index/partition, etc. There are pros and cons to compression. What I’ve tended to see is that compression has very positive results on space, IO, and query duration, with a negative impact on CPU and write speed. Like most things, it’s a trade-off and the results will vary by environment, so I recommend you do some testing before you apply compression to all tables. I tend to use compression mostly for my historical tables and partitions and leave my recent data uncompressed. And, back to the script, I use this stored procedure to estimate the impact of compression and to determine whether to use PAGE or ROW compression. PAGE is a higher level of compression, which means it’s also more expensive in terms of CPU, so if the difference between the two results is negligible, I’m more apt to just use ROW compression.
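Once you’ve settled on a compression type, applying it is just a rebuild option. As a quick sketch, using a hypothetical dbo.FactSalesHistory table and index, PAGE compression on the table and ROW compression on a single index would look something like this:

ALTER TABLE dbo.FactSalesHistory -- hypothetical table name
REBUILD WITH (DATA_COMPRESSION = PAGE);
 
ALTER INDEX IX_FactSalesHistory_OrderDate -- hypothetical index name
ON dbo.FactSalesHistory
REBUILD WITH (DATA_COMPRESSION = ROW);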

Now that my impromptu compression discussion is done, let’s get to the actual script. One final word of caution, however: this is an IO-intensive process, so you may want to run it after peak business hours.

SET NOCOUNT ON;
 
DECLARE @printOnly  BIT = 0 -- change to 1 if you don't want to execute, just print commands
    , @tableName    VARCHAR(256)
    , @schemaName   VARCHAR(100)
    , @sqlStatement NVARCHAR(1000)
    , @tableCount   INT
    , @statusMsg    VARCHAR(1000);
 
IF OBJECT_ID('tempdb..#tables') IS NOT NULL
    DROP TABLE #tables; 
 
CREATE TABLE #tables
(
      database_name     sysname
    , schemaName        sysname NULL
    , tableName         sysname NULL
    , processed         bit
);
 
IF OBJECT_ID('tempdb..#compressionResults') IS NOT NULL
    DROP TABLE #compressionResults;
 
CREATE TABLE #compressionResults
(
      objectName                    varchar(100)
    , schemaName                    varchar(50)
    , index_id                      int
    , partition_number              int
    , size_current_compression      bigint
    , size_requested_compression    bigint
    , sample_current_compression    bigint
    , sample_requested_compression  bigint
);
 
INSERT INTO #tables
SELECT DB_NAME()
    , SCHEMA_NAME([schema_id])
    , name
    , 0 -- unprocessed
FROM sys.tables;
 
SELECT @tableCount = COUNT(*) FROM #tables;
 
WHILE EXISTS(SELECT * FROM #tables WHERE processed = 0)
BEGIN
 
    SELECT TOP 1 @tableName = tableName
        , @schemaName = schemaName
    FROM #tables WHERE processed = 0;
 
    SELECT @statusMsg = 'Working on ' + CAST(((@tableCount - COUNT(*)) + 1) AS VARCHAR(10)) 
        + ' of ' + CAST(@tableCount AS VARCHAR(10))
    FROM #tables
    WHERE processed = 0;
 
    RAISERROR(@statusMsg, 0, 42) WITH NOWAIT;
 
    SET @sqlStatement = 'EXECUTE sp_estimate_data_compression_savings ''' 
                        + @schemaName + ''', ''' + @tableName + ''', NULL, NULL, ''PAGE'';' -- ROW, PAGE, or NONE
 
    IF @printOnly = 1
    BEGIN 
 
        SELECT @sqlStatement;
 
    END
    ELSE
    BEGIN
 
        INSERT INTO #compressionResults
        EXECUTE sp_executesql @sqlStatement;
 
    END;
 
    UPDATE #tables
    SET processed = 1
    WHERE tableName = @tableName
        AND schemaName = @schemaName;
 
END;
 
SELECT * 
FROM #compressionResults;
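
To make the estimates a little easier to digest, you can roll the results up to the table level. Something like this works against the #compressionResults table we created above (the sizes returned by sp_estimate_data_compression_savings are in KB):

SELECT schemaName
    , objectName
    , SUM(size_current_compression)   AS [current_size_KB]
    , SUM(size_requested_compression) AS [requested_size_KB]
FROM #compressionResults
GROUP BY schemaName
    , objectName
ORDER BY schemaName
    , objectName;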

Disposable Indexes

Today I had to run an ad hoc query on an 8.5 billion row table. The table had a dozen columns of a variety of data types and was clustered on a bigint identity. There were no other indexes on the table. My query involved a join to a smaller table with a date range restriction. Without an adequate index to use, SQL Server was going to be forced to scan this 8.5 billion row table. Now, I don’t have much patience for waiting for long-running queries. I want to run the ad hoc query, e-mail the results, and forget about it. But short of adding a nonclustered index, which would take a very long time to build and probably require additional space requisitioned from the SAN team, what could I do? Enter disposable indexes. Now, you might be asking yourself, “What the frilly heck does she mean by a disposable index? Is that new in Denali?” No, dear reader. I am actually referring to filtered indexes, which are available in SQL Server 2008 and 2008 R2. I call them “disposable” because I create them to significantly speed up ad hoc queries, then I drop them when I’m done.

Here, allow me to demonstrate using the AdventureWorks2008R2 database. Although the tables are smaller, this query is very similar in structure to what I needed to run today.

Select Count(Distinct sod.SalesOrderID) As 'distinctCount'
From AdventureWorks2008R2.Sales.SalesOrderDetail As sod
Join AdventureWorks2008R2.Production.Product As p
    On sod.ProductID = p.ProductID
Where sod.ModifiedDate Between '2008-01-01' And '2008-07-31'
    And p.MakeFlag = 0;

Now, let’s take a look at the type of indexes we currently have available:

Select name, has_filter, filter_definition
From sys.indexes
Where object_id = object_id('Sales.SalesOrderDetail');
name                                                    has_filter filter_definition
-----------------------------------------------------------------------------------------
PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID     0          NULL
AK_SalesOrderDetail_rowguid                             0          NULL
IX_SalesOrderDetail_ProductID                           0          NULL
 
(3 row(s) affected)

We need an index on ModifiedDate and ProductID, which we don’t currently have. Without one, we’re going to end up scanning on the clustered index. That means SQL Server will have to evaluate each and every row in the table to see if the row matches the criteria of our query. Not pretty, and certainly not fast. So instead, let’s create a filtered index on date. But we can greatly speed up the time it takes to create our filtered index by doing a little investigating upfront and finding a range of clustering key values that will cover the query. Doing this allows SQL Server to seek on the clustered index, greatly reducing the number of reads necessary to create our filtered index. So let’s see this in action. First, let’s find out the current max value of the table:

Select Max(SalesOrderDetailID) As 'maxID'
From AdventureWorks2008R2.Sales.SalesOrderDetail;
maxID
-----------
121317

Now we get to do a little guessing. Let’s go back and see what date we get if we look at half of the records:

Select SalesOrderDetailID, ModifiedDate
From AdventureWorks2008R2.Sales.SalesOrderDetail
Where SalesOrderDetailID = (121317/2);
SalesOrderDetailID ModifiedDate
------------------ -----------------------
60658              2007-11-01 00:00:00.000

Okay, SalesOrderDetailID 60658 gets us back to 11/1/2007. That’s a little too far. Let’s see how a SalesOrderDetailID value of 75000 does…

Select SalesOrderDetailID, ModifiedDate
From AdventureWorks2008R2.Sales.SalesOrderDetail
Where SalesOrderDetailID = 75000;
SalesOrderDetailID ModifiedDate
------------------ -----------------------
75000              2007-12-27 00:00:00.000

Okay, SalesOrderDetailID 75000 takes us back to 12/27/2007. That’s close enough to 1/1/2008 for my purposes. Of course, depending on the size of the table, in real life it may make sense to try to get closer to the value you’re looking for. But for now, this will do. And because we’re looking for data through the “current date” (7/31/2008 in the AdventureWorks2008R2 database), we already know our outer limit is 121317.

Now let’s take these ranges and create a filtered index that will cover our query:

Create Nonclustered Index IX_SalesOrderDetail_filtered
    On Sales.SalesOrderDetail(ModifiedDate, ProductID)
    Include (SalesOrderID)
    Where SalesOrderDetailID >= 75000
      And SalesOrderDetailID <= 121317;

By having this range identified, SQL Server can perform a seek on the clustered index to create the nonclustered index on just the subset of records that you need for your query. Remember that 8.5 billion row table I mentioned earlier? I was able to create a filtered index that covered my query in 10 seconds. Yes, that’s right… 10 SECONDS.

The last thing we need to do is include our filtered index’s filter criteria in our ad hoc query, to ensure that the filtered index covers the query and can actually be used. It also doesn’t hurt to explicitly tell SQL Server to use the filtered index if you’re absolutely sure it’s the best index for the job.

Select Count(Distinct sod.SalesOrderID) As 'distinctCount'
From AdventureWorks2008R2.Sales.SalesOrderDetail As sod With (Index(IX_SalesOrderDetail_filtered))
Join AdventureWorks2008R2.Production.Product As p
    On sod.ProductID = p.ProductID
Where sod.ModifiedDate Between '2008-01-01' And '2008-07-31'
    And p.MakeFlag = 0
    And sod.SalesOrderDetailID >= 75000
    And sod.SalesOrderDetailID <= 121317;

That’s all there is to it. Using this method, I was able to complete my ad hoc request in 40 seconds: 10 seconds to create the filtered index and 30 seconds to actually execute the ad hoc query. Of course, it also took a couple of minutes to write the query, look at existing indexes, and search for the correct identity values. All in all, from the time I received the request to the time I sent the e-mail was about 5 minutes. All because of disposable filtered indexes. How’s that for some SQL #awesomesauce? :)
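And the “disposable” part? When you’re done with the ad hoc request, cleanup is a one-liner; just drop the index like you would any other nonclustered index:

Drop Index IX_SalesOrderDetail_filtered
    On Sales.SalesOrderDetail;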

Yet Another PASS Summit Recap & Thoughts on PDW

The SQL blogosphere has been lit up with PASS Summit recaps.

I debated about whether or not to write my own post, until I remembered that this blog serves as a mini-journal for me too. I have a notoriously poor memory–my husband likes to say that my CPU and memory are good, but I must have an unusual clustering strategy–so maybe this blog post will be a good pointer for me when I start prepping for next year’s Summit. ;)

This was definitely the best PASS Summit conference ever. While there will always be opportunities to do things better–improvement is a never-ending process–it was clear that the organizers of this event listened to the feedback they had received the previous year. One of the best changes? Backpacks. These were very useful, as evidenced by their presence everywhere. Nice job, organizers!

My absolute favorite thing about Summit is the chance to meet and reconnect with so many amazing SQL folks. There were entirely too many people to list out, but some highlights include meeting Crys Manson, Jorge Segarra, and Karen Lopez for the first time. I also had a chance encounter with Ola Hallengren in the Sheraton elevator. Apparently we were only staying a few rooms apart this year. We ended up having a couple of really great discussions about index fragmentation, the differences between our scripts, and things we’d like to see changed in future releases of SQL Server.

I had the opportunity to sit on the panel at the WIT luncheon. All of the women on the panel were amazing, and I was honored just to be sitting at the same table as them. I was especially pleased to meet Nora Denzel, a Senior Vice President at Intuit. Intelligent, confident, and witty, she is a great role model for young technical women, myself included. I can only hope that some of her gumption rubbed off on me due to our close proximity. :) After the event, I was pleasantly surprised by how many folks–men and women both–came up to me to tell me how much they enjoyed it. Thanks to the WIT VC for organizing another great event!

The lightning talk sessions were a new feature this year, and I think I like them. The format of the lightning session is 7 speakers, each presenting on a topic for 5 quick minutes. Watching these sessions is kind of like skipping right to the center of a tootsie pop: all content and no fluff. The standout lightning talk presentation for me was Adam Machanic’s. It was beautifully rehearsed and choreographed. Nice job, Adam!

Another of the many highlights of the week was meeting the Microsoft execs. In addition to meeting Ted Kummert, Mark Souza, and Donald Farmer–all very nice gentlemen–I had the opportunity to speak at length with Jose Blakely about Parallel Data Warehouse (PDW). PDW, formerly codenamed Madison, was officially launched at Summit. Jose was kind enough to explain the PDW architecture, both where it came from and the vision for where it’s going. I’d attempt to regurgitate it here, but I think the probability of me misquoting would be high.

Suffice it to say, this technology has me excited. Why? Quite frankly, I think PDW will do for data warehousing what SQL Server did for databases, and what Analysis Services did for BI: make it affordable. With a compelling cost-per-terabyte, an attractive scale-out approach, and an entry point at under $1 million, we’ll see more small-to-midsized companies implementing data warehousing and business intelligence. This is good news for those of us looking for an affordable data warehouse solution and for those of us who make our living with SQL Server. And for those of you who might suggest that few companies need a data warehouse that can support multiple terabytes of data, I’d like to point out that just 3 or 4 years ago, 100 GB was considered a lot of data.

I spent most of my week digging into the PDW architecture. It’s not all roses–it’s a first release and, as such, is immature compared to the much older and more established data warehouse systems–but again, it has a lot going for it, not least of all its easy integration within a SQL Server environment and its relatively low cost. We’re currently investigating this as a possible data warehouse solution for our business intelligence environment, so expect to see more from me about PDW as I learn more about it.