Calculate Rows Inserted per Second for All Tables

Ever needed to calculate the number of rows inserted every second, for every table in every database on a server? Or have you ever needed to validate that all processes have stopped writing to tables? These types of questions come up routinely for me. To help answer them, I’ve written the following script, which examines metadata values in sys.partitions. This method isn’t as accurate as running SELECT COUNT(*) against each table, but it’s much faster. Keep in mind that, since it’s only looking at row counts, it’s not much help on tables with a lot of update/delete activity. But it does what I need it to do, and I use it pretty regularly, so I thought I’d share in case anyone else can benefit from it too. :)

/* Declare Parameters */
DECLARE @newBaseline BIT = 1 -- change to 0 when you don't want to replace the baseline, i.e. after initial run
  , @delay CHAR(8) = '00:00:30'; -- change as needed
 
IF @newBaseline = 1 
BEGIN
    IF OBJECT_ID('tempdb..#baseline') IS NOT NULL
        DROP TABLE #baseline;
 
    CREATE TABLE #baseline
    (
         database_name  SYSNAME
       , table_name     SYSNAME
       , table_rows     BIGINT
       , captureTime    DATETIME NULL
    );
END
 
IF OBJECT_ID('tempdb..#current') IS NOT NULL
    DROP TABLE #current;
 
CREATE TABLE #current
(
     database_name  SYSNAME
   , table_name     SYSNAME
   , table_rows     BIGINT
   , captureTime    DATETIME NULL
);
 
IF @newBaseline = 1 
BEGIN
    EXECUTE sp_MSforeachdb 'USE ?; 
        INSERT INTO #baseline
        SELECT DB_NAME()
            , o.name As [tableName]
            , SUM(p.[rows]) As [rowCnt]
            , GETDATE() As [captureTime]
        FROM sys.indexes As i
        JOIN sys.partitions As p
            ON i.[object_id] = p.[object_id]
           AND i.index_id  = p.index_id
        JOIN sys.objects As o
            ON i.[object_id] = o.[object_id]
        WHERE i.[type] = 1
        GROUP BY o.name;'
 
    WAITFOR DELAY @delay;
END
 
EXECUTE sp_MSforeachdb 'USE ?; 
INSERT INTO #current
SELECT DB_NAME()
    , o.name As [tableName]
    , SUM(p.[rows]) As [rowCnt]
    , GETDATE() As [captureTime]
FROM sys.indexes As i
JOIN sys.partitions As p
    ON i.[object_id] = p.[object_id]
   AND i.index_id  = p.index_id
JOIN sys.objects As o
    ON i.[object_id] = o.[object_id]
WHERE i.[type] = 1
GROUP BY o.name;'
 
SELECT  c.*
      , c.table_rows - b.table_rows AS 'new_rows'
      , DATEDIFF(second, b.captureTime, c.captureTime) AS 'time_diff'
      , (c.table_rows - b.table_rows) / NULLIF(DATEDIFF(second, b.captureTime, c.captureTime), 0) AS 'rows_per_sec'
FROM #baseline AS b
JOIN #current AS c
    ON b.table_name = c.table_name
   AND b.database_name = c.database_name
ORDER BY new_rows DESC;
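
If you’re using this to validate that writes have stopped, a quick follow-up query against the same temp tables does the trick. Here’s a minimal sketch — the database name is just a placeholder:

/* Confirm nothing is still writing to a given database (name is hypothetical) */
SELECT c.database_name
    , c.table_name
    , c.table_rows - b.table_rows AS 'new_rows'
FROM #baseline AS b
JOIN #current AS c
    ON b.table_name = c.table_name
   AND b.database_name = c.database_name
WHERE c.database_name = 'myDatabase'
    AND c.table_rows - b.table_rows > 0;

If this returns no rows, nothing has inserted into that database between the baseline and the current capture. On subsequent checks, just set @newBaseline = 0 so the original baseline is preserved.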

Index Defrag Script, v4.1

It’s been quite some time since my last index defrag script update. A big part of the reason for that is because I wanted to implement many of the suggestions I’ve received, but I just haven’t had the time. I still have those changes planned, but I’m not sure quite when I’ll get to it. Rather than continue to wait for a major release, I’m releasing a small update to my defrag that will take care of the most common complaints I receive.

Change Log:

  • Bug fix for databases containing spaces or special characters
  • Support for case-sensitive databases
  • Re-executable CREATE script (for those who want to re-run the whole script)
  • Comma-delimited list of databases is now supported for the @database parameter

Feature List:

  • Defrag a single database, a list of databases, or all databases (@database)
  • Time Limitations: stop defragging after the specified amount of time has elapsed (@timeLimit). Please note, it will not kill a defrag that is currently in process, even if it exceeds the threshold.
  • Optional stop-and-resume functionality: pick up where your defrag last left off without having to rescan sys.dm_db_index_physical_stats. (@forceRescan)
  • Defrag scheduling: choose which days to defrag certain indexes, or exclude certain indexes altogether, by using the dbo.dba_indexDefragExclusion table (see the example after this list).
  • Defrag priority: choose whether to defrag indexes in ascending or descending order by range_scan_count (default), fragmentation, or page_count.
  • Current partition exclusion: choose whether or not to exclude the right-most populated partition from the defrag process, common for sliding-window tables (@excludeMaxPartition)
  • Commands-only mode: Choose to just log the current defrag status and print the defrag commands, rather than executing them, by using @executeSQL.
  • … and tons more! Please read the parameter list and notes section for details of all the options available.
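
For the scheduling piece, you add one row per index to dbo.dba_indexDefragExclusion with a bitmask of the days you want to skip (1=Sunday, 2=Monday, 4=Tuesday, 8=Wednesday, 16=Thursday, 32=Friday, 64=Saturday — see the comments in the table definition below). Here’s a rough sketch; the database, table, and index names are hypothetical:

/* Hypothetical example: skip IX_bigTable_1 on Saturday and Sunday (64 + 1 = 65) */
INSERT INTO dbo.dba_indexDefragExclusion
    (databaseID, databaseName, objectID, objectName, indexID, indexName, exclusionMask)
SELECT DB_ID('salesDB')
    , 'salesDB'
    , OBJECT_ID('salesDB.dbo.bigTable')
    , 'bigTable'
    , i.index_id
    , i.name
    , 65
FROM salesDB.sys.indexes AS i
WHERE i.[object_id] = OBJECT_ID('salesDB.dbo.bigTable')
    AND i.name = 'IX_bigTable_1';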

FAQ:

I often receive the same questions about this script, so allow me to answer them here:

“I keep running the script, but my index is still fragmented. Why?”
This is most likely a very small index. Here’s what Microsoft has to say:

“In general, fragmentation on small indexes is often not controllable. The pages of small indexes are stored on mixed extents. Mixed extents are shared by up to eight objects, so the fragmentation in a small index might not be reduced after reorganizing or rebuilding the index. For more information about mixed extents, see Understanding Pages and Extents.”

“What database should I create it in?” or “Can I create this in the MASTER database?”
It’s up to you where you create it. You could technically create it in the MASTER database, but I recommend creating a utility database for your DBA administrative tasks.
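
If you don’t already have one, setting up a utility database only takes a moment — something like the following (the database name is just an example), and then run the installation script below from that database’s context:

CREATE DATABASE dbaUtility; -- example name for a DBA utility database
GO
USE dbaUtility;
GO
-- run the dba_indexDefrag_sp installation script from here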

“Can I run this against a SharePoint database?”
Yes, you can.

“What are the minimum requirements to run this script?” or “Will this run on SQL Server 2000 instances?”
You need to be on SQL Server 2005 SP2 or higher.

Special thanks to Richard Yanger for his assistance with beta testing. :)

You can download a text file of this script here: dba_indexDefrag_sp_v41
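
Once the script below is installed, I typically call it from a nightly SQL Agent job. Here’s a rough sketch using msdb’s job procedures — the job name, schedule, and target database are all just examples, so adjust the parameters to fit your environment:

USE msdb;
GO
 
EXECUTE dbo.sp_add_job @job_name = N'DBA - Nightly Index Defrag';
 
EXECUTE dbo.sp_add_jobstep
      @job_name      = N'DBA - Nightly Index Defrag'
    , @step_name     = N'Run dba_indexDefrag_sp'
    , @subsystem     = N'TSQL'
    , @database_name = N'dbaUtility' -- wherever you installed the procedure
    , @command       = N'EXECUTE dbo.dba_indexDefrag_sp @executeSQL = 1, @timeLimit = 720;';
 
EXECUTE dbo.sp_add_jobschedule
      @job_name          = N'DBA - Nightly Index Defrag'
    , @name              = N'Nightly at 2 AM'
    , @freq_type         = 4  -- daily
    , @freq_interval     = 1
    , @active_start_time = 20000; -- 02:00:00
 
EXECUTE dbo.sp_add_jobserver @job_name = N'DBA - Nightly Index Defrag';
GO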

/*** Scroll down to see the important notes, disclaimers, and licensing information ***/
 
/* Let's create our parsing function... */
IF EXISTS ( SELECT  [object_id]
            FROM    sys.objects
            WHERE   name = 'dba_parseString_udf' )
    DROP FUNCTION dbo.dba_parseString_udf;
GO
 
CREATE FUNCTION dbo.dba_parseString_udf
(
          @stringToParse VARCHAR(8000)  
        , @delimiter     CHAR(1)
)
RETURNS @parsedString TABLE (stringValue VARCHAR(128))
AS
/*********************************************************************************
    Name:       dba_parseString_udf
 
    Author:     Michelle Ufford, http://sqlfool.com
 
    Purpose:    This function parses string input using a variable delimiter.
 
    Notes:      Two common delimiter values are space (' ') and comma (',')
 
    Date        Initials    Description
    ----------------------------------------------------------------------------
    2011-05-20  MFU         Initial Release
*********************************************************************************
Usage: 		
    SELECT *
    FROM dba_parseString_udf(<string>, <delimiter>);
 
Test Cases:
 
    1.  multiple strings separated by space
        SELECT * FROM dbo.dba_parseString_udf('  aaa  bbb  ccc ', ' ');
 
    2.  multiple strings separated by comma
        SELECT * FROM dbo.dba_parseString_udf(',aaa,bbb,,,ccc,', ',');
*********************************************************************************/
BEGIN
 
    /* Declare variables */
    DECLARE @trimmedString  VARCHAR(8000);
 
    /* We need to trim our string input in case the user entered extra spaces */
    SET @trimmedString = LTRIM(RTRIM(@stringToParse));
 
    /* Let's create a recursive CTE to break down our string for us */
    WITH parseCTE (StartPos, EndPos)
    AS
    (
        SELECT 1 AS StartPos
            , CHARINDEX(@delimiter, @trimmedString + @delimiter) AS EndPos
        UNION ALL
        SELECT EndPos + 1 AS StartPos
            , CHARINDEX(@delimiter, @trimmedString + @delimiter , EndPos + 1) AS EndPos
        FROM parseCTE
        WHERE CHARINDEX(@delimiter, @trimmedString + @delimiter, EndPos + 1) <> 0
    )
 
    /* Let's take the results and stick it in a table */  
    INSERT INTO @parsedString
    SELECT SUBSTRING(@trimmedString, StartPos, EndPos - StartPos)
    FROM parseCTE
    WHERE LEN(LTRIM(RTRIM(SUBSTRING(@trimmedString, StartPos, EndPos - StartPos)))) > 0
    OPTION (MaxRecursion 8000);
 
    RETURN;   
END
GO
 
/* First, we need to take care of schema updates, in case you have a legacy 
   version of the script installed */
DECLARE @indexDefragLog_rename      VARCHAR(128)
  , @indexDefragExclusion_rename    VARCHAR(128)
  , @indexDefragStatus_rename       VARCHAR(128);
 
SELECT  @indexDefragLog_rename = 'dba_indexDefragLog_obsolete_' + CONVERT(VARCHAR(10), GETDATE(), 112)
      , @indexDefragExclusion_rename = 'dba_indexDefragExclusion_obsolete_' + CONVERT(VARCHAR(10), GETDATE(), 112);
 
IF EXISTS ( SELECT  [object_id]
            FROM    sys.indexes
            WHERE   name = 'PK_indexDefragLog' ) 
    EXECUTE sp_rename dba_indexDefragLog, @indexDefragLog_rename;
 
IF EXISTS ( SELECT  [object_id]
            FROM    sys.indexes
            WHERE   name = 'PK_indexDefragExclusion' ) 
    EXECUTE sp_rename dba_indexDefragExclusion, @indexDefragExclusion_rename;
 
IF NOT EXISTS ( SELECT  [object_id]
                FROM    sys.indexes
                WHERE   name = 'PK_indexDefragLog_v40' )
BEGIN
 
    CREATE TABLE dbo.dba_indexDefragLog
    (
         indexDefrag_id     INT IDENTITY(1, 1)  NOT NULL
       , databaseID         INT                 NOT NULL
       , databaseName       NVARCHAR(128)       NOT NULL
       , objectID           INT                 NOT NULL
       , objectName         NVARCHAR(128)       NOT NULL
       , indexID            INT                 NOT NULL
       , indexName          NVARCHAR(128)       NOT NULL
       , partitionNumber    SMALLINT            NOT NULL
       , fragmentation      FLOAT               NOT NULL
       , page_count         INT                 NOT NULL
       , dateTimeStart      DATETIME            NOT NULL
       , dateTimeEnd        DATETIME            NULL
       , durationSeconds    INT                 NULL
       , sqlStatement       VARCHAR(4000)       NULL
       , errorMessage       VARCHAR(1000)       NULL 
 
        CONSTRAINT PK_indexDefragLog_v40 
            PRIMARY KEY CLUSTERED (indexDefrag_id)
    );
 
    PRINT 'dba_indexDefragLog Table Created';
 
END
 
IF NOT EXISTS ( SELECT  [object_id]
                FROM    sys.indexes
                WHERE   name = 'PK_indexDefragExclusion_v40' )
BEGIN
 
    CREATE TABLE dbo.dba_indexDefragExclusion
    (
         databaseID         INT             NOT NULL
       , databaseName       NVARCHAR(128)   NOT NULL
       , objectID           INT             NOT NULL
       , objectName         NVARCHAR(128)   NOT NULL
       , indexID            INT             NOT NULL
       , indexName          NVARCHAR(128)   NOT NULL
       , exclusionMask      INT             NOT NULL
            /* 1=Sunday, 2=Monday, 4=Tuesday, 8=Wednesday, 16=Thursday, 32=Friday, 64=Saturday */
 
         CONSTRAINT PK_indexDefragExclusion_v40 
            PRIMARY KEY CLUSTERED (databaseID, objectID, indexID)
    );
 
    PRINT 'dba_indexDefragExclusion Table Created';
 
END
 
IF NOT EXISTS ( SELECT  [object_id]
                FROM    sys.indexes
                WHERE   name = 'PK_indexDefragStatus_v40' )
BEGIN
 
    CREATE TABLE dbo.dba_indexDefragStatus
    (
         databaseID         INT             NOT NULL
       , databaseName       NVARCHAR(128)   NOT NULL
       , objectID           INT             NOT NULL
       , indexID            INT             NOT NULL
       , partitionNumber    SMALLINT        NOT NULL
       , fragmentation      FLOAT           NOT NULL
       , page_count         INT             NOT NULL
       , range_scan_count   BIGINT          NOT NULL
       , schemaName         NVARCHAR(128)   NULL
       , objectName         NVARCHAR(128)   NULL
       , indexName          NVARCHAR(128)   NULL
       , scanDate           DATETIME        NOT NULL
       , defragDate         DATETIME        NULL
       , printStatus        BIT DEFAULT (0) NOT NULL
       , exclusionMask      INT DEFAULT (0) NOT NULL
 
        CONSTRAINT PK_indexDefragStatus_v40 
            PRIMARY KEY CLUSTERED (databaseID, objectID, indexID, partitionNumber)
    );
 
    PRINT 'dba_indexDefragStatus Table Created';
 
END;
 
IF OBJECTPROPERTY(OBJECT_ID('dbo.dba_indexDefrag_sp'), N'IsProcedure') = 1 
    BEGIN
        DROP PROCEDURE dbo.dba_indexDefrag_sp;
        PRINT 'Procedure dba_indexDefrag_sp dropped';
    END;
GO
 
CREATE PROCEDURE dbo.dba_indexDefrag_sp
 
    /* Declare Parameters */
    @minFragmentation       FLOAT               = 10.0  
        /* in percent, will not defrag if fragmentation less than specified */
  , @rebuildThreshold       FLOAT               = 30.0  
        /* in percent, greater than @rebuildThreshold will result in rebuild instead of reorg */
  , @executeSQL             BIT                 = 1     
        /* 1 = execute; 0 = print command only */
  , @defragOrderColumn      NVARCHAR(20)        = 'range_scan_count'
        /* Valid options are: range_scan_count, fragmentation, page_count */
  , @defragSortOrder        NVARCHAR(4)         = 'DESC'
        /* Valid options are: ASC, DESC */
  , @timeLimit              INT                 = 720 /* defaulted to 12 hours */
        /* Optional time limitation; expressed in minutes */
  , @database               VARCHAR(128)        = NULL
        /* Option to specify one or more database names, separated by commas; NULL will return all */
  , @tableName              VARCHAR(4000)       = NULL  -- databaseName.schema.tableName
        /* Option to specify a table name; null will return all */
  , @forceRescan            BIT                 = 0
        /* Whether or not to force a rescan of indexes; 1 = force, 0 = use existing scan, if available */
  , @scanMode               VARCHAR(10)         = N'LIMITED'
        /* Options are LIMITED, SAMPLED, and DETAILED */
  , @minPageCount           INT                 = 8 
        /*  MS recommends > 1 extent (8 pages) */
  , @maxPageCount           INT                 = NULL
        /* NULL = no limit */
  , @excludeMaxPartition    BIT                 = 0
        /* 1 = exclude right-most populated partition; 0 = do not exclude; see notes for caveats */
  , @onlineRebuild          BIT                 = 1     
        /* 1 = online rebuild; 0 = offline rebuild; only in Enterprise */
  , @sortInTempDB           BIT                 = 1
        /* 1 = perform sort operation in TempDB; 0 = perform sort operation in the index's database */
  , @maxDopRestriction      TINYINT             = NULL
        /* Option to restrict the number of processors for the operation; only in Enterprise */
  , @printCommands          BIT                 = 0     
        /* 1 = print commands; 0 = do not print commands */
  , @printFragmentation     BIT                 = 0
        /* 1 = print fragmentation prior to defrag; 
           0 = do not print */
  , @defragDelay            CHAR(8)             = '00:00:05'
        /* time to wait between defrag commands */
  , @debugMode              BIT                 = 0
        /* display some useful comments to help determine if/where issues occur */
AS /*********************************************************************************
    Name:       dba_indexDefrag_sp
 
    Author:     Michelle Ufford, http://sqlfool.com
 
    Purpose:    Defrags one or more indexes for one or more databases
 
    Notes:
 
    CAUTION: TRANSACTION LOG SIZE SHOULD BE MONITORED CLOSELY WHEN DEFRAGMENTING.
             DO NOT RUN UNATTENDED ON LARGE DATABASES DURING BUSINESS HOURS.
 
      @minFragmentation     defaulted to 10%, will not defrag if fragmentation 
                            is less than that
 
      @rebuildThreshold     defaulted to 30% as recommended by Microsoft in BOL;
                            greater than 30% will result in rebuild instead
 
      @executeSQL           1 = execute the SQL generated by this proc; 
                            0 = print command only
 
      @defragOrderColumn    Defines how to prioritize the order of defrags.  Only
                            used if @executeSQL = 1.  
                            Valid options are: 
                            range_scan_count = count of range and table scans on the
                                               index; in general, this is what benefits 
                                                the most from defragmentation
                            fragmentation    = amount of fragmentation in the index;
                                               the higher the number, the worse it is
                            page_count       = number of pages in the index; affects
                                               how long it takes to defrag an index
 
      @defragSortOrder      The sort order of the ORDER BY clause.
                            Valid options are ASC (ascending) or DESC (descending).
 
      @timeLimit            Optional, limits how much time can be spent performing 
                            index defrags; expressed in minutes.
 
                            NOTE: The time limit is checked BEFORE an index defrag
                                  is begun, thus a long index defrag can exceed the
                                  time limitation.
 
      @database             Optional, specify specific database name to defrag;
                            If not specified, all non-system databases will
                            be defragged.
 
      @tableName            Specify if you only want to defrag indexes for a 
                            specific table, format = databaseName.schema.tableName;
                            if not specified, all tables will be defragged.
 
      @forceRescan          Whether or not to force a rescan of indexes.  If set
                            to 0, a rescan will not occur until all indexes have
                            been defragged.  This can span multiple executions.
                            1 = force a rescan
                            0 = use previous scan, if there are indexes left to defrag
 
      @scanMode             Specifies which scan mode to use to determine
                            fragmentation levels.  Options are:
                            LIMITED - scans the parent level; quickest mode,
                                      recommended for most cases.
                            SAMPLED - samples 1% of all data pages; if less than
                                      10k pages, performs a DETAILED scan.
                            DETAILED - scans all data pages.  Use great care with
                                        this mode, as it can cause performance issues.
 
      @minPageCount         Specifies how many pages must exist in an index in order 
                            to be considered for a defrag.  Defaulted to 8 pages, as
                            Microsoft recommends only defragging indexes with more 
                            than 1 extent (8 pages).  
 
                            NOTE: The @minPageCount will restrict the indexes that
                            are stored in dba_indexDefragStatus table.
 
      @maxPageCount         Specifies the maximum number of pages that can exist in 
                            an index and still be considered for a defrag.  Useful
                            for scheduling small indexes during business hours and
                            large indexes for non-business hours.
 
                            NOTE: The @maxPageCount will restrict the indexes that
                            are defragged during the current operation; it will not
                            prevent indexes from being stored in the 
                            dba_indexDefragStatus table.  This way, a single scan
                            can support multiple page count thresholds.
 
      @excludeMaxPartition  If an index is partitioned, this option specifies whether
                            to exclude the right-most populated partition.  Typically,
                            this is the partition that is currently being written to in
                            a sliding-window scenario.  Enabling this feature may reduce
                            contention.  This may not be applicable in other types of 
                            partitioning scenarios.  Non-partitioned indexes are 
                            unaffected by this option.
                            1 = exclude right-most populated partition
                            0 = do not exclude
 
      @onlineRebuild        1 = online rebuild; 
                            0 = offline rebuild
 
      @sortInTempDB         Specifies whether to defrag the index in TEMPDB or in the
                            database the index belongs to.  Enabling this option may
                            result in faster defrags and prevent database file size 
                            inflation.
                            1 = perform sort operation in TempDB
                            0 = perform sort operation in the index's database 
 
      @maxDopRestriction    Option to specify a processor limit for index rebuilds
 
      @printCommands        1 = print commands to screen; 
                            0 = do not print commands
 
      @printFragmentation   1 = print fragmentation to screen;
                            0 = do not print fragmentation
 
      @defragDelay          Time to wait between defrag commands; gives the
                            server a little time to catch up 
 
      @debugMode            1 = display debug comments; helps with troubleshooting
                            0 = do not display debug comments
 
    Called by:  SQL Agent Job or DBA
 
    ----------------------------------------------------------------------------
    DISCLAIMER: 
    This code and information are provided "AS IS" without warranty of any kind,
    either expressed or implied, including but not limited to the implied 
    warranties of merchantability and/or fitness for a particular purpose.
    ----------------------------------------------------------------------------
    LICENSE: 
    This index defrag script is free to download and use for personal, educational, 
    and internal corporate purposes, provided that this header is preserved. 
    Redistribution or sale of this index defrag script, in whole or in part, is 
    prohibited without the author's express written consent.
    ----------------------------------------------------------------------------
    Date        Initials	Version Description
    ----------------------------------------------------------------------------
    2007-12-18  MFU         1.0     Initial Release
    2008-10-17  MFU         1.1     Added @defragDelay, CIX_temp_indexDefragList
    2008-11-17  MFU         1.2     Added page_count to log table
                                    , added @printFragmentation option
    2009-03-17  MFU         2.0     Provided support for centralized execution
                                    , consolidated Enterprise & Standard versions
                                    , added @debugMode, @maxDopRestriction
                                    , modified LOB and partition logic  
    2009-06-18  MFU         3.0     Fixed bug in LOB logic, added @scanMode option
                                    , added support for stat rebuilds (@rebuildStats)
                                    , support model and msdb defrag
                                    , added columns to the dba_indexDefragLog table
                                    , modified logging to show "in progress" defrags
                                    , added defrag exclusion list (scheduling)
    2009-08-28  MFU         3.1     Fixed read_only bug for database lists
    2010-04-20  MFU         4.0     Added time limit option
                                    , added static table with rescan logic
                                    , added parameters for page count & SORT_IN_TEMPDB
                                    , added try/catch logic and additional debug options
                                    , added options for defrag prioritization
                                    , fixed bug for indexes with allow_page_lock = off
                                    , added option to exclude right-most partition
                                    , removed @rebuildStats option
                                    , refer to http://sqlfool.com for full release notes
    2011-04-28  MFU         4.1     Bug fixes for databases requiring []
                                    , cleaned up the create table section
                                    , updated syntax for case-sensitive databases
                                    , comma-delimited list for @database now supported
*********************************************************************************
    Example of how to call this script:
 
        EXECUTE dbo.dba_indexDefrag_sp
              @executeSQL           = 1
            , @printCommands        = 1
            , @debugMode            = 1
            , @printFragmentation   = 1
            , @forceRescan          = 1
            , @maxDopRestriction    = 1
            , @minPageCount         = 8
            , @maxPageCount         = NULL
            , @minFragmentation     = 1
            , @rebuildThreshold     = 30
            , @defragDelay          = '00:00:05'
            , @defragOrderColumn    = 'page_count'
            , @defragSortOrder      = 'DESC'
            , @excludeMaxPartition  = 1
            , @timeLimit            = NULL
            , @database             = 'sandbox,sandbox_caseSensitive';
*********************************************************************************/																
SET NOCOUNT ON;
SET XACT_ABORT ON;
SET QUOTED_IDENTIFIER ON;
 
BEGIN
 
    BEGIN TRY
 
        /* Just a little validation... */
        IF @minFragmentation IS NULL 
            OR @minFragmentation NOT BETWEEN 0.00 AND 100.0
                SET @minFragmentation = 10.0;
 
        IF @rebuildThreshold IS NULL
            OR @rebuildThreshold NOT BETWEEN 0.00 AND 100.0
                SET @rebuildThreshold = 30.0;
 
        IF @defragDelay NOT LIKE '00:[0-5][0-9]:[0-5][0-9]'
            SET @defragDelay = '00:00:05';
 
        IF @defragOrderColumn IS NULL
            OR @defragOrderColumn NOT IN ('range_scan_count', 'fragmentation', 'page_count')
                SET @defragOrderColumn = 'range_scan_count';
 
        IF @defragSortOrder IS NULL
            OR @defragSortOrder NOT IN ('ASC', 'DESC')
                SET @defragSortOrder = 'DESC';
 
        IF @scanMode NOT IN ('LIMITED', 'SAMPLED', 'DETAILED')
            SET @scanMode = 'LIMITED';
 
        IF @debugMode IS NULL
            SET @debugMode = 0;
 
        IF @forceRescan IS NULL
            SET @forceRescan = 0;
 
        IF @sortInTempDB IS NULL
            SET @sortInTempDB = 1;
 
 
        IF @debugMode = 1 RAISERROR('Undusting the cogs AND starting up...', 0, 42) WITH NOWAIT;
 
        /* Declare our variables */
        DECLARE   @objectID                 INT
                , @databaseID               INT
                , @databaseName             NVARCHAR(128)
                , @indexID                  INT
                , @partitionCount           BIGINT
                , @schemaName               NVARCHAR(128)
                , @objectName               NVARCHAR(128)
                , @indexName                NVARCHAR(128)
                , @partitionNumber          SMALLINT
                , @fragmentation            FLOAT
                , @pageCount                INT
                , @sqlCommand               NVARCHAR(4000)
                , @rebuildCommand           NVARCHAR(200)
                , @datetimestart            DATETIME
                , @dateTimeEnd              DATETIME
                , @containsLOB              BIT
                , @editionCheck             BIT
                , @debugMessage             NVARCHAR(4000)
                , @updateSQL                NVARCHAR(4000)
                , @partitionSQL             NVARCHAR(4000)
                , @partitionSQL_Param       NVARCHAR(1000)
                , @LOB_SQL                  NVARCHAR(4000)
                , @LOB_SQL_Param            NVARCHAR(1000)
                , @indexDefrag_id           INT
                , @startdatetime            DATETIME
                , @enddatetime              DATETIME
                , @getIndexSQL              NVARCHAR(4000)
                , @getIndexSQL_Param        NVARCHAR(4000)
                , @allowPageLockSQL         NVARCHAR(4000)
                , @allowPageLockSQL_Param   NVARCHAR(4000)
                , @allowPageLocks           INT
                , @excludeMaxPartitionSQL   NVARCHAR(4000);
 
        /* Initialize our variables */
        SELECT @startdatetime = GETDATE()
            , @enddatetime = DATEADD(minute, @timeLimit, GETDATE());
 
        /* Create our temporary tables */
        CREATE TABLE #databaseList
        (
              databaseID        INT
            , databaseName      VARCHAR(128)
            , scanStatus        BIT
        );
 
        CREATE TABLE #processor 
        (
              [index]           INT
            , Name              VARCHAR(128)
            , Internal_Value    INT
            , Character_Value   INT
        );
 
        CREATE TABLE #maxPartitionList
        (
              databaseID        INT
            , objectID          INT
            , indexID           INT
            , maxPartition      INT
        );
 
        IF @debugMode = 1 RAISERROR('Beginning validation...', 0, 42) WITH NOWAIT;
 
        /* Make sure we're not exceeding the number of processors we have available */
        INSERT INTO #processor
        EXECUTE xp_msver 'ProcessorCount';
 
        IF @maxDopRestriction IS NOT NULL AND @maxDopRestriction > (SELECT Internal_Value FROM #processor)
            SELECT @maxDopRestriction = Internal_Value
            FROM #processor;
 
        /* Check our server version; 1804890536 = Enterprise, 610778273 = Enterprise Evaluation, -2117995310 = Developer */
        IF (SELECT ServerProperty('EditionID')) IN (1804890536, 610778273, -2117995310) 
            SET @editionCheck = 1 -- supports online rebuilds
        ELSE
            SET @editionCheck = 0; -- does not support online rebuilds
 
        /* Output the parameters we're working with */
        IF @debugMode = 1 
        BEGIN
 
            SELECT @debugMessage = 'Your SELECTed parameters are... 
            Defrag indexes WITH fragmentation greater than ' + CAST(@minFragmentation AS VARCHAR(10)) + ';
            REBUILD indexes WITH fragmentation greater than ' + CAST(@rebuildThreshold AS VARCHAR(10)) + ';
            You' + CASE WHEN @executeSQL = 1 THEN ' DO' ELSE ' DO NOT' END + ' want the commands to be executed automatically; 
            You want to defrag indexes in ' + @defragSortOrder + ' order of the ' + UPPER(@defragOrderColumn) + ' value;
            You have' + CASE WHEN @timeLimit IS NULL THEN ' NOT specified a time limit;' ELSE ' specified a time limit of ' 
                + CAST(@timeLimit AS VARCHAR(10)) END + ' minutes;
            ' + CASE WHEN @database IS NULL THEN 'ALL databases' ELSE 'The ' + @database + ' database(s)' END + ' will be defragged;
            ' + CASE WHEN @tableName IS NULL THEN 'ALL tables' ELSE 'The ' + @tableName + ' TABLE' END + ' will be defragged;
            We' + CASE WHEN EXISTS(SELECT Top 1 * FROM dbo.dba_indexDefragStatus WHERE defragDate IS NULL)
                AND @forceRescan <> 1 THEN ' WILL NOT' ELSE ' WILL' END + ' be rescanning indexes;
            The scan will be performed in ' + @scanMode + ' mode;
            You want to limit defrags to indexes with' + CASE WHEN @maxPageCount IS NULL THEN ' more than ' 
                + CAST(@minPageCount AS VARCHAR(10)) ELSE
                ' BETWEEN ' + CAST(@minPageCount AS VARCHAR(10))
                + ' AND ' + CAST(@maxPageCount AS VARCHAR(10)) END + ' pages;
            Indexes will be defragged' + CASE WHEN @editionCheck = 0 OR @onlineRebuild = 0 THEN ' OFFLINE;' ELSE ' ONLINE;' END + '
            Indexes will be sorted in' + CASE WHEN @sortInTempDB = 0 THEN ' the DATABASE' ELSE ' TEMPDB;' END + '
            Defrag operations will utilize ' + CASE WHEN @editionCheck = 0 OR @maxDopRestriction IS NULL 
                THEN 'system defaults for processors;' 
                ELSE CAST(@maxDopRestriction AS VARCHAR(2)) + ' processors;' END + '
            You' + CASE WHEN @printCommands = 1 THEN ' DO' ELSE ' DO NOT' END + ' want to PRINT the ALTER INDEX commands; 
            You' + CASE WHEN @printFragmentation = 1 THEN ' DO' ELSE ' DO NOT' END + ' want to OUTPUT fragmentation levels; 
            You want to wait ' + @defragDelay + ' (hh:mm:ss) BETWEEN defragging indexes;
            You want to run in' + CASE WHEN @debugMode = 1 THEN ' DEBUG' ELSE ' SILENT' END + ' mode.';
 
            RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
        END;
 
        IF @debugMode = 1 RAISERROR('Grabbing a list of our databases...', 0, 42) WITH NOWAIT;
 
        /* Retrieve the list of databases to investigate */
        /* If @database is NULL, it means we want to defrag *all* databases */
        IF @database IS NULL
        BEGIN
 
            INSERT INTO #databaseList
            SELECT database_id
                , name
                , 0 -- not scanned yet for fragmentation
            FROM sys.databases
            WHERE [name] NOT IN ('master', 'tempdb')-- exclude system databases
                AND [state] = 0 -- state must be ONLINE
                AND is_read_only = 0;  -- cannot be read_only
 
        END;
        ELSE
        /* Otherwise, we're going to just defrag our list of databases */
        BEGIN
 
            INSERT INTO #databaseList
            SELECT database_id
                , name
                , 0 -- not scanned yet for fragmentation
            FROM sys.databases AS d
            JOIN dbo.dba_parseString_udf(@database, ',') AS x
                ON d.name = x.stringValue
            WHERE [name] NOT IN ('master', 'tempdb')-- exclude system databases
                AND [state] = 0 -- state must be ONLINE
                AND is_read_only = 0;  -- cannot be read_only
 
        END; 
 
        /* Check to see IF we have indexes in need of defrag; otherwise, re-scan the database(s) */
        IF NOT EXISTS(SELECT Top 1 * FROM dbo.dba_indexDefragStatus WHERE defragDate IS NULL)
            OR @forceRescan = 1
        BEGIN
 
            /* Truncate our list of indexes to prepare for a new scan */
            TRUNCATE TABLE dbo.dba_indexDefragStatus;
 
            IF @debugMode = 1 RAISERROR('Looping through our list of databases and checking for fragmentation...', 0, 42) WITH NOWAIT;
 
            /* Loop through our list of databases */
            WHILE (SELECT COUNT(*) FROM #databaseList WHERE scanStatus = 0) > 0
            BEGIN
 
                SELECT Top 1 @databaseID = databaseID
                FROM #databaseList
                WHERE scanStatus = 0;
 
                SELECT @debugMessage = '  working on ' + DB_NAME(@databaseID) + '...';
 
                IF @debugMode = 1
                    RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
               /* Determine which indexes to defrag using our user-defined parameters */
                INSERT INTO dbo.dba_indexDefragStatus
                (
                      databaseID
                    , databaseName
                    , objectID
                    , indexID
                    , partitionNumber
                    , fragmentation
                    , page_count
                    , range_scan_count
                    , scanDate
                )
                SELECT
                      ps.database_id AS 'databaseID'
                    , QUOTENAME(DB_NAME(ps.database_id)) AS 'databaseName'
                    , ps.[object_id] AS 'objectID'
                    , ps.index_id AS 'indexID'
                    , ps.partition_number AS 'partitionNumber'
                    , SUM(ps.avg_fragmentation_in_percent) AS 'fragmentation'
                    , SUM(ps.page_count) AS 'page_count'
                    , os.range_scan_count
                    , GETDATE() AS 'scanDate'
                FROM sys.dm_db_index_physical_stats(@databaseID, OBJECT_ID(@tableName), NULL , NULL, @scanMode) AS ps
                JOIN sys.dm_db_index_operational_stats(@databaseID, OBJECT_ID(@tableName), NULL , NULL) AS os
                    ON ps.database_id = os.database_id
                    AND ps.[object_id] = os.[object_id]
                    AND ps.index_id = os.index_id
                    AND ps.partition_number = os.partition_number
                WHERE avg_fragmentation_in_percent >= @minFragmentation 
                    AND ps.index_id > 0 -- ignore heaps
                    AND ps.page_count > @minPageCount 
                    AND ps.index_level = 0 -- leaf-level nodes only, supports @scanMode
                GROUP BY ps.database_id 
                    , QUOTENAME(DB_NAME(ps.database_id)) 
                    , ps.[object_id]
                    , ps.index_id 
                    , ps.partition_number 
                    , os.range_scan_count
                OPTION (MAXDOP 2);
 
                /* Do we want to exclude right-most populated partition of our partitioned indexes? */
                IF @excludeMaxPartition = 1
                BEGIN
 
                    SET @excludeMaxPartitionSQL = '
                        SELECT ' + CAST(@databaseID AS VARCHAR(10)) + ' AS [databaseID]
                            , [object_id]
                            , index_id
                            , MAX(partition_number) AS [maxPartition]
                        FROM [' + DB_NAME(@databaseID) + '].sys.partitions
                        WHERE partition_number > 1
                            AND [rows] > 0
                        GROUP BY object_id
                            , index_id;';
 
                    INSERT INTO #maxPartitionList
                    EXECUTE sp_executesql @excludeMaxPartitionSQL;
 
                END;
 
                /* Keep track of which databases have already been scanned */
                UPDATE #databaseList
                SET scanStatus = 1
                WHERE databaseID = @databaseID;
 
            END
 
            /* We don't want to defrag the right-most populated partition, so
               delete any records for partitioned indexes where partition = MAX(partition) */
            IF @excludeMaxPartition = 1
            BEGIN
 
                DELETE ids
                FROM dbo.dba_indexDefragStatus AS ids
                JOIN #maxPartitionList AS mpl
                    ON ids.databaseID = mpl.databaseID
                    AND ids.objectID = mpl.objectID
                    AND ids.indexID = mpl.indexID
                    AND ids.partitionNumber = mpl.maxPartition;
 
            END;
 
            /* Update our exclusion mask for any index that has a restriction on the days it can be defragged */
            UPDATE ids
            SET ids.exclusionMask = ide.exclusionMask
            FROM dbo.dba_indexDefragStatus AS ids
            JOIN dbo.dba_indexDefragExclusion AS ide
                ON ids.databaseID = ide.databaseID
                AND ids.objectID = ide.objectID
                AND ids.indexID = ide.indexID;
 
        END
 
        SELECT @debugMessage = 'Looping through our list... there are ' + CAST(COUNT(*) AS VARCHAR(10)) + ' indexes to defrag!'
        FROM dbo.dba_indexDefragStatus
        WHERE defragDate IS NULL
            AND page_count BETWEEN @minPageCount AND ISNULL(@maxPageCount, page_count);
 
        IF @debugMode = 1 RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
        /* Begin our loop for defragging */
        WHILE (SELECT COUNT(*) 
               FROM dbo.dba_indexDefragStatus 
               WHERE (
                           (@executeSQL = 1 AND defragDate IS NULL) 
                        OR (@executeSQL = 0 AND defragDate IS NULL AND printStatus = 0)
                     )
                AND exclusionMask & POWER(2, DATEPART(weekday, GETDATE())-1) = 0
                AND page_count BETWEEN @minPageCount AND ISNULL(@maxPageCount, page_count)) > 0
        BEGIN
 
            /* Check to see IF we need to exit our loop because of our time limit */        
            IF ISNULL(@enddatetime, GETDATE()) < GETDATE()
            BEGIN
                RAISERROR('Our time limit has been exceeded!', 11, 42) WITH NOWAIT;
            END;
 
            IF @debugMode = 1 RAISERROR('  Picking an index to beat into shape...', 0, 42) WITH NOWAIT;
 
            /* Grab the index with the highest priority, based on the values submitted; 
               Look at the exclusion mask to ensure it can be defragged today */
            SET @getIndexSQL = N'
            SELECT TOP 1 
                  @objectID_Out         = objectID
                , @indexID_Out          = indexID
                , @databaseID_Out       = databaseID
                , @databaseName_Out     = databaseName
                , @fragmentation_Out    = fragmentation
                , @partitionNumber_Out  = partitionNumber
                , @pageCount_Out        = page_count
            FROM dbo.dba_indexDefragStatus
            WHERE defragDate IS NULL ' 
                + CASE WHEN @executeSQL = 0 THEN 'AND printStatus = 0' ELSE '' END + '
                AND exclusionMask & Power(2, DatePart(weekday, GETDATE())-1) = 0
                AND page_count BETWEEN @p_minPageCount AND ISNULL(@p_maxPageCount, page_count)
            ORDER BY + ' + @defragOrderColumn + ' ' + @defragSortOrder;
 
            SET @getIndexSQL_Param = N'@objectID_Out        INT OUTPUT
                                     , @indexID_Out         INT OUTPUT
                                     , @databaseID_Out      INT OUTPUT
                                     , @databaseName_Out    NVARCHAR(128) OUTPUT
                                     , @fragmentation_Out   INT OUTPUT
                                     , @partitionNumber_Out INT OUTPUT
                                     , @pageCount_Out       INT OUTPUT
                                     , @p_minPageCount      INT
                                     , @p_maxPageCount      INT';
 
            EXECUTE sp_executesql @getIndexSQL
                , @getIndexSQL_Param
                , @p_minPageCount       = @minPageCount
                , @p_maxPageCount       = @maxPageCount
                , @objectID_Out         = @objectID         OUTPUT
                , @indexID_Out          = @indexID          OUTPUT
                , @databaseID_Out       = @databaseID       OUTPUT
                , @databaseName_Out     = @databaseName     OUTPUT
                , @fragmentation_Out    = @fragmentation    OUTPUT
                , @partitionNumber_Out  = @partitionNumber  OUTPUT
                , @pageCount_Out        = @pageCount        OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Looking up the specifics for our index...', 0, 42) WITH NOWAIT;
 
            /* Look up index information */
            SELECT @updateSQL = N'UPDATE ids
                SET schemaName = QUOTENAME(s.name)
                    , objectName = QUOTENAME(o.name)
                    , indexName = QUOTENAME(i.name)
                FROM dbo.dba_indexDefragStatus AS ids
                INNER JOIN ' + @databaseName + '.sys.objects AS o
                    ON ids.objectID = o.[object_id]
                INNER JOIN ' + @databaseName + '.sys.indexes AS i
                    ON o.[object_id] = i.[object_id]
                    AND ids.indexID = i.index_id
                INNER JOIN ' + @databaseName + '.sys.schemas AS s
                    ON o.schema_id = s.schema_id
                WHERE o.[object_id] = ' + CAST(@objectID AS VARCHAR(10)) + '
                    AND i.index_id = ' + CAST(@indexID AS VARCHAR(10)) + '
                    AND i.type > 0
                    AND ids.databaseID = ' + CAST(@databaseID AS VARCHAR(10));
 
            EXECUTE sp_executesql @updateSQL;
 
            /* Grab our object names */
            SELECT @objectName  = objectName
                , @schemaName   = schemaName
                , @indexName    = indexName
            FROM dbo.dba_indexDefragStatus
            WHERE objectID = @objectID
                AND indexID = @indexID
                AND databaseID = @databaseID;
 
            IF @debugMode = 1 RAISERROR('  Grabbing the partition COUNT...', 0, 42) WITH NOWAIT;
 
            /* Determine if the index is partitioned */
            SELECT @partitionSQL = 'SELECT @partitionCount_OUT = COUNT(*)
                                        FROM ' + @databaseName + '.sys.partitions
                                        WHERE object_id = ' + CAST(@objectID AS VARCHAR(10)) + '
                                            AND index_id = ' + CAST(@indexID AS VARCHAR(10)) + ';'
                , @partitionSQL_Param = '@partitionCount_OUT INT OUTPUT';
 
            EXECUTE sp_executesql @partitionSQL, @partitionSQL_Param, @partitionCount_OUT = @partitionCount OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Seeing IF there are any LOBs to be handled...', 0, 42) WITH NOWAIT;
 
            /* Determine if the table contains LOBs */
            SELECT @LOB_SQL = ' SELECT @containsLOB_OUT = COUNT(*)
                                FROM ' + @databaseName + '.sys.columns WITH (NoLock) 
                                WHERE [object_id] = ' + CAST(@objectID AS VARCHAR(10)) + '
                                   AND (system_type_id IN (34, 35, 99)
                                            OR max_length = -1);'
                                /*  system_type_id --> 34 = IMAGE, 35 = TEXT, 99 = NTEXT
                                    max_length = -1 --> VARBINARY(MAX), VARCHAR(MAX), NVARCHAR(MAX), XML */
                    , @LOB_SQL_Param = '@containsLOB_OUT INT OUTPUT';
 
            EXECUTE sp_executesql @LOB_SQL, @LOB_SQL_Param, @containsLOB_OUT = @containsLOB OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Checking for indexes that do NOT allow page locks...', 0, 42) WITH NOWAIT;
 
            /* Determine if page locks are allowed; for those indexes, we need to always REBUILD */
            SELECT @allowPageLockSQL = 'SELECT @allowPageLocks_OUT = COUNT(*)
                                        FROM ' + @databaseName + '.sys.indexes
                                        WHERE object_id = ' + CAST(@objectID AS VARCHAR(10)) + '
                                            AND index_id = ' + CAST(@indexID AS VARCHAR(10)) + '
                                            AND Allow_Page_Locks = 0;'
                , @allowPageLockSQL_Param = '@allowPageLocks_OUT INT OUTPUT';
 
            EXECUTE sp_executesql @allowPageLockSQL, @allowPageLockSQL_Param, @allowPageLocks_OUT = @allowPageLocks OUTPUT;
 
            IF @debugMode = 1 RAISERROR('  Building our SQL statements...', 0, 42) WITH NOWAIT;
 
            /* IF there's not a lot of fragmentation, or if we have a LOB, we should REORGANIZE */
            IF (@fragmentation < @rebuildThreshold OR @containsLOB >= 1 OR @partitionCount > 1)
                AND @allowPageLocks = 0
            BEGIN
 
                SET @sqlCommand = N'ALTER INDEX ' + @indexName + N' ON ' + @databaseName + N'.' 
                                    + @schemaName + N'.' + @objectName + N' REORGANIZE';
 
                /* If our index is partitioned, we should always REORGANIZE */
                IF @partitionCount > 1
                    SET @sqlCommand = @sqlCommand + N' PARTITION = ' 
                                    + CAST(@partitionNumber AS NVARCHAR(10));
 
            END
            /* If the index is heavily fragmented and doesn't contain any partitions or LOB's, 
               or if the index does not allow page locks, REBUILD it */
            ELSE IF (@fragmentation >= @rebuildThreshold OR @allowPageLocks <> 0)
                AND ISNULL(@containsLOB, 0) != 1 AND @partitionCount <= 1
            BEGIN
 
                /* Set online REBUILD options; requires Enterprise Edition */
                IF @onlineRebuild = 1 AND @editionCheck = 1 
                    SET @rebuildCommand = N' REBUILD WITH (ONLINE = ON';
                ELSE
                    SET @rebuildCommand = N' REBUILD WITH (ONLINE = Off';
 
                /* Set sort operation preferences */
                IF @sortInTempDB = 1 
                    SET @rebuildCommand = @rebuildCommand + N', SORT_IN_TEMPDB = ON';
                ELSE
                    SET @rebuildCommand = @rebuildCommand + N', SORT_IN_TEMPDB = Off';
 
                /* Set processor restriction options; requires Enterprise Edition */
                IF @maxDopRestriction IS NOT NULL AND @editionCheck = 1
                    SET @rebuildCommand = @rebuildCommand + N', MAXDOP = ' + CAST(@maxDopRestriction AS VARCHAR(2)) + N')';
                ELSE
                    SET @rebuildCommand = @rebuildCommand + N')';
 
                SET @sqlCommand = N'ALTER INDEX ' + @indexName + N' ON ' + @databaseName + N'.'
                                + @schemaName + N'.' + @objectName + @rebuildCommand;
 
            END
            ELSE
                /* Print an error message if any indexes happen to not meet the criteria above */
                IF @printCommands = 1 OR @debugMode = 1
                    RAISERROR('We are unable to defrag this index.', 0, 42) WITH NOWAIT;
 
            /* Are we executing the SQL?  IF so, do it */
            IF @executeSQL = 1
            BEGIN
 
                SET @debugMessage = 'Executing: ' + @sqlCommand;
 
                /* Print the commands we're executing if specified to do so */
                IF @printCommands = 1 OR @debugMode = 1
                    RAISERROR(@debugMessage, 0, 42) WITH NOWAIT;
 
                /* Grab the time for logging purposes */
                SET @datetimestart  = GETDATE();
 
                /* Log our actions */
                INSERT INTO dbo.dba_indexDefragLog
                (
                      databaseID
                    , databaseName
                    , objectID
                    , objectName
                    , indexID
                    , indexName
                    , partitionNumber
                    , fragmentation
                    , page_count
                     , dateTimeStart
                    , sqlStatement
                )
                SELECT
                      @databaseID
                    , @databaseName
                    , @objectID
                    , @objectName
                    , @indexID
                    , @indexName
                    , @partitionNumber
                    , @fragmentation
                    , @pageCount
                    , @datetimestart
                    , @sqlCommand;
 
                SET @indexDefrag_id = SCOPE_IDENTITY();
 
                /* Wrap our execution attempt in a TRY/CATCH and log any errors that occur */
                BEGIN TRY
 
                    /* Execute our defrag! */
                    EXECUTE sp_executesql @sqlCommand;
                    SET @dateTimeEnd = GETDATE();
 
                    /* Update our log with our completion time */
                    UPDATE dbo.dba_indexDefragLog
                    SET dateTimeEnd = @dateTimeEnd
                        , durationSeconds = DATEDIFF(second, @datetimestart, @dateTimeEnd)
                    WHERE indexDefrag_id = @indexDefrag_id;
 
                END TRY
                BEGIN CATCH
 
                    /* Update our log with our error message */
                    UPDATE dbo.dba_indexDefragLog
                    SET dateTimeEnd = GETDATE()
                        , durationSeconds = -1
                        , errorMessage = ERROR_MESSAGE()
                    WHERE indexDefrag_id = @indexDefrag_id;
 
                    IF @debugMode = 1 
                        RAISERROR('  An error has occurred executing this command! Please review the dba_indexDefragLog table for details.'
                            , 0, 42) WITH NOWAIT;
 
                END CATCH
 
                /* Just a little breather for the server */
                WAITFOR DELAY @defragDelay;
 
                UPDATE dbo.dba_indexDefragStatus
                SET defragDate = GETDATE()
                    , printStatus = 1
                WHERE databaseID       = @databaseID
                  AND objectID         = @objectID
                  AND indexID          = @indexID
                  AND partitionNumber  = @partitionNumber;
 
            END
            ELSE
            /* Looks like we're not executing, just printing the commands */
            BEGIN
                IF @debugMode = 1 RAISERROR('  Printing SQL statements...', 0, 42) WITH NOWAIT;
 
                IF @printCommands = 1 OR @debugMode = 1 
                    PRINT ISNULL(@sqlCommand, 'error!');
 
                UPDATE dbo.dba_indexDefragStatus
                SET printStatus = 1
                WHERE databaseID       = @databaseID
                  AND objectID         = @objectID
                  AND indexID          = @indexID
                  AND partitionNumber  = @partitionNumber;
            END
 
        END
 
        /* Do we want to output our fragmentation results? */
        IF @printFragmentation = 1
        BEGIN
 
            IF @debugMode = 1 RAISERROR('  Displaying a summary of our action...', 0, 42) WITH NOWAIT;
 
            SELECT databaseID
                , databaseName
                , objectID
                , objectName
                , indexID
                , indexName
                , partitionNumber
                , fragmentation
                , page_count
                , range_scan_count
            FROM dbo.dba_indexDefragStatus
            WHERE defragDate >= @startdatetime
            ORDER BY defragDate;
 
        END;
 
    END TRY
    BEGIN CATCH
 
        SET @debugMessage = ERROR_MESSAGE() + ' (Line Number: ' + CAST(ERROR_LINE() AS VARCHAR(10)) + ')';
        PRINT @debugMessage;
 
    END CATCH;
 
    /* When everything is said and done, make sure to get rid of our temp table */
    DROP TABLE #databaseList;
    DROP TABLE #processor;
    DROP TABLE #maxPartitionList;
 
    IF @debugMode = 1 RAISERROR('DONE!  Thank you for taking care of your indexes!  :)', 0, 42) WITH NOWAIT;
 
    SET NOCOUNT OFF;
    RETURN 0;
END
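
Once a run completes, the dba_indexDefragLog table is the quickest way to see what was actually done and how long each index took. For example:

/* Review the most recent defrag activity */
SELECT TOP 50
      databaseName
    , objectName
    , indexName
    , partitionNumber
    , fragmentation
    , page_count
    , dateTimeStart
    , durationSeconds
    , errorMessage
FROM dbo.dba_indexDefragLog
ORDER BY dateTimeStart DESC;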

T-SQL Script for Estimating Compression Savings

A couple of weeks ago, I was working on a Microsoft PDW proof-of-concept (POC) and had to measure compression ratios. In order to do this, I fired up SSMS and wrote a little script. The script will iterate through all tables in a database and run the sp_estimate_data_compression_savings stored procedure. This will only work in SQL Server 2008+ versions running Enterprise edition.

If you’re not familiar with this stored procedure, it basically tells you what effect PAGE or ROW compression will have on your table/index/partition, etc. There are pros and cons to compression. What I’ve tended to see is that compression has very positive results on space, IO, and query duration, with a negative impact on CPU and write speed. Like most things, it’s a trade-off and the results will vary by environment, so I recommend you do some testing before you apply compression to all tables. I tend to use compression mostly for my historical tables and partitions and leave my recent data uncompressed. Back to the script: I use this stored procedure to estimate the impact of compression and to decide whether to use PAGE or ROW compression. PAGE is a higher level of compression, which means it’s also more expensive in terms of CPU, so if the difference between the two results is negligible, I’m more apt to just use ROW compression.
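
Once you’ve made a decision, actually applying compression is just a rebuild option. For example, to compress a single historical partition — the table, index, and partition number here are only placeholders:

/* Apply PAGE compression to one partition of an index (names are examples) */
ALTER INDEX IX_salesHistory_1
    ON dbo.salesHistory
    REBUILD PARTITION = 3
    WITH (DATA_COMPRESSION = PAGE);
 
/* Or apply ROW compression to an entire table */
ALTER TABLE dbo.salesHistory
    REBUILD WITH (DATA_COMPRESSION = ROW);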

Now that my impromptu compression discussion is done, let’s get to the actual script. One final word of caution, however. This is an IO intensive process, so you may want to run it after peak business hours.

SET NOCOUNT ON;
 
DECLARE @printOnly  BIT = 0 -- change to 1 if you don't want to execute, just print commands
    , @tableName    VARCHAR(256)
    , @schemaName   VARCHAR(100)
    , @sqlStatement NVARCHAR(1000)
    , @tableCount   INT
    , @statusMsg    VARCHAR(1000);
 
IF OBJECT_ID('tempdb..#tables') IS NOT NULL
    DROP TABLE #tables;
 
CREATE TABLE #tables
(
      database_name     sysname
    , schemaName        sysname NULL
    , tableName         sysname NULL
    , processed         bit
);
 
IF EXISTS(SELECT * FROM tempdb.sys.tables WHERE name LIKE '%#compression%')
    DROP TABLE #compressionResults;
 
IF NOT EXISTS(SELECT * FROM tempdb.sys.tables WHERE name LIKE '%#compression%')
BEGIN 
 
    CREATE TABLE #compressionResults
    (
          objectName                    varchar(100)
        , schemaName                    varchar(50)
        , index_id                      int
        , partition_number              int
        , size_current_compression      bigint
        , size_requested_compression    bigint
        , sample_current_compression    bigint
        , sample_requested_compression  bigint
    );
 
END;
 
INSERT INTO #tables
SELECT DB_NAME()
    , SCHEMA_NAME([schema_id])
    , name
    , 0 -- unprocessed
FROM sys.tables;
 
SELECT @tableCount = COUNT(*) FROM #tables;
 
WHILE EXISTS(SELECT * FROM #tables WHERE processed = 0)
BEGIN
 
    SELECT TOP 1 @tableName = tableName
        , @schemaName = schemaName
    FROM #tables WHERE processed = 0;
 
    SELECT @statusMsg = 'Working on ' + CAST(((@tableCount - COUNT(*)) + 1) AS VARCHAR(10)) 
        + ' of ' + CAST(@tableCount AS VARCHAR(10))
    FROM #tables
    WHERE processed = 0;
 
    RAISERROR(@statusMsg, 0, 42) WITH NOWAIT;
 
    SET @sqlStatement = 'EXECUTE sp_estimate_data_compression_savings ''' 
                        + @schemaName + ''', ''' + @tableName + ''', NULL, NULL, ''PAGE'';' -- ROW, PAGE, or NONE
 
    IF @printOnly = 1
    BEGIN 
 
        SELECT @sqlStatement;
 
    END
    ELSE
    BEGIN
 
        INSERT INTO #compressionResults
        EXECUTE sp_executesql @sqlStatement;
 
    END;
 
    UPDATE #tables
    SET processed = 1
    WHERE tableName = @tableName
        AND schemaName = @schemaName;
 
END;
 
SELECT * 
FROM #compressionResults;

String Parsing Function

This handy little script parses a string and returns the results as a table. I know there are a ton of string parsing functions out there, but I thought I’d add to the list. ;)

The basic logic of it (using a CTE) is derived from a forum post I found years ago. The table-valued UDF, the delimiter parameter, etc. are all things I added. So while I can’t claim complete credit, I still thought it’d be worthwhile to share. Plus, it will be used in another script I’ll be posting soon. :)

/* Let's create our parsing function... */
CREATE FUNCTION dbo.dba_parseString_udf
(
          @stringToParse VARCHAR(8000)  
        , @delimiter     CHAR(1)
)
RETURNS @parsedString TABLE (stringValue VARCHAR(128))
AS
/*********************************************************************************
    Name:       dba_parseString_udf
 
    Author:     Michelle Ufford, http://sqlfool.com
 
    Purpose:    This function parses string input using a variable delimiter.
 
    Notes:      Two common delimiter values are space (' ') and comma (',')
 
    Date        Initials    Description
    ----------------------------------------------------------------------------
    2011-05-20  MFU         Initial Release
*********************************************************************************
Usage: 		
    SELECT *
	FROM dba_parseString_udf(<string>, <delimiter>);
 
Test Cases:
 
    1.  multiple strings separated by space
        SELECT * FROM dbo.dba_parseString_udf('  aaa  bbb  ccc ', ' ');
 
    2.  multiple strings separated by comma
        SELECT * FROM dbo.dba_parseString_udf(',aaa,bbb,,,ccc,', ',');
*********************************************************************************/
BEGIN
 
    /* Declare variables */
    DECLARE @trimmedString  VARCHAR(8000);
 
    /* We need to trim our string input in case the user entered extra spaces */
    SET @trimmedString = LTRIM(RTRIM(@stringToParse));
 
    /* Let's create a recursive CTE to break down our string for us */
    WITH parseCTE (StartPos, EndPos)
    AS
    (
        SELECT 1 AS StartPos
            , CHARINDEX(@delimiter, @trimmedString + @delimiter) AS EndPos
        UNION ALL
        SELECT EndPos + 1 AS StartPos
            , CharIndex(@delimiter, @trimmedString + @delimiter , EndPos + 1) AS EndPos
        FROM parseCTE
        WHERE CHARINDEX(@delimiter, @trimmedString + @delimiter, EndPos + 1) <> 0
    )
 
    /* Let's take the results and stick it in a table */  
    INSERT INTO @parsedString
    SELECT SUBSTRING(@trimmedString, StartPos, EndPos - StartPos)
    FROM parseCTE
    WHERE LEN(LTRIM(RTRIM(SUBSTRING(@trimmedString, StartPos, EndPos - StartPos)))) > 0
    OPTION (MaxRecursion 8000);
 
    RETURN;   
END
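As a quick sanity check, the second test case from the header comments should return only the three non-empty values, since the LEN() filter in the final SELECT discards the empty elements created by the extra delimiters:

SELECT stringValue
FROM dbo.dba_parseString_udf(',aaa,bbb,,,ccc,', ',');
-- expected: aaa, bbb, ccc (one row each)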

11-Word Warning

Tom LaRock posted a new Meme Monday challenge: “Write a SQL blog post in 11 words or less.”

Donabel Santos tagged me, and I couldn’t resist the challenge. So here’s my entry:

Hasty coding, error prone. No backups, data loss. Company for sale.

This was inspired by the recent spate of stories I’ve heard about companies that have failed because they did not properly manage their data and databases.

I don’t know who’s been tagged or not, so I’m tagging some of my SQL Saturday Chicago friends:

Identity Columns: Are You Nearing The Limits?

I use identity columns frequently. After all, identity columns make great clustering keys. But it’s important when using identity columns to check on the number of values you have left before you reach the limit of your data type. An identity column has a fixed range of values it can use, based upon whether you specified tinyint, smallint, int, or bigint when you defined the column. If you reach this limit, your inserts will blow up and cause a Chernobyl-like SQL meltdown (or, more accurately, they’ll simply begin to fail). I just finished an audit of my tables and thought I’d share the script. I would like to warn that this script is *not* perfect… namely, it doesn’t handle negative integer values very elegantly. It also doesn’t know if you started your seed at zero, approached your max positive limit, then reseeded to the negative limit (see my “quick and dirty fix” tip at the end of this article).

/* Define how close we are to the value limit
   before we start throwing up the red flag.
   The higher the value, the closer to the limit. */
Declare @threshold decimal(3,2) = .85;
 
/* Create a temp table */
Create Table #identityStatus
(
      database_name     varchar(128)
    , table_name        varchar(128)
    , column_name       varchar(128)
    , data_type         varchar(128)
    , last_value        bigint
    , max_value         bigint
);
 
/* Use an undocumented command to run a SQL statement
   in each database on a server */
Execute sp_msforeachdb '
    Use [?];
    Insert Into #identityStatus
    Select ''?'' As [database_name]
        , Object_Name(id.object_id, DB_ID(''?'')) As [table_name]
        , id.name As [column_name]
        , t.name As [data_type]
        , Cast(id.last_value As bigint) As [last_value]
        , Case 
            When t.name = ''tinyint''   Then 255 
            When t.name = ''smallint''  Then 32767 
            When t.name = ''int''       Then 2147483647 
            When t.name = ''bigint''    Then 9223372036854775807
          End As [max_value]
    From sys.identity_columns As id
    Join sys.types As t
        On id.system_type_id = t.system_type_id
    Where id.last_value Is Not Null';
 
/* Retrieve our results and format it all prettily */
Select database_name
    , table_name
    , column_name
    , data_type
    , last_value
    , Case 
        When last_value < 0 Then 100
        Else (1 - Cast(last_value As float(4)) / max_value) * 100 
      End As [percentLeft]
    , Case 
        When Cast(last_value As float(4)) / max_value >= @threshold
            Then 'warning: approaching max limit'
        Else 'okay'
        End As [id_status]
From #identityStatus
Order By percentLeft;
 
/* Clean up after ourselves */
Drop Table #identityStatus;

If you find yourself quickly approaching your max limit and need to implement a quick and dirty fix, you can reseed your identity column. Of course, this only works if you started at zero instead of the actual lower, negative limit.
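For what it’s worth, here’s a hedged sketch of that quick and dirty fix, using a hypothetical dbo.myBigTable whose int identity column was seeded at 1 and is closing in on 2,147,483,647. Reseeding to the bottom of the range buys you roughly another 2.1 billion values, but make sure nothing downstream assumes positive, ever-increasing IDs before you do it:

/* Hypothetical example: reseed an int identity column to the bottom of its range */
DBCC CHECKIDENT ('dbo.myBigTable', RESEED, -2147483648);
/* With an increment of 1 and existing rows, the next insert will use -2147483647 */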

Effective Clustered Indexing

My first Simple-Talk article was published yesterday! I’m pretty excited about it and wanted to share the link. In the article, I give an overview of how clustered and nonclustered indexes work, and I demonstrate why clustered index best practices — narrow, unique, static, and ever-increasing — are important design considerations.

You can find the article on Simple-Talk’s website at:
http://www.simple-talk.com/sql/learn-sql-server/effective-clustered-indexes/

Please let me know your thoughts! :)

SQL Agent Job Script

This blog post is brought to you by the awesomeness that is the SQL Server Twitter community. Contributions were made by several awesome Tweeples, including Denny Cherry, Kendra Little, Ken Simmons, and Magnus Ahlkvist, among others.

What started this is something you’ve all probably run into at one time or another. We’re currently having some resource contention on our server when a ton of processes kick off and overlap in the morning. Now, I have a script that I’ve used in the past for monitoring SQL Agent jobs, but this time I wanted to add some historical run-time information. I know the sysjobhistory table contains the information I need, but it has some… let’s say, creative ways of storing the data. Opinions vary on the reasons why.

Regardless of the reason, I needed to do some conversions. Denny shared the msdb.dbo.agent_datetime function, which converts run_date and run_time into an actual datetime value; I have to say, this certainly cleans up the code quite a bit! Then Magnus shared a method to convert run_duration into seconds, which he modified from a script on SQLServerCentral. I was able to use these two tidbits to update my old script. You can now run this script to get back a list of all enabled jobs, a script that will disable them, a breakdown of each schedule, and a historical run-time average.
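If you just want those two conversions in isolation, here’s a minimal sketch you can run on its own before digging into the full script below; it touches only msdb.dbo.sysjobs, msdb.dbo.sysjobhistory, and the agent_datetime function mentioned above:

Select j.name As 'jobName'
    , msdb.dbo.agent_datetime(h.run_date, h.run_time) As 'runDateTime'
    , (h.run_duration/10000) * 3600 + (h.run_duration/100%100) * 60 + h.run_duration%100 As 'runSeconds' -- convert HHMMSS to seconds
From msdb.dbo.sysjobs As j
Join msdb.dbo.sysjobhistory As h
    On j.job_id = h.job_id
Where h.step_id = 0; -- job outcome rows only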

/*  How many days do you want to include in your run-time average?
    Recent values tend to be more useful. */
Declare @daysToAverage smallint = 30;
 
Declare @avgRunTime Table
(
      job_id      uniqueidentifier
    , avgRunTime  int
);
 
/* We need to parse the schedule into something we can understand */
Declare @weekDay Table (
      mask       int
    , maskValue  varchar(32)
);
 
Insert Into @weekDay
Select 1, 'Sunday'  Union All
Select 2, 'Monday'  Union All
Select 4, 'Tuesday'  Union All
Select 8, 'Wednesday'  Union All
Select 16, 'Thursday'  Union All
Select 32, 'Friday'  Union All
Select 64, 'Saturday';
 
/* First, let's get our run-time average */
Insert Into @avgRunTime
Select job_id
    , Avg((run_duration/10000) * 3600 + (run_duration/100%100)*60 + run_duration%100) As 'avgRunTime' /* convert HHMMSS to seconds */
From msdb.dbo.sysjobhistory
Where step_id = 0 -- only grab our total run-time
    And run_status = 1 -- only grab successful executions
    And msdb.dbo.agent_datetime(run_date, run_time) >= DateAdd(day, -@daysToAverage, GetDate())
Group By job_id;
 
/* Now let's get our schedule information */
With myCTE
As(
    Select sched.name As 'scheduleName'
        , sched.schedule_id
        , jobsched.job_id
        , Case When sched.freq_type = 1 Then 'Once' 
            When sched.freq_type = 4 
                And sched.freq_interval = 1 
                    Then 'Daily'
            When sched.freq_type = 4 
                Then 'Every ' + Cast(sched.freq_interval As varchar(5)) + ' days'
            When sched.freq_type = 8 Then 
                Replace( Replace( Replace(( 
                    Select maskValue 
                    From @weekDay As x 
                    Where sched.freq_interval & x.mask <> 0 
                    Order By mask For XML Raw)
                , '"/><row maskValue="', ', '), '<row maskValue="', ''), '"/>', '') 
                + Case When sched.freq_recurrence_factor <> 0 
                        And sched.freq_recurrence_factor = 1 
                            Then '; weekly' 
                    When sched.freq_recurrence_factor <> 0 Then '; every ' 
                + Cast(sched.freq_recurrence_factor As varchar(10)) + ' weeks' End
            When sched.freq_type = 16 Then 'On day ' 
                + Cast(sched.freq_interval As varchar(10)) + ' of every '
                + Cast(sched.freq_recurrence_factor As varchar(10)) + ' months' 
            When sched.freq_type = 32 Then 
                Case When sched.freq_relative_interval = 1 Then 'First'
                    When sched.freq_relative_interval = 2 Then 'Second'
                    When sched.freq_relative_interval = 4 Then 'Third'
                    When sched.freq_relative_interval = 8 Then 'Fourth'
                    When sched.freq_relative_interval = 16 Then 'Last'
                End + 
                Case When sched.freq_interval = 1 Then ' Sunday'
                    When sched.freq_interval = 2 Then ' Monday'
                    When sched.freq_interval = 3 Then ' Tuesday'
                    When sched.freq_interval = 4 Then ' Wednesday'
                    When sched.freq_interval = 5 Then ' Thursday'
                    When sched.freq_interval = 6 Then ' Friday'
                    When sched.freq_interval = 7 Then ' Saturday'
                    When sched.freq_interval = 8 Then ' Day'
                    When sched.freq_interval = 9 Then ' Weekday'
                    When sched.freq_interval = 10 Then ' Weekend'
                End
                + Case When sched.freq_recurrence_factor <> 0 
                        And sched.freq_recurrence_factor = 1 Then '; monthly'
                    When sched.freq_recurrence_factor <> 0 Then '; every ' 
                + Cast(sched.freq_recurrence_factor As varchar(10)) + ' months' End
            When sched.freq_type = 64 Then 'StartUp'
            When sched.freq_type = 128 Then 'Idle'
          End As 'frequency'
        , IsNull('Every ' + Cast(sched.freq_subday_interval As varchar(10)) + 
            Case When sched.freq_subday_type = 2 Then ' seconds'
                When sched.freq_subday_type = 4 Then ' minutes'
                When sched.freq_subday_type = 8 Then ' hours'
            End, 'Once') As 'subFrequency'
        , Replicate('0', 6 - Len(sched.active_start_time)) 
            + Cast(sched.active_start_time As varchar(6)) As 'startTime'
        , Replicate('0', 6 - Len(sched.active_end_time)) 
            + Cast(sched.active_end_time As varchar(6)) As 'endTime'
        , Replicate('0', 6 - Len(jobsched.next_run_time)) 
            + Cast(jobsched.next_run_time As varchar(6)) As 'nextRunTime'
        , Cast(jobsched.next_run_date As char(8)) As 'nextRunDate'
    From msdb.dbo.sysschedules As sched
    Join msdb.dbo.sysjobschedules As jobsched
        On sched.schedule_id = jobsched.schedule_id
    Where sched.enabled = 1
)
 
/* Finally, let's look at our actual jobs and tie it all together */
Select job.name As 'jobName'
    , sched.scheduleName
    , sched.frequency
    , sched.subFrequency
    , SubString(sched.startTime, 1, 2) + ':' 
        + SubString(sched.startTime, 3, 2) + ' - ' 
        + SubString(sched.endTime, 1, 2) + ':' 
        + SubString(sched.endTime, 3, 2) 
        As 'scheduleTime' -- HH:MM
    , SubString(sched.nextRunDate, 1, 4) + '/' 
        + SubString(sched.nextRunDate, 5, 2) + '/' 
        + SubString(sched.nextRunDate, 7, 2) + ' ' 
        + SubString(sched.nextRunTime, 1, 2) + ':' 
        + SubString(sched.nextRunTime, 3, 2) As 'nextRunDate'
      /* Note: the sysjobschedules table refreshes every 20 min, 
        so nextRunDate may be out of date */
    , 'Execute msdb.dbo.sp_update_job @job_id = ''' 
        + Cast(job.job_id As char(36)) + ''', @enabled = 0;' As 'disableScript'
    , art.avgRunTime As 'avgRunTime_inSec' -- in seconds
    , (art.avgRunTime / 60) As 'avgRunTime_inMin' -- convert to minutes
From msdb.dbo.sysjobs As job
Join myCTE As sched
    On job.job_id = sched.job_id
Left Join @avgRunTime As art
    On job.job_id = art.job_id
Where job.enabled = 1 -- do not display disabled jobs
Order By nextRunDate;

If this doesn’t do exactly what you want, check out SQLJobVis, which Ken recommended. It’s a free tool that helps you visualize SQL Agent job history.

Disposable Indexes

Today I had to run an ad hoc query on an 8.5 billion row table. The table had a dozen columns of a variety of data types and was clustered on a bigint identity. There were no other indexes on the table. My query involved a join to a smaller table with a date range restriction. Without an adequate index to use, SQL Server was going to be forced to scan this 8.5 billion row table. Now, I don’t have much patience for waiting for long running queries. I want to run the ad hoc, e-mail the results, and forget about it. But short of adding a nonclustered index, which would take a very long time to build and probably require additional space requisitioned from the SAN team, what could I do? Enter disposable indexes. Now, you might be asking yourself, “What the frilly heck does she mean by a disposable index? Is that new in Denali?” No, dear reader. I am actually referring to filtered indexes, which are available in SQL Server 2008 and 2008 R2. I call them “disposable” because I create them to significantly speed up ad hoc queries, then I drop them when I’m done.

Here, allow me to demonstrate using the AdventureWorks2008R2 database. Although the tables are smaller, this query is very similar in structure to what I needed to run today.

Select Count(Distinct sod.SalesOrderID) As 'distinctCount'
From AdventureWorks2008R2.Sales.SalesOrderDetail As sod
Join AdventureWorks2008R2.Production.Product As p
    On sod.ProductID = p.ProductID
Where sod.ModifiedDate Between '2008-01-01' And '2008-07-31'
    And p.MakeFlag = 0;

Now, let’s take a look at the type of indexes we currently have available:

Select name, has_filter, filter_definition
From sys.indexes
Where object_id = object_id('Sales.SalesOrderDetail');
name                                                    has_filter filter_definition
-----------------------------------------------------------------------------------------
PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID     0          NULL
AK_SalesOrderDetail_rowguid                             0          NULL
IX_SalesOrderDetail_ProductID                           0          NULL
 
(3 row(s) affected)

We need an index on ModifiedDate and ProductID, which it doesn’t look like we currently have. Without this, we’re going to end up scanning on the clustered index. That means SQL Server will have to evaluate every single row in the table to see if the row matches the criteria of our query. Not pretty, and certainly not fast. So instead, let’s create a filtered index on date. We can greatly cut down the time it takes to create that filtered index by doing a little investigating upfront and finding a range of clustering key values that will cover the query. Doing this allows SQL Server to seek on the clustered index, greatly reducing the number of reads necessary to create our filtered index. So let’s see this in action. First, let’s find out the current max value of the table:

Select Max(SalesOrderDetailID) As 'maxID'
From AdventureWorks2008R2.Sales.SalesOrderDetail;
maxID
-----------
121317

Now we get to do a little guessing. Let’s go back and see what date we get if we look at half of the records:

Select SalesOrderDetailID, ModifiedDate
From AdventureWorks2008R2.Sales.SalesOrderDetail
Where SalesOrderDetailID = (121317/2);
SalesOrderDetailID ModifiedDate
------------------ -----------------------
60658              2007-11-01 00:00:00.000

Okay, SalesOrderDetailID 60658 gets us back to 11/1/2007. That’s a little too far. Let’s see how a SalesOrderDetailID value of 75000 does…

Select SalesOrderDetailID, ModifiedDate
From AdventureWorks2008R2.Sales.SalesOrderDetail
Where SalesOrderDetailID = 75000;
SalesOrderDetailID ModifiedDate
------------------ -----------------------
75000              2007-12-27 00:00:00.000

Okay, SalesOrderDetailID 75000 takes us back to 12/27/2007. That’s close enough to 1/1/2008 for my purposes. Of course, depending on the size of the table, in real life it may make sense to try to get closer to the value you’re looking for. But for now, this will do. And because we’re looking for data through the “current date” (7/31/2008 in the AdventureWorks2008R2 database), we already know our outer limit is 121317.

Now let’s take these ranges and create a filtered index that will cover our query:

Create Nonclustered Index IX_SalesOrderDetail_filtered
    On Sales.SalesOrderDetail(ModifiedDate, ProductID)
    Include (SalesOrderID)
    Where SalesOrderDetailID >= 75000
      And SalesOrderDetailID <  121317;

By having this range identified, SQL Server can perform a seek on the clustered index to create the nonclustered index on just the subset of records that you need for your query. Remember that 8.5 billion row table I mentioned earlier? I was able to create a filtered index that covered my query in 10 seconds. Yes, that’s right… 10 SECONDS.

The last thing we need to do is include the filtered index’s predicate (the SalesOrderDetailID range) in our ad hoc query’s WHERE clause so that the filtered index can actually be used. It also doesn’t hurt to explicitly tell SQL Server to use the filtered index if you’re absolutely sure it’s the best index for the job.

Select Count(Distinct sod.SalesOrderID) As 'distinctCount'
From AdventureWorks2008R2.Sales.SalesOrderDetail As sod With (Index(IX_SalesOrderDetail_filtered))
Join AdventureWorks2008R2.Production.Product As p
    On sod.ProductID = p.ProductID
Where sod.ModifiedDate Between '2008-01-01' And '2008-07-31'
    And p.MakeFlag = 0
    And sod.SalesOrderDetailID >= 75000
    And sod.SalesOrderDetailID <  121317;

That’s all there is to it. Using this method, I was able to complete my ad hoc request in 40 seconds: 10 seconds to create the filtered index and 30 seconds to actually execute the ad hoc query. Of course, it also took a couple of minutes to write the query, look at existing indexes, and search for the correct identity values. All in all, from the time I received the request to the time I sent the e-mail was about 5 minutes. All because of disposable filtered indexes. How’s that for some SQL #awesomesauce? :)
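And because the whole point is that the index is disposable, cleanup is a one-liner once the results have been e-mailed (this drops the demo index created above):

Drop Index IX_SalesOrderDetail_filtered
    On Sales.SalesOrderDetail;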

Yet Another PASS Summit Recap & Thoughts on PDW

The SQL blogosphere has been lit up with PASS Summit recaps.

I debated about whether or not to write my own post, until I remembered that this blog serves as a mini-journal for me too. I have a notoriously poor memory–my husband likes to say that my CPU and memory are good, but I must have an unusual clustering strategy–so maybe this blog post will be a good pointer for me when I start prepping for next year’s Summit. ;)

This was definitely the best PASS Summit conference ever. While there will always be opportunities to do things better–improvement is a never-ending process–it was clear that the organizers of this event listened to the feedback they had received the previous year. One of the best changes? Backpacks. These were very useful, as evidenced by their presence everywhere. Nice job, organizers!

My absolute favorite thing about Summit is the chance to meet and reconnect with so many amazing SQL folks. There were entirely too many people to list out, but some highlights include meeting Crys Manson, Jorge Segarra, and Karen Lopez for the first time. I also had a chance encounter with Ola Hallengren in the Sheraton elevator. Apparently we were only staying a few rooms apart this year. We ended up having a couple of really great discussions about index fragmentation, the differences between our scripts, and things we’d like to see changed in future releases of SQL Server.

I had the opportunity to sit on the panel at the WIT luncheon. All of the women on the panel were amazing, and I was honored just to be sitting at the same table as them. I was especially pleased to meet Nora Denzel, a Senior Vice President at Intuit. Intelligent, confident, and witty, she is a great role model for young technical women, myself included. I can only hope that some of her gumption rubbed off on me due to our close proximity. :) After the event, I was pleasantly surprised by how many folks–men and women both–came up to me to tell me how much they enjoyed it. Thanks to the WIT VC for organizing another great event!

The lightning talk sessions were a new feature this year, and I think I like them. The format is seven speakers, each presenting on a topic for 5 quick minutes. Watching these sessions is kind of like skipping right to the center of a Tootsie Pop: all content and no fluff. The standout lightning talk for me was Adam Machanic’s. It was beautifully rehearsed and choreographed. Nice job, Adam!

Another of the many highlights of the week was meeting the Microsoft execs. In addition to meeting Ted Kummert, Mark Souza, and Donald Farmer–all very nice gentlemen–I had the opportunity to speak at length with Jose Blakely about Parallel Data Warehouse (PDW). PDW, formerly codenamed Madison, was officially launched at Summit. Jose was kind enough to explain the PDW architecture, both where it came from and the vision for where it’s going. I’d attempt to regurgitate it here, but I think the probability of me misquoting would be high.

Suffice it to say, this technology has me excited. Why? Quite frankly, I think PDW will do for data warehousing what SQL Server did for databases, and what Analysis Services did for BI: make it affordable. With a compelling cost-per-terabyte, an attractive scale-out approach, and an entry point at under $1 million, we’ll see more small-to-midsized companies implementing data warehousing and business intelligence. This is good news for those of us looking for an affordable data warehouse solution and for those of us who make our living with SQL Server. And for those of you who might suggest that few companies need a data warehouse that can support multiple terabytes of data, I’d like to point out that just 3 or 4 years ago, 100 GB was considered a lot of data.

I spent most of my week digging into the PDW architecture. It’s not all roses (it’s a first release and, as such, is immature compared to much older, more established data warehouse systems), but again, it has a lot going for it, not least of all its easy integration within a SQL Server environment and its relatively low cost. We’re currently investigating it as a possible data warehouse solution for our business intelligence environment, so expect to see more from me about PDW as I learn more about it.