Overhead in Non-Unique Clustered Indexes

I’ve received a couple of questions regarding my article, Performance Considerations of Data Types, and the overhead associated with non-unique clustered indexes. I started to respond via e-mail, but my response was so long I decided to turn it into a blog post instead. ;)

I should start by clarifying that non-unique clustered indexes do not necessarily consume more space and overhead; it depends on the data stored. If you have duplicate clustered key values, the first instance of each value will be handled as though it were unique. Any subsequent duplicates, however, will incur overhead to manage the uniquifier that SQL Server adds to maintain row uniqueness. This same overhead is incurred in non-clustered indexes, too, adding to the overall expense of this approach.

I think it helps to actually look at the data, so let’s walk through a few different common scenarios. We’ll create a table with a unique clustered index, a table with a non-unique clustered index but no duplicates, and a table with duplicate key values.

Also, a quick warning: I started writing this in SQL Server 2008, and since I’m on a 2008 kick, I decided to leave it that way. You can modify it pretty easily to work in 2005, if necessary.

Use sandbox;

/* Unique clustered index, no duplicate values */
Create Table dbo.uniqueClustered
(
      myDate    date        Not Null
    , myNumber  int         Not Null
    , myColumn  char(995)   Not Null
);

Create Unique Clustered Index CIX_uniqueClustered
    On dbo.uniqueClustered(myDate);

/* Non-unique clustered index, but no duplicate values */
Create Table dbo.nonUniqueNoDups
(
      myDate    date        Not Null
    , myNumber  int         Not Null
    , myColumn  char(995)   Not Null
);

Create Clustered Index CIX_nonUniqueNoDups
    On dbo.nonUniqueNoDups(myDate);

/* Non-unique clustered index, duplicate values */
Create Table dbo.nonUniqueDuplicates
(
      myDate    date        Not Null
    , myNumber  int         Not Null
    , myColumn  char(995)   Not Null
);

Create Clustered Index CIX_nonUniqueDuplicates
    On dbo.nonUniqueDuplicates(myDate);

I’m going to use the date data type in 2008 for my clustered index key. To ensure uniqueness for the first two tables, I’ll iterate through a few years’ worth of dates. This is typical of what you may see in a data mart, where you’d have one record with an aggregation of each day’s data. For the table with duplicate values, I’m going to insert the same date for each row.

/* Populate some test data */
Set NoCount On;

Declare @myDate date = '1990-01-01'
    , @myNumber int = 1;

While @myDate < '2010-01-01'
Begin

    Insert Into dbo.uniqueClustered
    Select @myDate, @myNumber, 'data';

    Insert Into dbo.nonUniqueNoDups
    Select @myDate, @myNumber, 'data';

    Insert Into dbo.nonUniqueDuplicates
    Select '2009-01-01', @myNumber, 'data';

    Select @myDate = DateAdd(day, 1, @myDate)
        , @myNumber += 1;

End;

After running the above script, each table should have 7,305 records. This is obviously pretty small for a table, but it’ll serve our purposes. Now let’s take a look at the size of our tables:

/* Look at the details of our indexes */
/* Unique, clustered index, no duplicate values */
Select 'unique' As 'type', page_count, avg_page_space_used_in_percent, record_count
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'uniqueClustered'), Null, Null, N'Detailed') 
Where index_level = 0
Union All
/* Non-unique clustered index, but no duplicate values */
Select 'non-unique, no dups', page_count, avg_page_space_used_in_percent, record_count
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'nonUniqueNoDups'), Null, Null, N'Detailed') 
Where index_level = 0
Union All
/* Non-unique clustered index, duplicate values */
Select 'duplicates', page_count, avg_page_space_used_in_percent, record_count
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'nonUniqueDuplicates'), Null, Null, N'Detailed') 
Where index_level = 0;

Here are the results:

type                page_count           avg_page_space_used_in_percent record_count         min_record_size_in_bytes max_record_size_in_bytes
------------------- -------------------- ------------------------------ -------------------- ------------------------ ------------------------
unique              914                  99.8055102545095               7305                 1009                     1009
non-unique, no dups 914                  99.8055102545095               7305                 1009                     1009
duplicates          1044                 88.066036570299                7305                 1009                     1017

I want to point out a couple of things. First, there is no difference in the number of pages between the non-unique clustered index with no duplicates ([nonUniqueNoDups]) and the unique clustered index ([uniqueClustered]). The table with duplicate clustered key values, however, requires 14% more pages to store the same amount of data. Second, the [max_record_size_in_bytes] of the [nonUniqueDuplicates] table is 8 bytes more than that of the other two. We’ll discuss why in a minute.

Now let’s take a look at the actual data pages. For this, I’m going to use my page internals proc.

EXECUTE dbo.dba_viewPageData_sp
      @databaseName = 'sandbox'
    , @tableName    = 'sandbox.dbo.uniqueClustered'
    , @indexName    = 'CIX_uniqueClustered';
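If you don’t have a similar page-viewer proc installed, the same dump can be produced with the undocumented DBCC commands such procs typically wrap. A sketch (the page_id below is just an example; pull real ones from the DBCC IND output first):

```sql
-- List the pages that belong to the clustered index (index_id = 1)
DBCC IND ('sandbox', 'dbo.uniqueClustered', 1);

-- Route DBCC output to the client instead of the error log
DBCC TRACEON (3604);

-- Dump one page: database, file_id, page_id (from DBCC IND), print option 3
DBCC PAGE ('sandbox', 1, 153, 3);
```

Print option 3 breaks the page down row by row and column by column, which is the format shown in the output below.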

I’m not going to post the entire results here, but I want to draw your attention to "m_slotCnt = 8", which is near the top of the page. That means 8 records are stored on this page. Also, when you look near the end of the first record (Slot 0), you should see the following results:

Slot 0 Offset 0x60 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x00A9C060
00000000:   1000ee03 c3150b01 00000064 61746120 †..î.Ã......data 
000003F0:   00†††††††††††††††††††††††††††††††††††.  
Slot 0 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 1990-01-01                  
Slot 0 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 1                         
Slot 0 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data

Now let’s look at the table that has a non-unique clustered index but no duplicates:

Execute dbo.dba_viewPageData_sp
      @databaseName = 'sandbox'
    , @tableName    = 'sandbox.dbo.nonUniqueNoDups'
    , @indexName    = 'CIX_nonUniqueNoDups';

The m_slotCnt value is also 8 for this page. This time, let’s glance at the first and second records (Slots 0 and 1, respectively):

Slot 0 Offset 0x60 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x62FDC060
00000000:   1000ee03 c3150b01 00000064 61746120 †..î.Ã......data    
000003F0:   00†††††††††††††††††††††††††††††††††††.                        
Slot 0 Column 0 Offset 0x0 Length 4 Length (physical) 0
UNIQUIFIER = 0                       
Slot 0 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 1990-01-01                  
Slot 0 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 1                         
Slot 0 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data
Slot 1 Offset 0x451 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x62FDC451
00000000:   1000ee03 c4150b02 00000064 61746120 †..î.Ä......data          
000003F0:   00†††††††††††††††††††††††††††††††††††.                        
Slot 1 Column 0 Offset 0x0 Length 4 Length (physical) 0
UNIQUIFIER = 0                       
Slot 1 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 1990-01-02                  
Slot 1 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 2                         
Slot 1 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data

We now see a new addition to the row, “UNIQUIFIER = 0.” This is SQL Server’s way of managing row uniqueness internally. You’ll notice that, because the clustered key values are unique, the UNIQUIFIER is set to 0 and the row size is still 1009; for all intents and purposes, the UNIQUIFIER is not consuming any space.

Update: The DBCC God himself, Paul Randal, explained that non-dupes actually have a NULL UNIQUIFIER, which DBCC PAGE displays as a 0. Thanks for explaining, Paul! I wondered about that but chalked it up to SQL voodoo.

Now let’s look at our final case, a non-unique clustered index with duplicate key values:

Execute dbo.dba_viewPageData_sp
      @databaseName = 'sandbox'
    , @tableName    = 'sandbox.dbo.nonUniqueDuplicates'
    , @indexName    = 'CIX_nonUniqueDuplicates';

Here’s where things get interesting. The m_slotCnt value is now 7, which means we’re storing one less record per page. Let’s look at the details:

Slot 0 Offset 0x60 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x00A9C060
00000000:   1000ee03 df300b01 00000064 61746120 †..î.ß0.....data  
000003F0:   00†††††††††††††††††††††††††††††††††††.                        
Slot 0 Column 0 Offset 0x0 Length 4 Length (physical) 0
UNIQUIFIER = 0                       
Slot 0 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 2009-01-01                  
Slot 0 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 1                         
Slot 0 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data
Slot 1 Offset 0x451 Length 1017
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 1017                   
Memory Dump @0x00A9C451
00000000:   3000ee03 df300b02 00000064 61746120 †0.î.ß0.....data  
000003F0:   000100f9 03010000 00†††††††††††††††††...ù.....                
Slot 1 Column 0 Offset 0x3f5 Length 4 Length (physical) 4
UNIQUIFIER = 1                       
Slot 1 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 2009-01-01                  
Slot 1 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 2                         
Slot 1 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data

The first record, Slot 0, looks exactly the same as in the previous table; the UNIQUIFIER is 0 and the row size is 1009. The second record (Slot 1), however, now has a UNIQUIFIER value of 1, and the row size is 1017. Notice that the "Record Attributes" of Slot 1 are also different, with the addition of "VARIABLE_COLUMNS." This is because the UNIQUIFIER is stored as a variable-length column. The extra 8 bytes of overhead break down to 4 bytes for the UNIQUIFIER itself, 2 bytes for the variable column offset, and 2 bytes for the variable column count. The tables we created used all fixed-length columns; you may notice some minor overhead differences if your table already contains variable-length columns.

To summarize, there is indeed a difference in the page structure between a unique clustered index and a non-unique clustered index; however, there’s only a possible performance and space impact when storing duplicate clustered key values. So there you go, more detail than you ever wanted to know about clustered indexes and uniqueness!

Performance Considerations of Data Types

I’ve just finished my first real content for the PASS Performance SIG. I decided to write on “Performance Considerations of Data Types,” as I think this is one of the easiest and most overlooked topics in performance tuning. Here’s a summary:

Selecting inappropriate data types, especially on large tables with millions or billions of rows, can have significant performance implications. In this article, I’ll explain why and offer suggestions on how to select the most appropriate data type for your needs. The primary focus will be on common data types in SQL Server 2005 and 2008, but I’ll also discuss some aspects of clustered indexes and column properties. Most importantly, I’ll show some examples of common data-type misuse.

If you’re interested in this content, you can find it here: Performance Considerations of Data Types.

Special thanks to Paul Randal and Paul Nielsen for providing me with technical reviews and great feedback. You guys are awesome!

Thanks also to Mladen Prajdic and Jeremiah Peschka for their great input. You guys are awesome, too!

A Look at Missing Indexes

Tim Ford (@SQLAgentMan) recently blogged about his Top 5 SQL Server Indexing Best Practices. I thought it was a good list, and it inspired this blog post. I’ve recently been doing a little index spring cleaning, and I thought some people may be interested in the process I go through. So here it is… a journey through madness, er, an overview of my general missing index process.

I start with my trusty dba_missingIndexStoredProc table. If this table sounds completely foreign to you, check out my post, Find Missing Indexes In Stored Procs. Basically, I have a process that runs every night, scanning the XML of every query plan on the server to find procs that are possibly missing indexes. I then log the details for later action.

So I take a look at my table, and I find 8 stored procedures that are possibly missing indexes. Clicking on the XML link will show me the logged query plan:

Procs With Missing Indexes


Right clicking on the “Missing Index” description will give me the details of the recommended index:

Missing Index Details


Here’s an example of what SQL Server will return for you:

Missing Index Details from ExecutionPlan2.sqlplan
The Query Processor estimates that implementing the following index 
could improve the query cost by 85.7327%.
USE [msdb]
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[sysjobhistory] ([job_id])
INCLUDE ([instance_id],[step_id],[sql_message_id],[sql_severity],

I now compare the details of this proposed index to the missing index DMV suggestions, using this query:

SELECT t.name AS 'affected_table'
    , 'Create NonClustered Index IX_' + t.name + '_missing_' 
        + CAST(ddmid.index_handle AS VARCHAR(10))
        + ' On ' + ddmid.STATEMENT 
        + ' (' + IsNull(ddmid.equality_columns,'') 
        + CASE WHEN ddmid.equality_columns IS Not Null 
            And ddmid.inequality_columns IS Not Null THEN ',' 
                ELSE '' END 
        + IsNull(ddmid.inequality_columns, '')
        + ')' 
        + IsNull(' Include (' + ddmid.included_columns + ');', ';'
        ) AS sql_statement
    , ddmigs.user_seeks
    , ddmigs.user_scans
    , CAST((ddmigs.user_seeks + ddmigs.user_scans) 
        * ddmigs.avg_user_impact AS INT) AS 'est_impact'
    , ddmigs.last_user_seek
FROM sys.dm_db_missing_index_groups AS ddmig
INNER JOIN sys.dm_db_missing_index_group_stats AS ddmigs
    ON ddmigs.group_handle = ddmig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS ddmid 
    ON ddmig.index_handle = ddmid.index_handle
INNER JOIN sys.tables AS t
    ON ddmid.OBJECT_ID = t.OBJECT_ID
WHERE ddmid.database_id = DB_ID()
    --AND t.name = 'myTableName' 
ORDER BY CAST((ddmigs.user_seeks + ddmigs.user_scans) 
    * ddmigs.avg_user_impact AS INT) DESC;

I usually find the data in both places, but not always. One reason is that the missing index DMVs only retain data since your last SQL Server restart. So if I’m looking at this DMV on Monday and I rebooted on Sunday, I may not have enough history to give me meaningful recommendations. This is just something to be aware of.
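An easy way to see how much history the DMVs have accumulated is to check when the instance last started. The first query needs 2008; the tempdb trick works on 2005 as well, since tempdb is recreated at every startup:

```sql
-- SQL Server 2008+: instance start time straight from a DMV
Select sqlserver_start_time
From sys.dm_os_sys_info;

-- Works on 2005, too: tempdb's create date is effectively the startup time
Select create_date As sqlserver_start_time
From sys.databases
Where name = 'tempdb';
```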

What I’m looking for in this DMV is the number of user_seeks and the est_impact. Also, if I haven’t rebooted my server in a while, I take a look at last_user_seek so I can determine whether or not it’s still accurate.

Next, I take a look at my existing indexes using Kimberly Tripp’s sp_helpindex2 system stored proc. I use her proc instead of sp_helpindex because I need to see included columns.
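Usage mirrors the built-in sp_helpindex; a sketch, with an example table name:

```sql
-- Kimberly Tripp's rewrite of sp_helpindex: same argument,
-- but the output also lists included columns
Exec sp_helpindex2 'dbo.sysjobhistory';
```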

If you’re wondering why I’m looking at existing indexes, it’s because I’m looking for indexes that can be modified slightly to accommodate my missing index needs. By "modified slightly," I mean that I’d only want to change an existing index if the change did not drastically alter the size or composition of that index, e.g. adding one or two narrow columns as included columns. I do NOT mean making changes that double the size of your index; in those cases, you’d probably be better off creating a brand-new index.

Looking at existing indexes is actually a pretty critical part of the puzzle. If I have a proc that only gets called a few times an hour and could benefit from a better index, I may not create that index if it means adding a wide, expensive index to a busy table. If I can make a small modification to an existing index, then there’s a greater chance I’ll make the change and cover my query.

At this point, I should have enough information to start making decisions. I was going to write out the path I normally take when making decisions, but I thought, “Hey! What a great time for a diagram.” So here you go:

Decision Path


Disclaimer: I’m *not* a Visio wizard, so if I butchered the use of certain symbols in my diagram, please let me know so I can a) fix it, and b) learn from it!

It’s hard to really put all of the decision paths into a single, small diagram like this. There are a lot of variables that I’m not even touching here. But I think this is a fairly good "generic" representation of the path I take. When I hit an "end" process, it means I don’t create the missing index at this time. Maybe in the future it’ll become necessary, but I prefer to err on the side of fewer indexes.

So there you have it, a brief look at my missing index process. Hopefully someone finds it helpful. :)

T-SQL Bitwise Operations

I’ve seen bit-product columns from time to time, mostly in SQL Server 2000 system tables, but they’ve never been something I’ve had to work with. And when I’ve needed to, I’ve known how to figure out which options are selected, i.e. a bit product of 9 means options 8 and 1 are selected. If you’ve ever taken a look at the [status] column on the sysdatabases table (SQL 2000), you’ll know what I’m talking about.

What I’ve never known how to do, until recently, was calculate these options programmatically. That’s why, when I noticed the [freq_interval] on the sysschedules table was a bit-product column, I decided to spend a little time figuring it out. Fortunately for me, a couple of my awesome co-workers, Jeff M. and Jason H., have worked with this before and were able to explain it to me. And, it turns out, it’s actually quite easy.

Let me back up a few steps in case you’re not familiar with this topic. If you check out the Books Online entry for the sysschedules table (2005), you’ll notice the following statement:

freq_interval is one or more of the following:
1 = Sunday
2 = Monday
4 = Tuesday
8 = Wednesday
16 = Thursday
32 = Friday
64 = Saturday

When I looked at the actual value in the table, the schedule had a [freq_interval] value of 42, which is the sum of the bit values for the days selected.

If there were more than 7 options, the bit values would continue to double, i.e. 128, 256, etc. And regardless of how many bit values you select, you’re guaranteed one and only one possible answer, as the sum of all previous bit values will never exceed the next bit value:
1 + 2 = 3
1 + 2 + 4 = 7
1 + 2 + 4 + 8 = 15

Knowing this, I’m able to retrieve the values manually: I start with the highest bit value that does not exceed 42, then subtract it; I repeat until I’m left with 0.

42 - 32 = 10
10 - 8 = 2
2 - 2 = 0

That means my job is scheduled to run on Fridays (32), Wednesdays (8), and Mondays (2).
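This manual process can be scripted with nothing fancier than comparison and subtraction. A quick sketch that walks from the highest bit value down and prints each one it finds:

```sql
Declare @value int = 42   -- the bit product to decompose
    , @bit int = 64;      -- highest possible bit value (Saturday)

While @value > 0
Begin
    If @value >= @bit
    Begin
        Print @bit;       -- prints 32, then 8, then 2
        Set @value = @value - @bit;
    End;
    Set @bit = @bit / 2;
End;
```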

Now how do I do this with T-SQL? SQL Server provides an operator specifically for this task: the bitwise AND operator (&). For now, I’m going to skip the “why” behind this and just get to the practical application. If you’re interested in the “why,” let me know and I’ll write a follow-up post on binary and logical AND and OR operations.

For example, to use the bitwise AND to find out which days are selected…

Select 42 & 1 As 'Sunday'
    , 42 & 2 As 'Monday'
    , 42 & 4 As 'Tuesday'
    , 42 & 8 As 'Wednesday'
    , 42 & 16 As 'Thursday'
    , 42 & 32 As 'Friday'
    , 42 & 64 As 'Saturday';

… will return …

Sunday      Monday      Tuesday     Wednesday   Thursday    Friday      Saturday
----------- ----------- ----------- ----------- ----------- ----------- -----------
0           2           0           8           0           32          0

If the result is not equal to zero, then that day is selected. Easy as key lime pie, right?
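To apply this against real data, you can run the same checks directly against msdb. Note this only makes sense for weekly schedules (freq_type = 8), since [freq_interval] means something different for other schedule types:

```sql
-- Which days does each weekly job schedule run on?
Select name
    , Case When freq_interval & 1  <> 0 Then 'X' Else '' End As 'Sun'
    , Case When freq_interval & 2  <> 0 Then 'X' Else '' End As 'Mon'
    , Case When freq_interval & 4  <> 0 Then 'X' Else '' End As 'Tue'
    , Case When freq_interval & 8  <> 0 Then 'X' Else '' End As 'Wed'
    , Case When freq_interval & 16 <> 0 Then 'X' Else '' End As 'Thu'
    , Case When freq_interval & 32 <> 0 Then 'X' Else '' End As 'Fri'
    , Case When freq_interval & 64 <> 0 Then 'X' Else '' End As 'Sat'
From msdb.dbo.sysschedules
Where freq_type = 8;   -- weekly schedules only
```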

Now let’s take it a step further and create our own working example. Let’s say we’re going to track the characteristics of various objects in a single bit-product column (note: this is not necessarily the best way to accomplish this in the real world, but it’s a good illustration). First, set up a table to use in our example. This table will have a column, [attributes], which will hold the sum of our bit values.

Create Table myTable
(
      id            int identity(1,1)
    , item          varchar(10)
    , attributes    int
);

Insert Into myTable
Select 'Broccoli', 200 Union All
Select 'Tomato', 193 Union All
Select 'Car', 276 Union All
Select 'Ball', 292;

Next, we’re going to create a table variable that holds characteristics and their values. We’ll then join these two tables together to see which attributes exist for each item.

Declare @statusLookup Table
(
      attribute int
    , value     varchar(10)
);

Insert Into @statusLookup
Select 1, 'Red' Union All
Select 4, 'Blue' Union All
Select 8, 'Green' Union All
Select 16, 'Metal' Union All
Select 32, 'Plastic' Union All
Select 64, 'Plant' Union All
Select 128, 'Edible' Union All
Select 256, 'Non-Edible';

Select a.item, b.value
From myTable a
Cross Join @statusLookup b
Where a.attributes & b.attribute <> 0
Order By a.item
    , b.value;

You should get this result:

item       value
---------- ----------
Ball       Blue
Ball       Non-Edible
Ball       Plastic
Broccoli   Edible
Broccoli   Green
Broccoli   Plant
Car        Blue
Car        Metal
Car        Non-Edible
Tomato     Edible
Tomato     Plant
Tomato     Red

Great, now we know broccoli is edible! Let’s apply a little XML to clean up the results…

Select a.item
    , Replace( Replace( Replace(( 
        Select value 
        From @statusLookup AS b 
        Where a.attributes & b.attribute <> 0 
        Order By b.value For XML Raw)
        , '"/><row value="', ', '), '<row value="', ''), '"/>', '') 
        As 'attributes'
From myTable a
Order By a.item;

item       attributes
---------- -------------------------
Ball       Blue, Non-Edible, Plastic
Broccoli   Edible, Green, Plant
Car        Blue, Metal, Non-Edible
Tomato     Edible, Plant, Red

Voila! There you have it, how to use the bitwise AND (&) operator to retrieve multiple values from a bit-product column. Pretty neat stuff!

Special thanks to Jeff M. and Jason H. for their assistance. :)

Happy Coding!

Michelle Ufford (aka SQLFool)

Source: http://sqlfool.com/2009/02/bitwise-operations/

Ramblings on Super Bowl and PASS

Super Bowl 2009

As many of you know, I’m a DBA at GoDaddy.com, which had 2 commercials in this year’s Super Bowl. If you saw the commercials during the game or went to our website for the “internet only” versions, let me know; I have no control over the content of the ads, but I’m still interested in your opinions. But comments on ad content aside, the commercials continue to prove very effective for driving traffic to our website and, in turn, generating income. (Don’t believe me? Read this and this article on finance.yahoo.com).

We typically see some pretty large spikes in the minutes immediately following a commercial airing, and this year was no exception! We spent quite a bit of time throughout the year tuning our systems to support Super Bowl traffic, especially in the few weeks preceding the big game. By all accounts, this year’s efforts paid off; our database servers exceeded expectations. I don’t think I’m allowed to go into specifics, but I can mention some server stats. During the spikes, my primary server reached 27k transactions per second with no timeouts and very good response times. In fact, I estimate we decreased our recovery time by around 80% compared to last year.

Why do I mention all of this? Well, there’s the bragging aspect, of course :) . But more importantly, I bring it up to give credence to some of the performance tuning articles I’ve written in the past, like:

Keep in mind, there’s rarely a “magic bullet” for performance tuning, and what worked for me may not work for you. If you have any questions, please feel free to leave me a comment or send me an e-mail, and I’ll do my best to respond.

If you’re interested in more information on effective performance tuning, make sure to check out the Performance Tuning Section on SQLServerPedia.com.


I’ve been pleasantly surprised with the number of inquiries I’ve received regarding the I380 PASS Chapter (serving the East Iowa area of Cedar Rapids and Iowa City), so I’ll continue to post updates to my blog.

As I’ve mentioned before, we’re now officially a PASS Chapter, and we’re currently in the planning stages of our first meeting. We have one confirmed key sponsor, Quest Software (woot!), and we’re speaking with a couple of other possible sponsors. Side note: if you’re interested in sponsoring our group, I’d love to hear from you! E-mail me at michelle @ sqlfool dot com.

We’re currently planning to have meetings on the second Tuesday of every month, with our first meeting on Tuesday, March 10th 2009. We have a confirmed speaker but not a confirmed topic, and we’re actively working on a meeting location. Please keep in mind that all of these details are subject to change. ;)

If you’re in the area and would like to attend, or know someone who should attend, please drop me a line!

Creating a 60 GB Index

Recently, I needed to create an index on a 1.5 billion row table. I’ve created some large indexes before, but this was the largest, so I thought I’d share my experience in case anyone was interested.

The plan was to create the following index:

Create NonClustered Index IX_indexName_unpartitioned
    On databaseName.dbo.tableName(columnList)
    Include (includedColumnList)
    With (MaxDop = 1, Online = On, Sort_In_TempDB = On)
    On [Primary];

This is an unpartitioned, non-clustered index being built on a partitioned table. Normally, when you build an aligned partitioned index, the index creation process requires less memory and has less noticeable impact on the system, because each partition is handled individually, one at a time. But as this is an unpartitioned (unaligned) index, each partition was built concurrently, requiring more memory and causing a greater impact on performance. Because of this, I needed to restrict the process to MaxDop 1; otherwise, the server would suffer because of too much memory pressure.

I chose Sort_In_TempDB = On because:

  • I’m building this index online on a very busy table and cannot afford to impact normal operations. By using Sort_In_TempDB = On, index transactions are separated from user transactions, allowing the user transaction log to be truncated. *
  • TempDB is on a different volume and therefore should reduce the duration of the operation.
  • The recovery for the user database is full, and the recovery for the TempDB is simple. Sorting in TempDB would minimize logging.

* Note: the transaction log for the user database still grew at a much faster rate than normal and had to be closely monitored during this operation to ensure enough free space remained.

The size of the indexed columns is 25 bytes. So I ran my calculations and came up with a 36 GB space requirement. We increased TempDB to 50 GB and gave it a go. An hour later… ERROR. The process terminated because there was not enough free space in TempDB to complete the operation. Obviously, my calculations were incorrect. After speaking with Chris Leonard, a man who is way too smart for his own good, I realized I had not included the clustered index key in my size calculations. Doh.

Re-running my estimates, here’s what I came up with:

Index Size              25 bytes
Clustered Index Size    16 bytes
Records per Page        197
Est. Rows               1,575,000,000
Est. Number of Pages    7,995,000
Space Requirements      59 GB
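For the curious, here’s roughly how those numbers fall out of each other. This is a simplified sketch; the full algorithm in Books Online also accounts for per-row overhead, the null bitmap, and fill factor, which is why this lands a couple of GB above the 59 GB figure:

```sql
Declare @keySize int = 25              -- bytes in the nonclustered key columns
    , @clusteredKeySize int = 16       -- clustered key carried in every leaf row
    , @rows bigint = 1575000000;       -- estimated row count

-- 8,096 bytes of usable space per 8 KB page
Declare @rowsPerPage int = 8096 / (@keySize + @clusteredKeySize);  -- 197
Declare @pages bigint = (@rows / @rowsPerPage) + 1;                -- ~7,995,000

Select @rowsPerPage As 'records_per_page'
    , @pages As 'est_pages'
    , @pages * 8 / 1048576 As 'approx_size_GB';                    -- ~61
```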


Obviously, 50 GB of free space just wasn’t going to cut it. I decided to give TempDB a little wiggle room and bumped it up to 70 GB (not as easy as you’d imagine; I had to requisition more space on the SAN), then re-ran the operation. Success! The process completed in 3 hours and 24 minutes. There was a mild increase in CPU, but no applications or users experienced any issues.

For those interested in the particulars: this index is used for single-record look-ups and could not be filtered by the partitioning key. The non-partitioned version of this index has 80% fewer reads and 11% less CPU than its partitioned counterpart.

If you’re interested in learning more about indexes, here’s some recommended reading:

Index Clean-Up Scripts

I’ve been spending a lot of time lately looking at indexing in my production environments… dropping un-used ones, adding missing ones, and fine-tuning the ones I already have. I thought I’d share some of the scripts I’ve been using to accomplish this.

Here’s the script I use to find any un-used indexes. This relies heavily on the sys.dm_db_index_usage_stats DMV (2005+). This query will also return the SQL statements needed to drop the indexes for convenience. This does NOT mean you should necessarily drop the index. This is only a guide and a starting point; only you know how your application is used and whether SQL Server’s recommendations make sense.

Un-Used Indexes Script

Declare @dbid int
    , @dbName varchar(100);
Select @dbid = DB_ID()
    , @dbName = DB_Name();
With partitionCTE (object_id, index_id, row_count, partition_count) As
(
    Select [object_id]
        , index_id
        , Sum([rows]) As 'row_count'
        , Count(partition_id) As 'partition_count'
    From sys.partitions
    Group By [object_id]
        , index_id
)
Select Object_Name(i.[object_id]) as objectName
        , i.name
        , Case 
            When i.is_unique = 1 
                Then 'UNIQUE ' 
            Else '' 
          End + i.type_desc As 'indexType'
        , ddius.user_seeks
        , ddius.user_scans
        , ddius.user_lookups
        , ddius.user_updates
        , cte.row_count
        , Case When partition_count > 1 Then 'yes' 
            Else 'no' End As 'partitioned?'
        , Case 
            When i.type = 2 And i.is_unique_constraint = 0
                Then 'Drop Index ' + i.name 
                    + ' On ' + @dbName 
                    + '.dbo.' + Object_Name(ddius.[object_id]) + ';'
            When i.type = 2 And i.is_unique_constraint = 1
                Then 'Alter Table ' + @dbName 
                    + '.dbo.' + Object_Name(ddius.[object_ID]) 
                    + ' Drop Constraint ' + i.name + ';'
            Else '' 
          End As 'SQL_DropStatement'
From sys.indexes As i
Inner Join sys.dm_db_index_usage_stats ddius
    On i.object_id = ddius.object_id
        And i.index_id = ddius.index_id
Inner Join partitionCTE As cte
    On i.object_id = cte.object_id
        And i.index_id = cte.index_id
Where ddius.database_id = @dbid
Order By 
    (ddius.user_seeks + ddius.user_scans + ddius.user_lookups) Asc
    , user_updates Desc;

This next script relies on several DMVs (2005+) that identify missing indexes. While this is good information, the index recommendations do not always make sense and sometimes overlap. Also, these DMVs only store data since SQL Server was last restarted, so if it’s been a while since your server was rebooted, this data may be incomplete. This script also provides a SQL statement in case you do decide to create the index, but it doesn’t take into consideration advanced options (e.g. Sort_In_TempDB, Online, MaxDop) and only provides a basic create statement. Nonetheless, it’s another good starting point.

Missing Index Script

Select t.name As 'affected_table'
    , 'Create NonClustered Index IX_' + t.name + '_missing_' 
        + Cast(ddmid.index_handle As varchar(10))
        + ' On ' + ddmid.statement 
        + ' (' + IsNull(ddmid.equality_columns,'') 
        + Case When ddmid.equality_columns Is Not Null 
            And ddmid.inequality_columns Is Not Null Then ',' 
                Else '' End 
        + IsNull(ddmid.inequality_columns, '')
        + ')' 
        + IsNull(' Include (' + ddmid.included_columns + ');', ';'
        ) As sql_statement
    , ddmigs.user_seeks
    , ddmigs.user_scans
    , Cast((ddmigs.user_seeks + ddmigs.user_scans) 
        * ddmigs.avg_user_impact As int) As 'est_impact'
    , ddmigs.last_user_seek
From sys.dm_db_missing_index_groups As ddmig
Inner Join sys.dm_db_missing_index_group_stats As ddmigs
    On ddmigs.group_handle = ddmig.index_group_handle
Inner Join sys.dm_db_missing_index_details As ddmid 
    On ddmig.index_handle = ddmid.index_handle
Inner Join sys.tables As t
    On ddmid.object_id = t.object_id
Where ddmid.database_id = DB_ID()
    And Cast((ddmigs.user_seeks + ddmigs.user_scans) 
        * ddmigs.avg_user_impact As int) > 100
Order By Cast((ddmigs.user_seeks + ddmigs.user_scans) 
    * ddmigs.avg_user_impact As int) Desc;

Indexing for Partitioned Tables

So you’ve partitioned your table, and now you’re ready to performance tune. As with any table, indexing is a great place to start. And if you’re like most people new to partitioning, you probably created all of your indexes on the partition scheme, either by design or unintentionally.

Let’s take a step back here and discuss what a partitioned index is. A partitioned index, like a partitioned table, separates data into different physical structures (partitions) under one logical name. Specifically, each partition in a nonclustered index contains its own B-tree structure with a subset of rows, based upon the partitioning scheme. By default, an unpartitioned nonclustered index has just one partition.

Keep in mind, when you create an index on a partitioned table, i.e.

Create NonClustered Index IX_myIndex
    On dbo.myTable(myColumn);

… you are creating the index on the partition scheme by default. In order to create a NON-partitioned index on that same table, you would need to explicitly declare “On [FileGroup]”, i.e.

Create NonClustered Index IX_myIndex
    On dbo.myTable(myColumn)
    On [Primary];


But should you partition your index? That depends on how you use it. In fact, most environments will probably want to use a mix of partitioned and non-partitioned indexes. I’ve found that partitioned indexes perform better when aggregating data or scanning partitions. Conversely, if you need to locate a single, specific record, nothing performs better than a non-partitioned index on that column.

Let’s walk through some examples and see how they perform. I’ll bring back my trusty ol’ orders table for this.

/* Create a partition function. */
Create Partition Function 
    [test_monthlyDateRange_pf] (datetime)
    As Range Right For Values
    ('2009-01-01', '2009-01-08', '2009-01-15'
    , '2009-01-22', '2009-01-29');
/* Associate the partition function with a partition scheme. */
Create Partition Scheme test_monthlyDateRange_ps
    As Partition test_monthlyDateRange_pf
    All To ([Primary]);
/* Create a partitioned table. */
Create Table dbo.orders
(
      order_id  int Identity(1,1)   Not Null
    , orderDate datetime            Not Null
    , orderData smalldatetime       Not Null
    , Constraint PK_orders
        Primary Key Clustered(order_id, orderDate)
) On test_monthlyDateRange_ps(orderDate);
/* Create some records to play with. */
Set NoCount On;
Declare @endDate datetime = '2009-01-01';
While @endDate < '2009-02-01'
Begin
    Insert Into dbo.orders
    Select @endDate, @endDate;
    Set @endDate = DateAdd(minute, 1, @endDate);
End;
Set NoCount Off;
/* Let’s create an aligned, partitioned index. */
Create NonClustered Index IX_orders_aligned
    On dbo.orders(order_id)
    On test_monthlyDateRange_ps(orderDate); 
    /* you don't actually need to declare the last
       line of this unless you want to create the
       index on a different partitioning scheme.   */
/* Now let’s create an unpartitioned index. */
Create NonClustered Index IX_orders_unpartitioned
    On dbo.orders(order_id)
    On [Primary];


Now that we have both a partitioned and an unpartitioned index, let’s take a look at the sys.partitions catalog view:

/* Let's take a look at our index partitions */
Select i.name
    , i.index_id
    , p.partition_number
    , p.rows
From sys.partitions As p
Join sys.indexes As i
    On p.object_id = i.object_id 
   And p.index_id = i.index_id
Where p.object_id = object_id('orders')
Order By i.index_id, p.partition_number;




As expected, both of our partitioned indexes, PK_orders and IX_orders_aligned, have 6 partitions, with a subset of rows on each partition. Our unpartitioned non-clustered index, IX_orders_unpartitioned, on the other hand, has just 1 partition containing all of the rows.
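As a back-of-the-envelope check on those row counts: the loop inserts one row per minute for all of January 2009 (31 × 1,440 = 44,640 rows), so each partitioned index should show this distribution, worked out directly from the Range Right boundary values above:

```
Partition 1: orderDate <  '2009-01-01'                         0 rows
Partition 2: '2009-01-01' to '2009-01-07'   (7 x 1,440)   10,080 rows
Partition 3: '2009-01-08' to '2009-01-14'   (7 x 1,440)   10,080 rows
Partition 4: '2009-01-15' to '2009-01-21'   (7 x 1,440)   10,080 rows
Partition 5: '2009-01-22' to '2009-01-28'   (7 x 1,440)   10,080 rows
Partition 6: orderDate >= '2009-01-29'      (3 x 1,440)    4,320 rows
                                            Total:        44,640 rows
```

With Range Right, each boundary value belongs to the partition on its right, which is why partition 1 (everything before the first boundary) is empty here.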

Now that we have our environment set up, let’s run through some different queries and see the performance impact of each type of index.

/* Query 1, specific record look-up, covered */
Select order_id, orderDate
From dbo.orders With (Index(IX_orders_aligned))
Where order_id = 25000;
Select order_id, orderDate
From dbo.orders With (Index(IX_orders_unpartitioned))
Where order_id = 25000;


[Screenshots: Query 1 execution results]

The unpartitioned index performs significantly better when given a specific record to look up. Now let’s try the same look-up, but this time requesting a column the index doesn’t cover:

/* Query 2, specific record look-up, uncovered */
Select order_id, orderDate, orderData
From dbo.orders With (Index(IX_orders_aligned))
Where order_id = 30000;
Select order_id, orderDate, orderData
From dbo.orders With (Index(IX_orders_unpartitioned))
Where order_id = 30000;


[Screenshots: Query 2 execution results]

Again we see that the non-partitioned index performs better with the single-record look-up. This can have some pretty dramatic performance implications. So when *would* we want to use a partitioned index? Two instances immediately pop to mind. First, partition switching can only be performed when all indexes on a table are aligned. Second, partitioned indexes perform better when manipulating large data sets. To see this in action, let’s try some simple aggregation…
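On the first point, a partition switch is a metadata-only operation, but it requires every index on the table to be aligned with the partition scheme. A minimal sketch, assuming a dbo.orders_archive table exists with an identical schema and aligned indexes (the archive table name is hypothetical):

```sql
/* Hypothetical sketch: switch partition 2 out of dbo.orders and into
   an empty, identically-structured archive table. This would fail with
   the unaligned IX_orders_unpartitioned index still in place, which
   illustrates the alignment requirement. */
Alter Table dbo.orders
    Switch Partition 2 To dbo.orders_archive Partition 2;
```

The switch moves no data; SQL Server just reassigns the partition’s pages to the target table, which is why it’s so popular for sliding-window archiving.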

/* Query 3, aggregation */
Select Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) As 'order_date'
    , Count(*)
From dbo.orders With (Index(IX_orders_aligned))
Where orderDate Between '2009-01-01' And '2009-01-07 23:59'
Group By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) 
Order By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime);
Select Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) As 'order_date'
    , Count(*)
From dbo.orders With (Index(IX_orders_unpartitioned))
Where orderDate Between '2009-01-01' And '2009-01-07 23:59'
Group By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) 
Order By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime);


[Screenshots: Query 3 execution results]

As you can see, partitioned indexes perform better when aggregating data. This is just a simple example, but the results can be even more dramatic in a large production environment. This is one of the reasons why partitioned tables and indexes are especially beneficial in data warehouses.

So now you have a general idea of what a partitioned index is and when to use a partitioned index vs a non-partitioned index. Ultimately, your indexing needs will depend largely on the application and how the data is used. When in doubt, test, test, test! So to recap…

  • Specify “On [FileGroup]” to create an unpartitioned index on a partitioned table
  • Consider using non-partitioned indexes for single-record look-ups
  • Use partitioned indexes for multiple records and data aggregations
  • To enable partition switching, all indexes on the table must be aligned.

For more information on partitioning, check out my other partitioning articles:

Partitioning Example
Partitioning 101
Tips for Large Data Stores

Bulk Inserts with XML

Last week, I blogged about how to perform one-to-many inserts with table-valued parameters, a feature new to SQL Server 2008. For those who do not yet have 2008, or will not have it in the near future, it may still be beneficial to use XML for bulk inserts.

Here’s a pretty simple example of how to accomplish this:

/* Create some tables to work with */
CREATE TABLE dbo.orders
(
      order_id      INT IDENTITY(1,1)   Not Null
    , orderDate     DATETIME            Not Null
    , customer_id   INT                 Not Null
    , CONSTRAINT PK_orders
        PRIMARY KEY CLUSTERED(order_id)
);
CREATE TABLE dbo.orderDetails
(
      orderDetail_id    INT IDENTITY(1,1)   Not Null
    , order_id          INT                 Not Null
    , lineItem          INT                 Not Null
    , product_id        INT                 Not Null
    , CONSTRAINT PK_orderDetails
        PRIMARY KEY CLUSTERED(orderDetail_id)
    , CONSTRAINT FK_orderDetails_orderID
        FOREIGN KEY(order_id)
        REFERENCES dbo.orders(order_id)
);
/* Create a new procedure using an XML parameter */
Go
CREATE PROCEDURE dbo.insert_orderXML_sp
      @orderDate        DATETIME
    , @customer_id      INT
    , @orderDetailsXML  XML
AS
BEGIN
    DECLARE @myOrderID INT;
    INSERT INTO dbo.orders
    (
          orderDate
        , customer_id
    )
    VALUES
    (
          @orderDate
        , @customer_id
    );
    SET @myOrderID = SCOPE_IDENTITY();
    INSERT INTO dbo.orderDetails
    (
          order_id
        , lineItem
        , product_id
    )
    SELECT @myOrderID
         , myXML.value('./@lineItem', 'int')
         , myXML.value('./@product_id', 'int')
    FROM @orderDetailsXML.nodes('/orderDetail') As nodes(myXML);
END
Go
/* Call our stored procedure */
EXECUTE dbo.insert_orderXML_sp
      @orderDate = '2008-01-01'
    , @customer_id = 101
    , @orderDetailsXML = 
        '<orderDetail lineItem="1" product_id="123" />
         <orderDetail lineItem="2" product_id="456" />
         <orderDetail lineItem="3" product_id="789" />
         <orderDetail lineItem="4" product_id="246" />
         <orderDetail lineItem="5" product_id="135" />';
/* Check our data */
SELECT * FROM dbo.orders;
SELECT * FROM dbo.orderDetails;
/* Clean up our mess */
DROP PROCEDURE insert_orderXML_sp;
DROP TABLE dbo.orderDetails;
DROP TABLE dbo.orders;

I’ve found that this is more efficient when performing large parent/child inserts, e.g. one parent record with 100 child records. Keep in mind that there’s a point where doing an insert with XML is more expensive than using a traditional INSERT stored procedure. I haven’t run any tests yet to help define what that threshold is… more to come on this in the near future.
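For reference, the “traditional” stored procedure approach mentioned above would look something like the following sketch, called once per child row; the procedure name and parameters are hypothetical:

```sql
/* Hypothetical singleton insert: the calling application would
   execute this once for every order detail row, so 100 child
   records means 100 round-trips to the server. */
CREATE PROCEDURE dbo.insert_orderDetail_sp
      @order_id     INT
    , @lineItem     INT
    , @product_id   INT
AS
BEGIN
    INSERT INTO dbo.orderDetails
    (
          order_id
        , lineItem
        , product_id
    )
    VALUES
    (
          @order_id
        , @lineItem
        , @product_id
    );
END
```

That one-round-trip-per-row pattern is what makes the singleton approach increasingly expensive as the child count grows.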

More on the Nodes() Method can be found here in Books Online: http://msdn.microsoft.com/en-us/library/ms188282(SQL.90).aspx

Update: I’ve just learned that the “value” keyword is case-sensitive. Apparently my code box plug-in was defaulting “value” to “VALUE.” :)

Here’s the error message you’ll get if you don’t have “value” in lower-case:
Cannot find either column “myXML” or the user-defined function or aggregate “myXML.VALUE”, or the name is ambiguous.
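For comparison with the XML approach, a table-valued parameter version (SQL Server 2008+) might look like the following sketch; the type and procedure names here are illustrative — see the table-valued parameter post mentioned earlier for the full treatment:

```sql
/* Hypothetical TVP version (SQL Server 2008+); the type and
   procedure names are illustrative. Note that the READONLY
   keyword is required on table-valued parameters. */
CREATE TYPE dbo.orderDetailsTableType As Table
(
      lineItem      INT Not Null
    , product_id    INT Not Null
);
Go
CREATE PROCEDURE dbo.insert_orderTVP_sp
      @orderDate        DATETIME
    , @customer_id      INT
    , @orderDetails     dbo.orderDetailsTableType READONLY
AS
BEGIN
    DECLARE @myOrderID INT;
    INSERT INTO dbo.orders (orderDate, customer_id)
    VALUES (@orderDate, @customer_id);
    SET @myOrderID = SCOPE_IDENTITY();
    INSERT INTO dbo.orderDetails (order_id, lineItem, product_id)
    SELECT @myOrderID, lineItem, product_id
    FROM @orderDetails;
END
```

The set-based SELECT from the parameter replaces both the XML shredding and the per-row procedure calls, which is where the performance gain in the next section comes from.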

Performance Comparison of Singleton, XML, and TVP Inserts

As promised, today I took a look at the performance of bulk inserts using XML and Table-Valued Parameters. I also compared it against singleton inserts to show the value in the bulk-insert approach.

My tests were pretty simple: insert 100 records using each method. Each test was executed 10 times to ensure consistency. The duration was recorded in microseconds.

The goal was to compare the performance of the inserts. Because I was executing this entire test within SQL Server, I had to isolate only the actual insert transactions and ignore everything else, such as the loading of the data; that work would normally be performed by the calling application.

So without further ado… screenshots of the Profiler traces: (click to enlarge)


[Screenshot: Single Insert Method trace]

[Screenshot: XML Method trace]

[Screenshot: Table-Valued Parameter Method trace]


Method              Avg CPU   Avg Reads   Avg Writes   Avg Duration (micro)
Singleton Method    3         202         0            13378
XML Method          0         222         0            3124
TVP Method          1         207         0            780


As expected, both the XML and the TVP methods performed significantly better than the singleton method. And as hoped, the table-valued parameter method performed the best of the three.