#PASSAwesomeness

Allen Kinsel (@sqlinsaneo) recently started a new Twitter tag, #PASSAwesomeness, about all of the cool things about PASS Summit. I really like the tag, so I’m going to blatantly steal… er, borrow it for this post. :)

First, and long overdue, I want to give a brief recap of the East Iowa SQL Saturday. On October 17th, our local PASS chapter, 380PASS, sponsored our first ever SQL Saturday at the University of Iowa in Iowa City. By all accounts, the event was a great success! We had 90 attendees, 11 speakers, and 21 sessions. We received numerous compliments on the quality of the speakers, the niceness of the facilities, and the abundance of food. Not too shabby for our first time hosting the event, if I do say so myself. :)

I’d like to thank all of our wonderful speakers, especially those who traveled from out of town and out of state, for making this event such a success. I’d also like to thank our amazing volunteers for helping put this all together. Last, but certainly not least, I’d like to thank our generous sponsors, without whom this event would not have been possible. Because this event went so smoothly and was so well received in the community, we’ve already started planning our next big SQL event! In the meantime, don’t forget to check out our monthly 380PASS meetings to tide you over.

I’d also like to take a moment to discuss the PASS Summit. Unless you’re a DBA who’s been living under a rock, you’ve probably heard of the PASS Summit. If you *have* been living under a rock — and hey, I’m not poking fun, I used to live under a rock, too! — then what you need to know is that the Summit is the largest SQL Server conference in the world. It’s a gathering of Microsoft developers and SQL Server gurus; the rest of us show up to try to absorb as much from them as possible. Since I’ve recently moved to the Business Intelligence team, I’m extremely excited to delve into the amazing amount of BI content offered.

I’m also deeply honored to be presenting at the Summit this year on some of the performance tuning techniques I’ve used with great success in my production environments. The session is titled “Super Bowl, Super Load – A Look At Performance Tuning for VLDB’s.” If you’re interested in performance tuning or VLDB (very large database) topics, consider stopping by to catch my session. From what I can tell, I’ll be presenting on Tuesday from 10:15am – 11:30am in room(s?) 602-604.

If you read my blog, or if we’ve ever interacted in any way on the internet — Twitter, LinkedIn, e-mails, blog comments, etc. — please stop by and say “hi”! Aside from all of the awesome SQL Server content, I’m really looking forward to meeting as many new folks as possible.

And on that note…

Getting to meet all of the amazing SQL Server professionals out there who have inspired and encouraged me in so many ways #PASSAwesomeness

Partitioning Tricks

For those of you who are using partitioning, or who are considering using partitioning, allow me to share some tips with you.

Easy Partition Staging Tables

Switching partitions (or more specifically, HoBTs, heap-or-B-tree structures) in and out of a partitioned table requires the use of a staging table. The staging table has very specific requirements: it must be structurally identical to the partitioned table, including indexing structures, and it must have a check constraint that limits data to the partitioning range. Thanks to my co-worker Jeff, I’ve recently started using the SQL Server Partition Management tool on CodePlex. I haven’t used the automatic partition switching feature (frankly, using any sort of data modification tool in a production environment makes me nervous), but I’ve been using the scripting option to create staging tables in my development environment, which I then copy to production for use. It’s nothing you can’t do yourself, but it does make the whole process easy and painless, plus it saves you from annoying typos.
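To make those requirements concrete, here’s a minimal sketch of what a staging table might look like for a date-partitioned table. All of the table, column, and constraint names (and the boundary dates) are hypothetical, purely for illustration:

/* Staging table: schema and indexes identical to the partitioned table,
   created on the filegroup that holds the partition you plan to switch */
CREATE TABLE dbo.myStagingTable (
      OrderDate  smalldatetime  Not Null
    , OrderID    int            Not Null
    , Constraint PK_myStagingTable Primary Key Clustered (OrderDate, OrderID)
) ON [PRIMARY];
 
/* Check constraint limiting the staging table to a single partition's
   range, e.g. the January 2010 partition of a RANGE RIGHT function */
Alter Table dbo.myStagingTable
    Add Constraint CK_myStagingTable_OrderDate
    Check (OrderDate >= '2010-01-01' And OrderDate < '2010-02-01');

But be careful when using this tool to just create the table and check constraints automatically, because you may need to…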

Add Check Constraints After Loading Data

Most of the time, I add the check constraint when I create the staging table, then I load data and perform the partition switch. However, for some reason, I was receiving the following error:

.Net SqlClient Data Provider: Msg 4972, Level 16, State 1, Line 1
ALTER TABLE SWITCH statement failed. Check constraints or partition function of source table 'myStagingTable' allows values that are not allowed by check constraints or partition function on target table 'myDestinationTable'.

This drove me crazy. I confirmed my check constraints were correct, that I had the correct partition number, and that all schemas and indexes matched identically. After about 30 minutes of this, I decided to drop and recreate the constraint, and for some reason, that fixed the issue. Repeat tests produced the same results: the check constraint needed to be added *after* the data was loaded. This was on a SQL Server 2008 SP1 box; to be honest, I’m not sure what’s causing the error, so if you know, please leave me a comment. But I figured I’d share so that anyone else running into this issue can hopefully save some time and headache. :)
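For reference, here’s the sequence that ended up working for me, again with hypothetical names. Note the With Check option, which forces SQL Server to validate the existing rows so the constraint is trusted:

/* 1. Create the staging table WITHOUT the check constraint */
 
/* 2. Load the data */
Insert dbo.myStagingTable (OrderDate, OrderID)
Select OrderDate, OrderID
From dbo.mySourceData
Where OrderDate >= '2010-01-01' And OrderDate < '2010-02-01';
 
/* 3. Add the check constraint AFTER the data is loaded */
Alter Table dbo.myStagingTable With Check
    Add Constraint CK_myStagingTable_OrderDate
    Check (OrderDate >= '2010-01-01' And OrderDate < '2010-02-01');
 
/* 4. Switch the staging table into the destination partition;
      $Partition returns the partition number for a given value */
Alter Table dbo.myStagingTable
    Switch To dbo.myDestinationTable
    Partition $Partition.InsertPartitionFunction('2010-01-01');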

Replicating Into Partitioned and Non-Partitioned Tables

Recently, we needed to replicate a non-partitioned table to two different destinations. We wanted to use partitioning on Server A, which runs SQL Server 2008 Enterprise; Server B, which is on 2005 Standard, could not take advantage of partitioning. The solution was really easy: create pre-snapshot and post-snapshot scripts for the publication, then modify them to handle each server group differently. In pseudo-code, it looked something like this:

/* Identify which servers get the partitioned version */
If @@ServerName In ('yourServerNameList') 
Begin
 
    /* Create your partitioning function if necessary */
    If Not Exists(Select * From sys.partition_functions Where name = 'InsertPartitionFunction')
        CREATE PARTITION FUNCTION InsertPartitionFunction (smalldatetime) 
            AS RANGE RIGHT FOR VALUES ('insertValues');
 
    /* Create your partitioning scheme if necessary; note that the
       function must exist before the scheme that references it */
    If Not Exists(Select * From sys.partition_schemes Where name = 'InsertPartitionScheme')
        CREATE PARTITION SCHEME InsertPartitionScheme 
            AS PARTITION InsertPartitionFunction ALL TO ([PRIMARY]);
 
    /* Create a partitioned version of your table */
    CREATE TABLE [dbo].[yourTableName] (
        [yourTableSchema]
    ) ON InsertPartitionScheme([partitioningKey]);
 
End
Else
Begin
 
    /* Create a non-partitioned version of your table */
    CREATE TABLE [dbo].[yourTableName] (
        [yourTableSchema]
    ) ON [PRIMARY];
 
End

You could also use an edition check instead of a server name check, if you prefer (see the sketch below). The post-snapshot script looked basically the same, except it creates the partitioned indexes instead.
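Here’s roughly what that edition check might look like. SERVERPROPERTY('EngineEdition') returns 3 for Enterprise, Developer, and Evaluation, i.e. the editions that support partitioning; the branch bodies are the same pseudo-code skeleton as above:

/* Swap the @@ServerName check for an edition check */
If Cast(ServerProperty('EngineEdition') As int) = 3
Begin
    /* Create the partitioned version of the table, as above */
End
Else
Begin
    /* Create the non-partitioned version of the table */
End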

Compress Old Partitions

Did you know you can set different compression levels for individual partitions? It’s true! I’ve just completed doing this on our largest partitioned table. Here’s how:

/* Apply compression to your partitioned table */
Alter Table dbo.yourTableName
Rebuild Partition = All
With 
(
      Data_Compression = Page On Partitions(1 to 9)
    , Data_Compression = Row  On Partitions(10 to 11) 
    , Data_Compression = None On Partitions(12)
);
 
/* Apply compression to your partitioned index */
Alter Index YourPartitionedIndex
    On dbo.yourTableName
    Rebuild Partition = All
    With 
    (
      Data_Compression = Page On Partitions(1 to 9)
    , Data_Compression = Row  On Partitions(10 to 11) 
    , Data_Compression = None On Partitions(12)
    );
 
/* Apply compression to your unpartitioned index */
Alter Index YourUnpartitionedIndex
    On dbo.yourTableName
    Rebuild With (Data_Compression = Row);

A couple of things to note: in all of our proof-of-concept testing, we found that compression significantly reduced query execution time, reads (IO), and storage. However, CPU usage also increased significantly. The results were more dramatic, both good and bad, with page compression than with row compression. Still, for our older partitions, which aren’t queried regularly, it made sense to turn on page compression. The newer partitions receive row compression, and the newest partitions, which are still queried very regularly by routine processes, were left completely uncompressed. This strikes a nice balance in our environment, but of course, your results will vary depending on how you use your data.

Something to be aware of is that compressing your clustered index does *not* compress your non-clustered indexes; those are separate operations. Lastly, for those who are curious, it took us about 1 minute to apply row compression and about 7 minutes to apply page compression to partitions averaging 30 million rows.
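If you’re curious which compression setting each partition actually ended up with, you can check the sys.partitions catalog view; dbo.yourTableName is a placeholder here:

/* Verify the compression level of every partition of every index */
Select i.name As IndexName
     , p.partition_number
     , p.data_compression_desc
     , p.rows
From sys.partitions As p
Join sys.indexes As i
    On  i.object_id = p.object_id
    And i.index_id  = p.index_id
Where p.object_id = Object_Id('dbo.yourTableName')
Order By i.index_id, p.partition_number;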

Looking for more information on table partitioning? Check out my overview of partitioning, my example code, and my article on indexing on partitioned tables.

Why I’m Blogging Less

I’ve received a few questions asking why I’ve been blogging less frequently, and even one inquiry after my health. Rest assured, I’m completely fine. But there are two perfectly good reasons why I’ve been blogging less these days.

East Iowa SQL Saturday:

I’m the event organizer for East Iowa SQL Saturday, which is eating up a lot of my free time. If you haven’t yet heard about our SQL Saturday event, let me give you a brief overview. It’s a FREE, one-day training event geared toward SQL Server professionals and anyone who wants to learn more about SQL Server. We have 22 sessions planned covering a variety of topics, from Business Intelligence to Disaster Recovery to SQL Server 2008 topics. And if you’re a .NET developer, we also have some .NET-related presentations, including PowerShell and MVC.

We’re very fortunate to have snagged an excellent set of speakers. Jessica Moss, Louis Davidson, Timothy Ford, Jason Strate, and Alex Kuznetsov are just a few of the great speakers we have lined up.

There’s only a handful of spots left, so if you’re interested in attending, you should register soon. To find out more details about the speakers and sessions, or to register, be sure to check out our website at http://sqlsaturday.380pass.org.

The Other Reason:


Yes, that’s right, I’m with child. Expecting. Eating for two. Bun in the oven. In the family way. You get the idea.

So when I’m not at work, planning SQL Saturday, or playing Civilization Revolution, I’m sleeping. For those who remotely care, I’m due around Super Bowl time in February 2010.

2010: The Year I Make Contact


Rest assured, this blog isn’t going away. And hopefully once I get through SQL Saturday and then PASS Summit, I’ll have more free time again. :)