User Groups Are Like Guilds…

(channeling “Forrest Gump”)… you never know what you’re gonna get. Ack! Sorry, I just can’t help myself sometimes. Moving on…

I was recently discussing guilds with my gamer husband and he commented how much running a user group sounds like running a guild. For those of you who aren’t already aware, before I traded my gaming addiction for a SQL one, he and I ran a guild together with around 140 members.

The Guild
Not sure what a guild is? Check out The Guild, a popular, very humorous, and only slightly exaggerated web series.

The more I’ve thought about what my husband said, the more I realized how right he is. Allow me to share my (questionable) thoughts on the subject:

Guild Masters

Some guild masters are great leaders and others are just very dedicated; the same is true of user group leaders. If you spend all day trolling forums and working on maxing your DPS, you’re probably a good gamer, but that doesn’t necessarily make you a good guild master. Similarly, being a SQL samurai does not necessarily prepare you to lead a user group. A handful of people are just naturally good leaders; most everyone else has to acquire the skill, often through painful experience. Before starting a guild or user group, you should ask yourself the following questions:

  • Are you in the market for an unpaid part-time job?
  • Do you like to alphabetize your DVD collection?
  • Do you enjoy helping n00bs (junior admins)?
  • Have you ever led an anti-social, semi-violent mob before?

If you answered “no” to any of those questions, don’t worry; it doesn’t necessarily mean you shouldn’t start your own guild or user group. But you may want to consider first joining an existing group to ensure you’ll enjoy the experience. Either way, you’ll quickly find out how important it is to have…

Officers

You can’t do it alone. Well, you can try, but don’t expect the 40-man heroic raid you scheduled to start on time or run smoothly. Every guild and group leader needs supportive and dedicated officers. If it’s your first time leading a group, try to recruit someone who has leadership experience but perhaps doesn’t have the time or energy to be El Jefe; this person can be an invaluable resource for you. And if you have run a group before, you still want officers to help distribute the workload. Officers can help with a variety of tasks, from managing supplies to organizing major events. Lastly, they’re also a great point of contact for your…

Guildies

Guildies (members) are the fine men and women who have entrusted you to lead them into battle (provide stimulating meetings). Without them, you would have no guild (user group). They have joined for any number of reasons: some are new to the game and want to learn (junior DBAs), some are interested in meeting new people with similar interests, and others are just there for the free food.

Over time, you’ll find membership waxes and wanes; people switch servers (move to a new city), trade in gaming for a more boring hobby (switch from DBA to sysadmin), or just run out of time in the day. There’s little you can do to change this, so you’ll inevitably have to do some new-member recruitment. However, if you’ve got a good group, you’ll find much of your advertisement is by the word-of-mouth of current members. Still want to recruit new members? Try throwing some big…

Events

Whether it’s an end-of-game dungeon or a SQL Saturday, everyone loves a good event. There are a couple of things you should be aware of, though. First, always, ALWAYS plan for people to not show. Don’t take it personally; life just happens. I’ve heard that 70% of registrants is a good estimate of how many people will actually show up.

Second, while everyone loves to attend events, not many people want to actually help organize one. If you’re lucky enough to get volunteers, treat them very well! You’ll quickly find out a good volunteer is worth his or her weight in…

Loot

Just as dungeon bosses drop loot (prizes) both good and bad, so do sponsors. Very few guildies are motivated solely by loot, and loot is not absolutely necessary for a successful event. Still, everyone likes to win, and there’s really nothing like the joy of rolling a perfect 100 to score that epic dagger (erm… I guess the best translation for this one is having your ticket drawn to win a copy of Quest’s Capacity Manager).

PvP

One of the most popular event types is a PvP (player vs. player) raid. This is where your guildies attack members of opposing factions, just for fun and bragging rights. To help make user groups even more guild-like, I’m currently organizing raids against the local Oracle and MySQL user groups. We hope to use the element of surprise to really lay into ‘em. I’ll let you know how it turns out.

All jokes aside, guilds and user groups may not be _exactly_ the same, but there are certainly a surprising number of similarities. If nothing else, both definitely involve a lot of time, effort, and dedication, and I think many of the leadership and organizational skills learned in a guild are truly transferable to the “real world.”

Hopefully by now you’re either feeling motivated to start a user group, or you’re off to the store to stock up on Cheetos and Mt. Dew, the sustenance of choice for most gamers, so you can survive the weekend locked in the basement playing PC games. Whatever the case… have fun!

Webcast Tomorrow!

I’m excited to be doing a webcast tomorrow with the infamous (er, illustrious) Brent Ozar for Quest’s Pain-of-the-Week. The title is “Getting Started with SQL Server Management Studio,” and as you’ve probably gathered, it’s pretty entry-level stuff. If you read my blog, then chances are you don’t need to watch this webcast. But if you know anyone who’s trying to learn SQL Server or is trying to make the upgrade from 2000 to 2005/2008, this may be a good webcast for them.

I’ve also got a few other speaking engagements coming up:

June 2nd: Cedar Valley .NET User Group
I’ll be reprising my Iowa Code Camp presentation on “SQL Server for the .NET Developer” for CVINETA. This presentation focuses on what you need to know about good table design, indexing strategies, and fragmentation… you know, what you wish every .NET developer knew about SQL Server. :)

June 11th: PoTW: Time-Saving SQL Server Management Studio Tips & Tricks
I’ll also be doing this webcast with @BrentO as a follow-up to our webcast tomorrow. It will focus on how to save time and improve your sanity by using some neat little tricks in SSMS 2008.

Overhead in Non-Unique Clustered Indexes

I’ve received a couple of questions regarding my article, Performance Considerations of Data Types, and the overhead associated with non-unique clustered indexes. I started to respond via e-mail, but my response was so long I decided to turn it into a blog post instead. ;)

I should start by clarifying that non-unique clustered indexes do not necessarily consume more space and overhead; it depends on the data stored. If you have duplicate clustered key values, the first instance of the value will be handled as though it were unique. Any subsequent values, however, will incur overhead to manage the uniquifier that SQL Server adds to maintain row uniqueness. The same overhead is incurred in non-clustered indexes, too, adding to the overall expense of this approach (there’s a quick sketch of that at the end of this post).

I think it helps to actually look at the data, so let’s walk through a few different common scenarios. We’ll create a table with a unique clustered index, a table with a non-unique clustered index but no duplicates, and a table with duplicate key values.

Also, a quick warning: I wrote this in SQL Server 2008, and since I’m on a 2008 kick, I decided to leave it that way. You can modify it pretty easily to work in 2005 if necessary.
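
If you do need 2005, here’s a rough sketch of the tweaks to the script below (my aside, untested): swap the 2008-only date columns and variables for datetime, and drop the 2008-only shortcuts (inline DECLARE initialization and +=). Keep in mind datetime is 8 bytes instead of 3, so the row sizes and page counts later on will differ from mine.

/* Rough 2005-friendly tweaks (hypothetical sketch, not part of the original script) */
Declare @myDate datetime
    , @myNumber int;

Select @myDate = '1990-01-01'
    , @myNumber = 1;

/* ...and inside the loop, increment like this instead of using += */
Select @myDate = DateAdd(day, 1, @myDate)
    , @myNumber = @myNumber + 1;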

Use sandbox;
Go
 
/* Unique, clustered index, no duplicate values */
Create Table dbo.uniqueClustered
(
      myDate    date        Not Null
    , myNumber  int         Not Null
    , myColumn  char(995)   Not Null
);
 
Create Unique Clustered Index CIX_uniqueClustered
    On dbo.uniqueClustered(myDate);
 
 
/* Non-unique clustered index, but no duplicate values */
Create Table dbo.nonUniqueNoDups
(
      myDate    date        Not Null
    , myNumber  int         Not Null
    , myColumn  char(995)   Not Null
);
 
Create Clustered Index CIX_nonUniqueNoDups
    On dbo.nonUniqueNoDups(myDate);
 
 
/* Non-unique clustered index, duplicate values */
Create Table dbo.nonUniqueDuplicates
(
      myDate    date        Not Null
    , myNumber  int         Not Null
    , myColumn  char(995)   Not Null
);
 
Create Clustered Index CIX_nonUniqueDuplicates
    On dbo.nonUniqueDuplicates(myDate);

I’m going to use the date data type in 2008 for my clustered index key. To ensure uniqueness for the first two tables, I’ll iterate through a few years’ worth of dates. This is typical of what you may see in a data mart, where you’d have one record with an aggregation of each day’s data. For the table with duplicate values, I’m going to insert the same date for each row.

/* Populate some test data */
Set NoCount On;
Declare @myDate date = '1990-01-01'
    , @myNumber int = 1;
 
While @myDate < '2010-01-01'
Begin
 
    Insert Into dbo.uniqueClustered
    Select @myDate, @myNumber, 'data';
 
    Insert Into dbo.nonUniqueNoDups
    Select @myDate, @myNumber, 'data';
 
    Insert Into dbo.nonUniqueDuplicates
    Select '2009-01-01', @myNumber, 'data';
 
    Select @myDate = DateAdd(day, 1, @myDate)
        , @myNumber += 1;
 
End;
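
If you want to double-check that the loop behaved, a quick count will do it (this snippet is my own aside, not part of the original walkthrough):

/* Optional sanity check (my addition): each table should report 7,305 rows */
Select Count(*) As uniqueClusteredRows     From dbo.uniqueClustered;
Select Count(*) As nonUniqueNoDupsRows     From dbo.nonUniqueNoDups;
Select Count(*) As nonUniqueDuplicatesRows From dbo.nonUniqueDuplicates;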

After running the above script, each table should have 7,305 records. This is obviously pretty small for a table, but it’ll serve our purposes. Now let’s take a look at the size of our tables:

/* Look at the details of our indexes */
 
/* Unique, clustered index, no duplicate values */
Select 'unique' As 'type', page_count, avg_page_space_used_in_percent, record_count
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'uniqueClustered'), Null, Null, N'Detailed') 
Where index_level = 0
Union All
/* Non-unique clustered index, but no duplicate values */
Select 'non-unique, no dups', page_count, avg_page_space_used_in_percent, record_count
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'nonUniqueNoDups'), Null, Null, N'Detailed') 
Where index_level = 0
Union All
/* Non-unique clustered index, duplicate values */
Select 'duplicates', page_count, avg_page_space_used_in_percent, record_count
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'nonUniqueDuplicates'), Null, Null, N'Detailed') 
Where index_level = 0;

Here are the results:

type                page_count           avg_page_space_used_in_percent record_count         min_record_size_in_bytes max_record_size_in_bytes
------------------- -------------------- ------------------------------ -------------------- ------------------------ ------------------------
unique              914                  99.8055102545095               7305                 1009                     1009
non-unique, no dups 914                  99.8055102545095               7305                 1009                     1009
duplicates          1044                 88.066036570299                7305                 1009                     1017

I want to point out a couple of things. First, there is no difference in the number of pages between the non-unique clustered index with no duplicates ([nonUniqueNoDups]) and the unique clustered index ([uniqueClustered]). The table with duplicate clustered key values, however, requires 14% more pages to store the same amount of data. Second, the [max_record_size_in_bytes] of the [nonUniqueDuplicates] table is 8 bytes more than that of the other two. We’ll discuss why in a minute.

Now let’s take a look at the actual data pages. For this, I’m going to use my page internals proc.

Execute dbo.dba_viewPageData_sp
      @databaseName = 'sandbox'
    , @tableName    = 'sandbox.dbo.uniqueClustered'
    , @indexName    = 'CIX_uniqueClustered';

I’m not going to post the entire results here, but I want to draw your attention to “m_slotCnt = 8”, which is near the top of the page. That means 8 records are stored on this page. Also, when you look near the end of the first record (Slot 0), you should see the following results:

Slot 0 Offset 0x60 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x00A9C060
00000000:   1000ee03 c3150b01 00000064 61746120 †..î.Ã......data 
[...]
000003F0:   00†††††††††††††††††††††††††††††††††††.  
 
Slot 0 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 1990-01-01                  
 
Slot 0 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 1                         
 
Slot 0 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data

Now let’s look at the table that has a non-unique clustered index but no duplicates:

Execute dbo.dba_viewPageData_sp
      @databaseName = 'sandbox'
    , @tableName    = 'sandbox.dbo.nonUniqueNoDups'
    , @indexName    = 'CIX_nonUniqueNoDups';

The m_slotCnt is also 8 for this page. This time, let’s glance at the first and second records (Slots 0 and 1, respectively):

Slot 0 Offset 0x60 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x62FDC060
00000000:   1000ee03 c3150b01 00000064 61746120 †..î.Ã......data    
[...]
000003F0:   00†††††††††††††††††††††††††††††††††††.                        
 
Slot 0 Column 0 Offset 0x0 Length 4 Length (physical) 0
UNIQUIFIER = 0                       
 
Slot 0 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 1990-01-01                  
 
Slot 0 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 1                         
 
Slot 0 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data
 
 
Slot 1 Offset 0x451 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x62FDC451
 
00000000:   1000ee03 c4150b02 00000064 61746120 †..î.Ä......data          
[...]
000003F0:   00†††††††††††††††††††††††††††††††††††.                        
 
Slot 1 Column 0 Offset 0x0 Length 4 Length (physical) 0
UNIQUIFIER = 0                       
 
Slot 1 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 1990-01-02                  
 
Slot 1 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 2                         
 
Slot 1 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data

We now see a new addition to the row, “UNIQUIFIER = 0.” This is SQL Server’s way of managing row uniqueness internally. You’ll notice that, because the clustered key values are unique, the UNIQUIFIER is set to 0 and the row size is still 1009; for all intents and purposes, the UNIQUIFIER is not consuming any space.

Update: The DBCC God himself, Paul Randal, explained that non-dupes actually have a NULL UNIQUIFIER, which DBCC PAGE displays as a 0. Thanks for explaining, Paul! I wondered about that but chalked it up to SQL voodoo.

Now let’s look at our final case, a non-unique clustered index with duplicate key values:

Execute dbo.dba_viewPageData_sp
      @databaseName = 'sandbox'
    , @tableName    = 'sandbox.dbo.nonUniqueDuplicates'
    , @indexName    = 'CIX_nonUniqueDuplicates';

Here’s where things get interesting. The m_slotCnt value is now 7, which means we’re storing one record fewer per page. Let’s look at the details:

Slot 0 Offset 0x60 Length 1009
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP     Record Size = 1009
Memory Dump @0x00A9C060
00000000:   1000ee03 df300b01 00000064 61746120 †..î.ß0.....data  
[...]
000003F0:   00†††††††††††††††††††††††††††††††††††.                        
 
Slot 0 Column 0 Offset 0x0 Length 4 Length (physical) 0
UNIQUIFIER = 0                       
 
Slot 0 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 2009-01-01                  
 
Slot 0 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 1                         
 
Slot 0 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data
 
 
Slot 1 Offset 0x451 Length 1017
 
Record Type = PRIMARY_RECORD         Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS
Record Size = 1017                   
Memory Dump @0x00A9C451
00000000:   3000ee03 df300b02 00000064 61746120 †0.î.ß0.....data  
[...]
000003F0:   000100f9 03010000 00†††††††††††††††††...ù.....                
 
Slot 1 Column 0 Offset 0x3f5 Length 4 Length (physical) 4
UNIQUIFIER = 1                       
 
Slot 1 Column 1 Offset 0x4 Length 3 Length (physical) 3
myDate = 2009-01-01                  
 
Slot 1 Column 2 Offset 0x7 Length 4 Length (physical) 4
myNumber = 2                         
 
Slot 1 Column 3 Offset 0xb Length 995 Length (physical) 995
myColumn = data

The first record, Slot 0, looks exactly the same as in the previous table; the UNIQUIFIER is 0 and the row size is 1009. The second record (Slot 1), however, now has a UNIQUIFIER value of 1 and the row size is 1017. Notice that the “Record Attributes” of Slot 1 are also different, with the addition of “VARIABLE_COLUMNS.” This is because the UNIQUIFIER is stored as a variable-length column. The extra 8 bytes of overhead break down to 4 bytes to store the UNIQUIFIER, 2 bytes to store the variable column offset, and 2 bytes to store the variable-column count. The tables we created used all fixed-length columns; you may notice some minor overhead differences if your table already contains variable columns.
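
If you’re wondering where the 8-rows-versus-7-rows difference comes from, some rough back-of-the-envelope math explains it (my aside; these are approximations, not exact storage-engine limits): a page is 8,192 bytes, the header takes 96, and each row also needs a 2-byte slot array entry.

/* Rough rows-per-page estimate (my aside): ~8,096 usable bytes per page,
   plus 2 bytes of slot array overhead per row */
Select 8096 / (1009 + 2) As approxRowsPerPage_noUniquifier      -- 8
     , 8096 / (1017 + 2) As approxRowsPerPage_withUniquifier;   -- 7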

To summarize, there is indeed a difference in the page structure between a unique clustered index and a non-unique clustered index; however, there’s only a possible performance and space impact when storing duplicate clustered key values. So there you go, more detail than you ever wanted to know about clustered indexes and uniqueness!
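
One last aside before moving on: I mentioned earlier that the uniquifier also gets carried into non-clustered indexes, because every non-clustered row stores the clustered key. If you want to see that for yourself, something like this should do it (a rough sketch using the demo tables above; expect a larger max_record_size_in_bytes on the duplicates table):

/* Hypothetical follow-up (not in the original walkthrough): compare non-clustered
   record sizes between the no-duplicates and duplicates tables */
Create Nonclustered Index IX_nonUniqueNoDups_myNumber
    On dbo.nonUniqueNoDups(myNumber);

Create Nonclustered Index IX_nonUniqueDuplicates_myNumber
    On dbo.nonUniqueDuplicates(myNumber);

Select 'non-unique, no dups' As 'type'
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'nonUniqueNoDups'), Null, Null, N'Detailed')
Where index_id > 1 And index_level = 0
Union All
Select 'duplicates'
    , min_record_size_in_bytes, max_record_size_in_bytes
From sys.dm_db_index_physical_stats(DB_ID(), Object_ID(N'nonUniqueDuplicates'), Null, Null, N'Detailed')
Where index_id > 1 And index_level = 0;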

Performance Considerations of Data Types

I’ve just finished my first real content for the PASS Performance SIG. I decided to write on “Performance Considerations of Data Types,” as I think this is one of the easiest and most overlooked topics in performance tuning. Here’s a summary:

Selecting inappropriate data types, especially on large tables with millions or billions of rows, can have significant performance implications. In this article, I’ll explain why and offer suggestions on how to select the most appropriate data type for your needs. The primary focus will be on common data types in SQL Server 2005 and 2008, but I’ll also discuss some aspects of clustered indexes and column properties. Most importantly, I’ll show some examples of common data-type misuse.
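
Just to give a flavor of what I mean by data-type misuse, here’s a quick made-up example (an illustration of the general idea, not lifted from the article itself):

/* Hypothetical illustration of data-type misuse (not from the article):
   oversized types waste bytes on every single row */
Create Table dbo.oversizedTypes
(
      orderDate   datetime  Not Null  -- 8 bytes; date (3 bytes) suffices if you only need the day
    , orderCount  bigint    Not Null  -- 8 bytes; int (4 bytes) covers ~2.1 billion
);

Create Table dbo.rightSizedTypes
(
      orderDate   date      Not Null  -- 3 bytes
    , orderCount  int       Not Null  -- 4 bytes
);

/* 9 bytes saved per row; across millions or billions of rows, that means
   fewer pages, smaller indexes, and less IO */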

If you’re interested in this content, you can find it here: Performance Considerations of Data Types.

Special thanks to Paul Randal and Paul Nielsen for providing me with technical reviews and great feedback. You guys are awesome!

Thanks also to Mladen Prajdic and Jeremiah Peschka for their great input. You guys are awesome, too!

Page Internals – Investigation Proc

As many of you know, I like to crawl around in page internals in my free time. It can be very enlightening, or just a good check to make sure that what you think is happening is actually happening. To help with this process, I’ve created myself a little stored procedure that I can simply pass a few parameters to and have it return the page data for me. So for those who don’t have anything better to do (er, who are as interested in page internals as I am), here’s my proc:

Create Procedure dbo.dba_viewPageData_sp
 
        /* Declare Parameters */
          @databaseName varchar(128)
        , @tableName    varchar(128)    = Null -- database.schema.tableName
        , @indexName    varchar(128)    = Null
        , @fileNumber   int             = Null
        , @pageNumber   int             = Null
        , @printOption  int             = 3    -- 0, 1, 2, or 3
        , @pageType     char(4)         = 'Leaf' -- Leaf, Root, or IAM
 
As
/*********************************************************************************
    Name:       dba_viewPageData_sp
 
    Author:     Michelle Ufford
 
    Purpose:    Retrieves page data for the specified table/page.
 
    Notes:      Can pass either the table name or the pageID, but must pass one, or
                you'll end up with no results. 
                If the table name is passed, it will return the first page.
 
        @tableName must be '<databaseName>.<schemaName>.<tableName>' in order to
            function correctly for cross-database joins.  
 
        @printOption can be one of following values:
            0 - print just the page header
            1 - page header plus per-row hex dumps and a dump of the page slot array
            2 - page header plus whole page hex dump
            3 - page header plus detailed per-row interpretation
 
        Page Options borrowed from: 
        https://blogs.msdn.com/sqlserverstorageengine/archive/2006/06/10/625659.aspx
 
        @pageType must be one of the following values:
            Leaf - returns the first page of the leaf level of your index or heap
            Root - returns the root page of your index
            IAM - returns the index allocation map chain for your index or heap
 
        Conversions borrowed from:
        http://sqlskills.com/blogs/paul/post/Inside-The-Storage-Engine-
        sp_AllocationMetadata-putting-undocumented-system-catalog-views-to-work.aspx
 
    Called by:  DBA
 
    Date        User    Description
    ----------------------------------------------------------------------------
    2009-05-06  MFU     Initial release for public consumption
*********************************************************************************
    Exec dbo.dba_viewPageData_sp
          @databaseName = 'AdventureWorks'
        , @tableName    = 'AdventureWorks.Sales.SalesOrderDetail'
        , @indexName    = 'IX_SalesOrderDetail_ProductID'
        --, @fileNumber   = 1
        --, @pageNumber   = 38208
        , @printOption  = 3
        , @pageType     = 'Root';
*********************************************************************************/
 
Set NoCount On;
Set XACT_Abort On;
Set Ansi_Padding On;
Set Ansi_Warnings On;
Set ArithAbort On;
Set Concat_Null_Yields_Null On;
Set Numeric_RoundAbort Off;
 
Begin
 
    Declare @fileID         int
        , @pageID           int
        , @sqlStatement     nvarchar(1200)
        , @sqlParameters    nvarchar(255)
        , @errorMessage     varchar(100);
 
    Begin Try
 
        If @fileNumber Is Null And @pageNumber Is Null And @tableName Is Null
        Begin
            Set @errorMessage = 'You must provide either a file/page number, or a table name!';
            RaisError(@errorMessage, 16, 1);
        End;
 
        If @pageType Not In ('Leaf', 'Root', 'IAM')
        Begin
            Set @errorMessage = 'You have entered an invalid page type; valid options are "Leaf", "Root", or "IAM"';
            RaisError(@errorMessage, 16, 1);
        End;
 
        If @fileNumber Is Null Or @pageNumber Is Null
        Begin
 
            Set @sqlStatement = 
            Case When @pageType = 'Leaf' Then
                'Select Top 1 @p_fileID = Convert (varchar(6), Convert (int, 
                    SubString (au.first_page, 6, 1) +
                    SubString (au.first_page, 5, 1)))
                , @p_pageID = Convert (varchar(20), Convert (int, 
                     SubString (au.first_page, 4, 1) +
                     SubString (au.first_page, 3, 1) +
                     SubString (au.first_page, 2, 1) +
                     SubString (au.first_page, 1, 1)))'
            When @pageType = 'Root' Then
                'Select Top 1 @p_fileID = Convert (varchar(6), Convert (int, 
                    SubString (au.root_page, 6, 1) +
                    SubString (au.root_page, 5, 1)))
                , @p_pageID = Convert (varchar(20), Convert (int, 
                     SubString (au.root_page, 4, 1) +
                     SubString (au.root_page, 3, 1) +
                     SubString (au.root_page, 2, 1) +
                     SubString (au.root_page, 1, 1)))'
            When @pageType = 'IAM' Then
                'Select Top 1 @p_fileID = Convert (varchar(6), Convert (int, 
                    SubString (au.first_iam_page, 6, 1) +
                    SubString (au.first_iam_page, 5, 1)))
                , @p_pageID = Convert (varchar(20), Convert (int, 
                     SubString (au.first_iam_page, 4, 1) +
                     SubString (au.first_iam_page, 3, 1) +
                     SubString (au.first_iam_page, 2, 1) +
                     SubString (au.first_iam_page, 1, 1)))'
            End + 
            'From ' + QuoteName(ParseName(@databaseName, 1)) + '.sys.indexes AS i
            Join ' + QuoteName(ParseName(@databaseName, 1)) + '.sys.partitions AS p
                On i.[object_id] = p.[object_id]
                And i.index_id = p.index_id
            Join ' + QuoteName(ParseName(@databaseName, 1)) + '.sys.system_internals_allocation_units AS au
                On p.hobt_id = au.container_id
            Where p.[object_id] = Object_ID(@p_tableName)
                And au.first_page > 0x000000000000 ' 
                + Case When @indexName Is Null 
                    Then ';' 
                    Else 'And i.name = @p_indexName;' End;
 
            Set @sqlParameters = '@p_tableName varchar(128)
                                , @p_indexName varchar(128)
                                , @p_fileID int OUTPUT
                                , @p_pageID int OUTPUT';
 
            Execute sp_executeSQL @sqlStatement
                        , @sqlParameters
                        , @p_tableName = @tableName
                        , @p_indexName = @indexName
                        , @p_fileID = @fileID OUTPUT
                        , @p_pageID = @pageID OUTPUT;
 
            End
            Else
            Begin
                Select @fileID = @fileNumber
                    , @pageID = @pageNumber;
            End;
 
        DBCC TraceOn (3604);
        DBCC Page (@databaseName, @fileID, @pageID, @printOption);
        DBCC TraceOff (3604);
 
    End Try
    Begin Catch
 
        Print @errorMessage;
 
    End Catch;
 
    Set NoCount Off;
    Return 0;
End
Go

This proc does have cross-database support, i.e. you can install it in your DBA database and use it to investigate data in other databases. Here’s an example…

    Exec dbo.dba_viewPageData_sp
          @databaseName = 'AdventureWorks'
        , @tableName    = 'AdventureWorks.Sales.SalesOrderDetail'
        , @indexName    = 'IX_SalesOrderDetail_ProductID';

… will return a nice data page:

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
 
PAGE: (1:11000)
 
BUFFER:
 
BUF @0x0391F140
 
bpage = 0x0C0C0000                   bhash = 0x00000000                   bpageno = (1:11000)
bdbid = 7                            breferences = 0                      bUse1 = 35177
bstat = 0x1c00009                    blog = 0x21212159                    bnext = 0x00000000

[waits for the "oohs" and "aahs" to subside...]

I also give you the option to specify a specific page, in case you want to follow the page trail (i.e. m_nextPage). I’m not really providing support for partitions, although I do have a dirty little piece of code to return a hobt with data if possible (i.e. “first_page > 0x000000000000”).
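
For example, to walk to the next page, you’d grab the m_nextPage value from a dump and feed it back in (the page number below is made up; substitute one from your own output):

/* Hypothetical example: jump straight to a specific page, e.g. to follow m_nextPage */
Exec dbo.dba_viewPageData_sp
      @databaseName = 'AdventureWorks'
    , @fileNumber   = 1
    , @pageNumber   = 11001
    , @printOption  = 3;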

Update: Special thanks to Jeremiah Peschka and Adam Machanic for showing me
QUOTENAME(PARSENAME(@databaseName, 1))! :)

Also, thanks to Paul Randal for his excellent blog posts on this very topic! Check out his blog post on DBCC Page. The conversion code was borrowed from his sp_AllocationMetadata proc.

Update 2: I’ve made some quick changes based on some feedback from Twitter. Thanks for the suggestions and hopefully you enjoy the updates.

Happy crawling!

Iowa Code Camp Resources

Thanks to everyone who attended my session on “SQL Server for the .NET Developer” at yesterday’s Iowa Code Camp! I hope you found the session informative. Here are the materials from the presentation:

Special thanks to @underwhelmed and @peschkaj for traveling to attend the Iowa Code Camp! :)

Open Giving

During the Iowa Code Camp, I had a chance to get involved with the Open Giving project. From the CodePlex page, here’s a brief overview:

An effort to create a complete registration system to create events and for volunteers to register for those events. Helps make it easier for organizations to enable people to give back to their communities.

Basically, it’s an open source software project to create a *free* event management system for volunteer organizations. This is a really great cause and I’m glad I’ve had a chance to help. If you’re interested in helping, please leave me a message and I’ll put you in touch with the appropriate folks.

Iowa Code Camp – Tomorrow!

Just a reminder that the Iowa Code Camp is tomorrow! The camp is being held at Kirkwood College in Cedar Rapids, Iowa, and starts at 8am. There are a lot of great sessions, and attendance is free! You can find more details on their website at http://iowacodecamp.com.

I’ll be presenting on “SQL Server for the .NET Developer.” I’ll be covering the basics of SQL Server, such as good table design, indexing strategies, and efficient queries.

The esteemed Jeremiah Peschka (@peschkaj) will be giving his presentation on “A Dynamic World Demands Dynamic SQL.” If you missed his AppDev SIG presentation, here’s your chance (and mine!) to see what you missed.

If you come to the Code Camp, make sure to stop by and say hi to me! :)