Performance Comparison of Singleton, XML, and TVP Inserts

As promised, today I took a look at the performance of bulk inserts using XML and Table-Valued Parameters. I also compared them against singleton inserts to show the value of the bulk-insert approach.

My tests were pretty simple: insert 100 records using each method. Each test was executed 10 times to ensure consistency. The duration was recorded in microseconds.

The goal was to compare the performance of the inserts themselves. Because I was executing the entire test within SQL Server, I isolated just the actual insert transactions and ignored everything else, such as the loading of the data; that work would normally be performed by the calling application.
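
For context, here is roughly what the three approaches look like. This is a simplified sketch rather than the actual test harness; the table, type, and procedure names (dbo.testInsert, dbo.testInsertType, dbo.insertTestRecords) are made up for illustration.

/* Hypothetical test table */
Create Table dbo.testInsert (recordID int, textData varchar(50));
Go

/* 1) Singleton method: one insert statement per record,
      typically issued in a loop by the calling application */
Insert Into dbo.testInsert (recordID, textData)
Select 1, 'abc';

/* 2) XML method: all records arrive as a single XML document,
      which is shredded and inserted in one statement */
Declare @xmlData xml =
    '<records><r id="1" txt="abc"/><r id="2" txt="def"/></records>';

Insert Into dbo.testInsert (recordID, textData)
Select x.r.value('@id',  'int')
     , x.r.value('@txt', 'varchar(50)')
From @xmlData.nodes('/records/r') As x(r);
Go

/* 3) TVP method: all records are passed to a procedure as a
      table-valued parameter and inserted in one statement */
Create Type dbo.testInsertType As Table
    (recordID int, textData varchar(50));
Go

Create Procedure dbo.insertTestRecords
    @records dbo.testInsertType ReadOnly
As
    Insert Into dbo.testInsert (recordID, textData)
    Select recordID, textData
    From @records;
Go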

So without further ado… screenshots of the Profiler traces:

[Profiler trace: Single Insert Method]

[Profiler trace: XML Method]

[Profiler trace: Table-Valued Parameter Method]

Summary

Method              Avg CPU    Avg Reads    Avg Writes    Avg Duration (µs)
Singleton Method    3          202          0             13378
XML Method          0          222          0             3124
TVP Method          1          207          0             780

As expected, both the XML and the TVP methods performed significantly better than the singleton method. As hoped, the table-valued parameter performed the best of the three, completing in roughly a quarter of the XML method's duration.

Large Updates on Replicated Tables

Late last night, I executed an update against 70 million (70mm) records in a replicated table. This was a somewhat low-priority update, so the primary goal (aside from the data change itself) was to avoid any issues on the publishing and subscribing servers, including replication latency errors. I have performed many large updates in the past, but this was my first on a replicated table.

To minimize impact, one of our system DBAs suggested the use of a replicated stored procedure. Basically, instead of replicating each of the 70mm updates as a singleton transaction, only the stored procedure call is replicated; the subscribing server then executes the procedure and performs the bulk update locally. This was my first time using the technique, and it worked beautifully.
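
For illustration, the update procedure itself can be very simple. Here is a rough sketch of what the dbo.myReplicatedUpdateProc referenced in the script below might look like; the table and column names are placeholders, and the procedure would be published as an article with the "Execution of the stored procedure" option so that only the call is replicated:

/* Rough sketch only; dbo.myReplicatedTable and someColumn
   are placeholders for the real table and update */
Create Procedure dbo.myReplicatedUpdateProc
      @minRecordID int
    , @maxRecordID int
As
Begin

    Set NoCount On;

    Update dbo.myReplicatedTable
    Set someColumn = 'newValue'
    Where record_id Between @minRecordID And @maxRecordID;

End;
Go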

Another of my colleagues, Jeff M., suggested the use of a control table. Normally, I would just output the last affected ID and update the script manually. The control-table approach worked so well, however, that I will be adopting it in all future update scripts.

Using the following pseudo-script and a replicated stored procedure, I was able to execute the update on 70mm records in 11.5 hours with *zero* impact on the servers and replication. I could’ve probably increased the batch size and reduced the execution time even further, but as I mentioned, this was a low priority update, so there was no need to push it.

This process should work equally well on non-replicated updates; merely replace the Execute statement with the actual update.

/************ Chunked Update Script with Control Table ************/
 
/* --------------------- Preparation Script --------------------- */
 
/* Note: Execute this section in a separate query window */
 
Use yourDatabase;
Go
 
Set NoCount On;
 
/* Create a control table to facilitate tweaking of parameters */
Create Table dbo.scratch_largeUpdate_control
(
      sizeOfBatch       int
    , waitForDelay      char(8)
    , minRecordToUpdate int 
    , maxRecordToUpdate int
);
 
/* Create your control data; you only want 1 row in this table */
Insert Into dbo.scratch_largeUpdate_control 
(sizeOfBatch, waitForDelay, minRecordToUpdate, maxRecordToUpdate)
Select 10000, '00:00:05', 40297132, 107459380;
 
/* Update Script */
Update dbo.scratch_largeUpdate_control
Set sizeOfBatch  = 100000
  , waitForDelay = '00:00:30';
 
 
/* ------------------------ Update Script ------------------------ */
Use yourDatabase;
Go
 
Set NoCount On;
 
Declare 
      @batchSize        int
    , @minID            int
    , @maxID            int
    , @procMinID        int
    , @procMaxID        int
    , @delay            char(8)
    , @statusMsg        varchar(1000);
 
Begin Try
 
    IF @@SERVERNAME Not In ('PRODSERVER')
        RAISERROR('Sorry, this cannot be executed here!', 16, 1);
 
    If Not Exists(
            Select object_id 
            From sys.objects 
            Where [name] = 'scratch_largeUpdate_control' 
            And type = 'U' )
        RaisError ('ERROR: Control table does not exist!', 16, 1)
            With NoWait;
    Else
        Select 
              @minID        = minRecordToUpdate 
            , @maxID        = maxRecordToUpdate 
            , @batchSize    = sizeOfBatch
            , @delay        = waitForDelay
        From dbo.scratch_largeUpdate_control With (NoLock);
 
    Set @statusMsg = 'Beginning update; batch size of ' 
        + Cast(@batchSize As varchar(10)) + ', delay of ' 
        + @delay + ' defined.  Estimate ' 
        + Cast((((@maxID - @minID) / @batchSize) + 1) As varchar(10)) 
        + ' iterations to be performed.'
 
    RaisError (@statusMsg, 10, 1) With NoWait;
 
    While @minID < @maxID
    Begin
 
        Select @procMinID = @minID
            , @procMaxID = (@minID + (@batchSize - 1));
 
        /* Execute actual update code here 
           OR 
           Call a replicated stored procedure, i.e. */
        Execute dbo.myReplicatedUpdateProc 
              @minRecordID = @procMinID
            , @maxRecordID = @procMaxID;
 
        Set @statusMsg = 
            'Updating records ' + Cast(@minID As varchar(10)) 
            + ' through ' + Cast((@minID + (@batchSize - 1)) 
            As varchar(10)) + '...';
 
        RaisError (@statusMsg, 10, 1) With NoWait;
 
        /* Update our control table with the next record ID
           to be processed.  In the event of an error,
           we can restart from here. */
        Update dbo.scratch_largeUpdate_control 
        Set minRecordToUpdate = @minID + @batchSize;
 
        Select @minID = @minID + @batchSize; 
        WaitFor Delay @delay; -- breather for the server
 
        /* Check to see if our control values have changed */
        If Not Exists(
            Select * 
            From dbo.scratch_largeUpdate_control With (NoLock) 
            Where @batchSize = sizeOfBatch And @delay = waitForDelay)
        Begin
 
            /* There was a change, so grab our new values */
            Select @batchSize = sizeOfBatch
                 , @delay = waitForDelay
            From dbo.scratch_largeUpdate_control With (NoLock)
 
            /* Print a status message with the new values */
            Set @statusMsg = 'Parameters changed:  batch size = ' 
                + Cast(@batchSize As varchar(10)) 
                + ', delay = ' + @delay;
 
            RaisError (@statusMsg, 10, 1) With NoWait;
 
        End
    End
 
    RaisError ('Success!', 10, 1) With NoWait;
 
End Try
/* Handle your errors */
Begin Catch
 
        Set @statusMsg = 'An error has occurred and the last '
                         + 'transaction has been rolled back. '
                         + 'Last record successfully updated was '
                         + 'record_id = ' 
                         + Cast((@minID - 1) As varchar(10));
 
        RaisError (@statusMsg, 16, 1) With NoWait;
 
        /* Return the error message */
        Select Error_Number()
            , Error_Procedure()
            , DB_Name()
            , Error_Line()
            , Error_Message()
            , Error_Severity()
            , Error_State();
 
End Catch;
 
/* -------------------------- Clean-Up ----------------------------
Drop Table dbo.scratch_largeUpdate_control;
----------------------------------------------------------------- */

Row Concatenation in T-SQL

Have you ever needed to create a comma-delimited list of related records, grouped by a parent record? This is one of those business requirements that just happens to pop up every so often. For example, turning this:

AuthorID    BookName
1           “Great Expectations”
1           “David Copperfield”
1           “Bleak House”
2           “Divine Comedy”
3           “Pride and Prejudice”
3           “Mansfield Park”

into this:

AuthorID    ListOfBooks
1           “Great Expectations”, “David Copperfield”, “Bleak House”
2           “Divine Comedy”
3           “Pride and Prejudice”, “Mansfield Park”

There are a handful of ways you can handle this, especially when dealing with small data sets. However, the problem becomes a bit trickier when dealing with large record sets (i.e. hundreds of thousands or even millions of records). My first attempt at a solution used a CTE (common table expression). It did the job but was very slow. Looking for a better solution, I discovered the XML method.
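
For reference, that first attempt looked something like the following. This is just a sketch of the recursive-concatenation pattern rather than my exact query, and it reuses the same sample table variable as the XML example below:

/* Recursive-CTE concatenation (works, but slow on large sets) */
Declare @myTable Table 
    (customerID int, textData varchar(10));

Insert Into @myTable
Select 1, 'abc' Union All
Select 1, 'def' Union All
Select 2, 'uvw' Union All
Select 2, 'xyz';

With ranked As
(
    /* Number each value within its customer */
    Select customerID
         , textData
         , Row_Number() Over (Partition By customerID 
                              Order By textData) As rowNum
    From @myTable
)
, concatenated As
(
    /* Anchor: the first value for each customer */
    Select customerID
         , Cast(textData As varchar(max)) As textData
         , rowNum
    From ranked
    Where rowNum = 1

    Union All

    /* Recursion: append the next value to the running string */
    Select r.customerID
         , c.textData + ', ' + r.textData
         , r.rowNum
    From concatenated As c
    Join ranked As r
        On  r.customerID = c.customerID
        And r.rowNum     = c.rowNum + 1
)
/* Keep only the fully built string for each customer */
Select customerID
     , textData
From concatenated As c
Where rowNum = (Select Max(rowNum) 
                From ranked 
                Where customerID = c.customerID);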

To give you a quick example of the XML method:

/* Create a table variable to play with */
Declare @myTable Table 
    (customerID int, textData varchar(10));
 
/* Populate some test records */
Insert Into @myTable
Select 1, 'abc' Union All
Select 1, 'def' Union All
Select 2, 'uvw' Union All
Select 2, 'xyz'
 
/* Just take a look at the data
   before we format it */
Select * From @myTable;
 
/* Let's take a look at what
   For XML Raw will return 
   for us */
Select textData
From @myTable
Where customerID = 1
Order By textData
For XML Raw;
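/* The query above returns a single XML fragment:
   <row textData="abc"/><row textData="def"/>      */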
 
/* Now consolidate the data, using
   the For XML Raw option to 
   concatenate the textData column */
Select customerID
        , Replace( Replace( Replace(
            (   Select textData
                From @myTable As a
                Where a.customerID = b.customerID
                Order By textData
                For XML Raw)
                , '"/><row textData="', ', ')
                , '<row textData="', '')
                , '"/>', '')
            As 'textData'
From @myTable b
Group By customerID;
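
With the sample data above, the final query returns 'abc, def' for customer 1 and 'uvw, xyz' for customer 2.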

This has become my default method for handling this report requirement. While discussing this with a colleague, he mentioned a similar approach that instead uses Cross Apply with For XML Path. Wanting to see which performed better, I ran the following two queries against the AdventureWorks sample database (2008):

/* Method 1 */
Select ProductsOrdered
    , Count(*) As 'salesOrders'
From (
    Select SalesOrderID
            , Replace( Replace( Replace(
                (   Select Top 10 ProductID
                    From Sales.SalesOrderDetail As sod With (NoLock)
                    Where soh.SalesOrderID = sod.SalesOrderID
                    Order By ProductID
                    For XML Raw)
                    , '"/><row ProductID="', ', ')
                    , '<row ProductID="', '')
                    , '"/>', '')
                As 'ProductsOrdered'
    From Sales.SalesOrderHeader As soh With (NoLock)
) x
Group By ProductsOrdered
Order By Count(*) Desc
Option (MaxDop 1);
 
/* Method 2 */
Select ProductsOrdered
    , Count(*) As 'salesOrders'
From (
    Select SalesOrderID
        , SubString(ProductsOrdered, 1, Len(ProductsOrdered) - 1) 
              As 'ProductsOrdered'
    From Sales.SalesOrderHeader As soh With (NoLock)
    Cross Apply (
                    Select Top 10 
                        Cast(ProductID As varchar(10)) + ', '
                    From Sales.SalesOrderDetail As sod With (NoLock)
                    Where sod.SalesOrderID = soh.SalesOrderID
                    Order By ProductID
                    For XML Path('')
                ) X(ProductsOrdered)
) x
Group By ProductsOrdered
Order By Count(*) Desc;
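
A quick note on Method 2: For XML Path('') with an unnamed column simply concatenates the values, which leaves a trailing ', ' on the string; because Len() ignores the trailing space, the SubString(..., 1, Len(...) - 1) wrapper trims off the final comma.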

Here are the results:

[Profiler trace: Row Concatenation Comparison]

As evidenced in the image above, the first method had slightly higher CPU and double the duration, while the second method had almost double the writes and significantly more reads.

I was hoping for a clear winner, but apparently each method has its benefits. I’ll probably continue to stick with my original, more resource-friendly method (Method 1), unless someone suggests a better solution. :)

Max INT Identity Value Reached (DBCC CheckIdent)

One of my colleagues shared the following experience.

Recently, the identity column on one of his tables reached the maximum value allowable for an INT data type (2,147,483,647). Obviously, this caused all sorts of errors, as the application was unable to insert records into the table. This is a customer-facing table, so the issue needed to be resolved quickly. Converting the column to a BIGINT would have required a significant amount of downtime. The solution? Reseed the identity column to -2,147,483,648 and have the values count upward toward zero. This took only seconds, and the app was back up immediately. It is, of course, a temporary fix; it gives the team time to schedule a more convenient window to resolve the issue permanently.

Here’s how to do this:

DBCC CHECKIDENT ('tableName', RESEED, -2147483648);
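
To check the current identity value without changing anything, you can run the same command with the NORESEED option:

DBCC CHECKIDENT ('tableName', NORESEED);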

Thanks, Peter, for sharing this tip!