Replication Bug with Partitioned Tables

Recently, we came across a bug in SQL Server 2005 on one of our production servers. Apparently, if you execute an ALTER TABLE statement on a replicated table with more than 128 partitions, the Log Reader Agent will fail. A relatively obscure bug, I know. Microsoft has confirmed the bug, but I couldn’t find it documented anywhere on the intertubes, hence the inspiration for this blog post. Microsoft’s official solution for this issue is to upgrade to SQL Server 2008.
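For context, the failing scenario looked roughly like this (a hypothetical sketch; the partition function, scheme, and table names are all made up):

```sql
/* Hypothetical sketch -- all object names are made up.
   Assume pf_myDates ends up with more than 128 boundary points. */
CREATE PARTITION FUNCTION pf_myDates (datetime)
    AS RANGE RIGHT FOR VALUES ('2008-01-01', '2008-01-02' /* ...128+ more boundaries... */);

CREATE PARTITION SCHEME ps_myDates
    AS PARTITION pf_myDates ALL TO ([PRIMARY]);

/* With a table built on ps_myDates and published for transactional
   replication, any ALTER TABLE against it -- even a trivial one --
   would cause the Log Reader Agent to fail: */
ALTER TABLE dbo.myPartitionedTable
    ADD myNewColumn int NULL;
```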

For various reasons, we were unable to execute an upgrade at the time. And since this was a 2-terabyte database, we wanted to come up with a solution that wouldn’t involve reinitializing the entire publication. Our quick fix while we were troubleshooting the issue was to create a linked server to the production box. Not ideal, I know, but it worked in a pinch and minimized exposure of the issue.

Fortunately for us, the problem on the publication database was pretty easy to solve. All of the affected partition functions had empty partitions created several months into the future, so we simply merged any empty partition ranges for future dates.

For our now-out-of-date subscribers, we applied static row filtering to any table with more than 100 million records. While this would introduce some overhead in the replication of these tables, it would allow us a much faster recovery time. We decided to use the start of the most recent hour as our filtering criterion, just to give us a “clean” filter, so we had to delete data from any table where we were going to apply the filter. After that, it was simply a matter of resuming replication.
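For the curious, merging an empty future boundary is a one-liner per range. A minimal sketch, using a made-up partition function name and boundary value:

```sql
/* Hypothetical example -- pf_myDates and the boundary value are
   made up for illustration.  Merging an EMPTY partition is a
   metadata-only operation, so no data movement occurs. */
ALTER PARTITION FUNCTION pf_myDates()
    MERGE RANGE ('2009-06-01');
```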

All things considered, it took us a little over a day to recover from the issue. Most of that time was spent troubleshooting the problem and identifying a workable solution; actual execution of the changes was pretty quick. Moral of the story? Upgrade to SQL Server 2008. :)

Fragmentation on Replicated Tables

I’ve heard more than one person state that they do not need to defrag their tables because they’re replicated and the source tables are defragged regularly. Let’s set this myth straight.

Repeat after me: defragmentation processes are not replicated.

Good. Thank you.

For those non-believers, let me prove it:

SET NOCOUNT ON;
 
/* Create publication db */
CREATE DATABASE sandbox_publisher;
GO
 
/* Create subscription db */
CREATE DATABASE sandbox_subscriber;
GO
 
/* Navigate to publisher db */
USE sandbox_publisher;
GO
 
/* Create a table to play with */
CREATE TABLE myTable
(
      myID UNIQUEIDENTIFIER
    , myColumn VARCHAR(2000)
 
    CONSTRAINT PK_myTable PRIMARY KEY CLUSTERED (myID)
);
/* NOTE: please never, ever use this
   type of clustered index in real-life.
   The whole purpose is to generate
   substantial fragmentation for 
   illustrative purposes. */
 
/* Populate it with some non-sequential
   records to generate fragmentation */
DECLARE @iteration INT;
SET @iteration = 0;
 
WHILE @iteration < 1000
BEGIN
    INSERT INTO myTable
    SELECT NewID(), REPLICATE('A', 2000);
 
    SET @iteration = @iteration + 1;
END
 
/* Now let's verify that we really did
   make a mess of our clustered index */
SELECT avg_fragmentation_in_percent
    , page_count
FROM sys.dm_db_index_physical_stats 
(DB_ID(), OBJECT_ID(N'MyTable'), NULL , NULL, N'Limited') 
OPTION (MaxDop 1);
 
/* Results:
 
avg_fragmentation_in_percent page_count
---------------------------- --------------------
99.0157480314961             508
 
99% fragmented, not too shabby!  Your results 
may vary slightly, based on the guid values
generated by NewID().
*/
 
/* Now, let's set up replication */
USE sandbox_publisher;
GO
 
/* Enable publication */
EXECUTE SP_REPLICATIONDBOPTION
      @dbname = N'sandbox_publisher'
    , @optname = N'publish'
    , @VALUE = N'true';
GO
 
/* Create our publication */
EXECUTE SP_ADDPUBLICATION
      @publication = N'myTestPublication' 
    , @sync_method = N'concurrent'
    , @description = N'Test Publication'
    , @status = N'active'
    , @independent_agent = N'true'
    , @immediate_sync = N'true'
    , @retention = 0
GO
 
/* Create our snapshot agent */
EXECUTE SP_ADDPUBLICATION_SNAPSHOT
      @publication = N'myTestPublication'
    , @frequency_type = 1;
 
/* Add our table as an article */
EXECUTE SP_ADDARTICLE
      @publication = N'myTestPublication'
    , @article = N'myTable'
    , @source_owner = N'dbo'
    , @source_object = N'myTable'
    , @type = N'logbased'
    , @destination_table = N'myTable'
    , @destination_owner = N'dbo'
    , @ins_cmd = N'CALL sp_MSins_dbomyTable'
    , @del_cmd = N'CALL sp_MSdel_dbomyTable'
    , @upd_cmd = N'MCALL sp_MSupd_dbomyTable';
GO
 
/* Generate an initial snapshot */
EXECUTE sp_startpublication_snapshot
      @publication = N'myTestPublication';
 
/* Create our subscription */
EXECUTE SP_ADDSUBSCRIPTION 
      @publication = N'myTestPublication'
    , @subscriber = N'YourServerName'
    , @destination_db = N'sandbox_subscriber';
 
/* At this point, basic transactional replication 
   should be configured.  You should now have a 
   copy of myTable in your subscription database. */
 
USE sandbox_subscriber;
GO
 
/* Let's take a look at fragmentation on the 
   subscription database. */
SELECT avg_fragmentation_in_percent
    , page_count
FROM sys.dm_db_index_physical_stats 
(DB_ID(), OBJECT_ID(N'MyTable'), NULL , NULL, N'Limited') 
OPTION (MaxDop 1);
 
/* Results:
 
avg_fragmentation_in_percent page_count
---------------------------- --------------------
0.29940119760479             334
 
Less than 1% fragmented!  Why is this?  During
initialization, a copy of the schema and data
is sent to the subscriber.  The data is sorted
before insertion into the destination table.
*/
 
USE sandbox_publisher;
GO
 
/* Let's insert another 1000 records and see
   what happens... */
DECLARE @iteration INT;
SET @iteration = 0;
 
WHILE @iteration < 1000
BEGIN
    INSERT INTO sandbox_publisher.dbo.myTable
    SELECT NewID(), REPLICATE('A', 2000);
 
    SET @iteration = @iteration + 1;
END
 
SELECT avg_fragmentation_in_percent
    , page_count
FROM sys.dm_db_index_physical_stats 
(DB_ID(), OBJECT_ID(N'MyTable'), NULL , NULL, N'Limited') 
OPTION (MaxDop 1);
 
/* Results:
 
avg_fragmentation_in_percent page_count
---------------------------- --------------------
98.6193293885602             1014
 
The publisher is still very fragmented 
(as expected) */
 
USE sandbox_subscriber;
GO
 
SELECT avg_fragmentation_in_percent
    , page_count
FROM sys.dm_db_index_physical_stats 
(DB_ID(), OBJECT_ID(N'MyTable'), NULL , NULL, N'Limited') 
OPTION (MaxDop 1);
 
/* Results:
 
avg_fragmentation_in_percent page_count
---------------------------- --------------------
95.4034729315628             979
 
But now the subscriber is fragmented too!  This
is because each individual transaction is 
replicated, so out-of-order inserts affect
both the publishing and the subscribing 
databases equally.
*/
 
/* Finally, let's rebuild our index on the 
   publisher and see what happens */
USE sandbox_publisher;
GO
 
ALTER INDEX PK_myTable ON myTable Rebuild;
 
SELECT avg_fragmentation_in_percent
    , page_count
FROM sys.dm_db_index_physical_stats 
(DB_ID(), OBJECT_ID(N'MyTable'), NULL , NULL, N'Limited') 
OPTION (MaxDop 1);
 
/* Results:
 
avg_fragmentation_in_percent page_count
---------------------------- --------------------
0                            667
 
0% fragmentation in the publishing table!  
What about our subscribing table?
*/
 
USE sandbox_subscriber;
GO
 
SELECT avg_fragmentation_in_percent
    , page_count
FROM sys.dm_db_index_physical_stats 
(DB_ID(), OBJECT_ID(N'MyTable'), NULL , NULL, N'Limited') 
OPTION (MaxDop 1);
 
/* Results:
 
avg_fragmentation_in_percent page_count
---------------------------- --------------------
95.4034729315628             979
 
As you can see, the subscribing table was
completely unaffected by the defrag
activities on the publisher.
*/

So there you have it, proof that defrag operations are not replicated! Now go defrag your replicated tables. :)

Oh, and if you need help defragging your indexes, check out my index defrag script (2005, 2008).

Replication Monitor

In my last blog post, I provided a script to view replication latency. Ian Kirk took the script and ran with it, adding centralized execution and permanent logging. I’ve tweaked it a little bit further and deployed it to production. So far, so good.

Here’s the latest and greatest for those interested:

If Object_ID('dbo.dba_replicationMonitor') Is Null
Begin
    Create Table dbo.dba_replicationMonitor
    ( 
          monitor_id            int Identity(1,1)   Not Null
        , monitorDate           smalldatetime       Not Null 
        , publicationName       sysname             Not Null
        , publicationDB         sysname             Not Null
        , iteration             int                 Null
        , tracer_id             int                 Null
        , distributor_latency   int                 Null
        , subscriber            varchar(1000)       Null
        , subscriber_db         varchar(1000)       Null
        , subscriber_latency    int                 Null
        , overall_latency       int                 Null 
    );
End;
 
If ObjectProperty(Object_ID('dbo.dba_replicationLatencyMonitor_sp'), N'IsProcedure') = 1
Begin
    Drop Procedure dbo.dba_replicationLatencyMonitor_sp;
    Print 'Procedure dba_replicationLatencyMonitor_sp dropped';
End;
Go
 
Set Quoted_Identifier On
Go
Set ANSI_Nulls On
Go
 
Create Procedure dbo.dba_replicationLatencyMonitor_sp
 
        /* Declare Parameters */
          @publicationToTest    sysname        = N'yourPublicationName'
        , @publicationDB        sysname        = N'yourPublicationDB'
        , @replicationDelay     varchar(10)    = N'00:00:30'
        , @iterations           int            = 5
        , @iterationDelay       varchar(10)    = N'00:00:30'
        , @displayResults       bit            = 0
        , @deleteTokens         bit            = 1
As
/*********************************************************************************
    Name:       dba_replicationLatencyMonitor_sp
 
    Author:     Michelle F. Ufford
 
    Purpose:    Retrieves the amount of replication latency in seconds
 
    Notes:      Default settings will run 1 test every minute for 5 minutes.
 
                @publicationToTest = defaults to yourPublicationName publication
 
                @publicationDB = the database that is the source for the publication.
				    The tracer procs are found in the publishing DB.
 
                @replicationDelay = how long to wait for the token to replicate;
                    probably should not set to anything less than 10 (in seconds)
 
                @iterations = how many tokens you want to test
 
                @iterationDelay = how long to wait between sending test tokens
                    (in seconds)
 
                @displayResults = print results to screen when complete
 
                @deleteTokens = whether you want to retain tokens when done
 
    Called by:  DBA
 
    Date        Initials    Description
    ----------------------------------------------------------------------------
    2008-11-20   MFU        Initial Release
    2008-11-24	 ILK        Tweaked to allow for centralized execution 
                            Replaced temp table with permanent table.
    2008-11-25   MFU        More tweaking, added publication data to 
                            dba_replicationMonitor, fixed NULL latency data,
                            moved dba_replicationMonitor creation out of proc
*********************************************************************************
    Exec dbo.dba_replicationLatencyMonitor_sp
          @publicationToTest    = N'myTestPublication'
        , @publicationDB        = N'sandbox_publisher'
        , @replicationDelay     = N'00:00:05'
        , @iterations           = 1
        , @iterationDelay       = N'00:00:05'
        , @displayResults       = 1
        , @deleteTokens         = 1;
*********************************************************************************/
 
Set NoCount On;
Set XACT_Abort On;
 
Begin
 
    /* Declare Variables */
    Declare @currentIteration   int
          , @tokenID            bigint
          , @currentDateTime    smalldatetime
          , @sqlStatement       nvarchar(200)
          , @parmDefinition		nvarchar(500);
 
    Declare @tokenResults Table
    ( 
          iteration             int             Null
        , tracer_id             int             Null
        , distributor_latency   int             Null
        , subscriber            varchar(1000)   Null
        , subscriber_db         varchar(1000)   Null
        , subscriber_latency    int             Null
        , overall_latency       int             Null 
    );
 
    /* Initialize our variables */
    Select @currentIteration = 0
         , @currentDateTime  = GetDate();
 
    While @currentIteration < @iterations
    Begin
 
		/* Prepare the stored procedure execution string */
		Set @sqlStatement = N'Execute ' + @publicationDB + N'.sys.sp_postTracerToken ' + 
							N'@publication = @VARpublicationToTest , ' +
							N'@tracer_token_id = @VARtokenID OutPut;'
 
		/* Define the parameters used by the sp_ExecuteSQL later */
		Set @parmDefinition = N'@VARpublicationToTest sysname, ' +
			N'@VARtokenID bigint OutPut';
 
        /* Insert a new tracer token in the publication database */
        Execute sp_executesql 
              @sqlStatement
            , @parmDefinition
            , @VARpublicationToTest = @publicationToTest
            , @VARtokenID = @TokenID OutPut;
 
        /* Give a few seconds to allow the record to reach the subscriber */
        WaitFor Delay @replicationDelay;
 
        /* Prepare our statement to retrieve tracer token data */
        Select @sqlStatement = 'Execute ' + @publicationDB + '.sys.sp_helpTracerTokenHistory ' +
                    N'@publication = @VARpublicationToTest , ' +
                    N'@tracer_id = @VARtokenID'
            , @parmDefinition = N'@VARpublicationToTest sysname, ' +
                    N'@VARtokenID bigint';
 
        /* Store our results for retrieval later */
        Insert Into @tokenResults
        (
            distributor_latency
          , subscriber
          , subscriber_db
          , subscriber_latency
          , overall_latency
        )
        Execute sp_executesql 
              @sqlStatement
            , @parmDefinition
            , @VARpublicationToTest = @publicationToTest
            , @VARtokenID = @TokenID;
 
        /* Assign the iteration and token id to the results for easier investigation */
        Update @tokenResults
        Set iteration = @currentIteration + 1
          , tracer_id = @tokenID
        Where iteration Is Null;
 
        /* Wait for the specified time period before creating another token */
        WaitFor Delay @iterationDelay;
 
        /* Avoid endless looping... :) */
        Set @currentIteration = @currentIteration + 1;
 
    End;
 
    /* Display our results */
    If @displayResults = 1
    Begin
        Select 
              iteration
            , tracer_id
            , IsNull(distributor_latency, 0) As 'distributor_latency'
            , subscriber
            , subscriber_db
            , IsNull(subscriber_latency, 0) As 'subscriber_latency'
            , IsNull(overall_latency, 
                IsNull(distributor_latency, 0) + IsNull(subscriber_latency, 0))
                As 'overall_latency'
        From @tokenResults;
    End;
 
    /* Store our results */
    Insert Into dbo.dba_replicationMonitor
    (
          monitorDate
        , publicationName
        , publicationDB
        , iteration
        , tracer_id
        , distributor_latency
        , subscriber
        , subscriber_db
        , subscriber_latency
        , overall_latency
    )
    Select 
          @currentDateTime
        , @publicationToTest
        , @publicationDB
        , iteration
        , tracer_id
        , IsNull(distributor_latency, 0)
        , subscriber
        , subscriber_db
        , IsNull(subscriber_latency, 0)
        , IsNull(overall_latency, 
            IsNull(distributor_latency, 0) + IsNull(subscriber_latency, 0))
    From @tokenResults;
 
    /* Delete the tracer tokens if requested */
    If @deleteTokens = 1
    Begin
 
        Select @sqlStatement = 'Execute ' + @publicationDB + '.sys.sp_deleteTracerTokenHistory ' +
                    N'@publication = @VARpublicationToTest , ' +
                    N'@cutoff_date = @VARcurrentDateTime'
            , @parmDefinition = N'@VARpublicationToTest sysname, ' +
                    N'@VARcurrentDateTime datetime';
 
        Execute sp_executesql 
              @sqlStatement
            , @parmDefinition
            , @VARpublicationToTest = @publicationToTest
            , @VARcurrentDateTime = @currentDateTime;
 
    End;
 
    Set NoCount Off;
    Return 0;
End
Go
 
Set Quoted_Identifier Off;
Go
Set ANSI_Nulls On;
Go

 

Note: All of my stored procedures have standardized error handling that I remove before posting to avoid confusion; you may want to implement your own error handling.

Checking Replication Latency with T-SQL

This post may only appeal to a small subset of DBAs. :)

This particular script was born of a need to view replication latency in production. I had to investigate whether the replication latency issues we were experiencing in production were occurring on the publisher (us) or the subscriber (them), and address them accordingly. “Why not just use Replication Monitor?” you might ask. Good question! I would definitely use Replication Monitor if it were available to me; however, I do not have the necessary permissions on the Distributor. :(

Fortunately, SQL Server 2005 provides us with some tools to help: sp_postTracerToken and sp_helpTracerTokenHistory. These allow us to access the same Tracer Tokens feature found in Replication Monitor.
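At their simplest, the two procedures work like this (a minimal sketch, run in the publication database; the publication name is a placeholder):

```sql
/* Post a tracer token to the publication
   (run this in the publication database) */
Declare @tokenID bigint;
Execute sys.sp_postTracerToken
      @publication = N'yourPublicationName'
    , @tracer_token_id = @tokenID OutPut;

/* Give it time to flow through the distributor... */
WaitFor Delay '00:00:30';

/* ...then view the latency results for that token */
Execute sys.sp_helpTracerTokenHistory
      @publication = N'yourPublicationName'
    , @tracer_id = @tokenID;
```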


[Screenshot: the Tracer Tokens tab in Replication Monitor]

 

So that’s the backstory, and here’s the script I came up with. For those interested, it’d be pretty easy to modify this script to use a permanent table and regularly log replication latency.

Create Procedure dbo.dba_replicationLatencyGet_sp
 
        /* Declare Parameters */
          @publicationToTest sysname        = N'yourPublicationName'
        , @replicationDelay  varchar(10)    = N'00:00:30'
        , @iterations        int            = 5
        , @iterationDelay    varchar(10)    = N'00:00:30'
        , @deleteTokens      bit            = 1
        , @deleteTempTable   bit            = 1
As
/*********************************************************************************
    Name:       dba_replicationLatencyGet_sp
 
    Author:     Michelle F. Ufford
 
    Purpose:    Retrieves the amount of replication latency in seconds
 
    Notes:      Default settings will run 1 test every minute for 5 minutes.
 
                @publicationToTest = change the default to your publication
 
                @replicationDelay = how long to wait for the token to replicate;
                    probably should not set to anything less than 10 (in seconds)
 
                @iterations = how many tokens you want to test
 
                @iterationDelay = how long to wait between sending test tokens
                    (in seconds)
 
                @deleteTokens = whether you want to retain tokens when done
 
                @deleteTempTable = whether or not to retain the temporary table
                    when done.  Data stored to ##tokenResults; set @deleteTempTable 
                    flag to 0 if you do not want to delete when done.
 
    Called by:  DBA
 
    Date        Initials    Description
    ----------------------------------------------------------------------------
    2008-11-20   MFU        Initial Release
*********************************************************************************
    Exec dbo.dba_replicationLatencyGet_sp
          @publicationToTest    = N'yourPublicationName'
        , @replicationDelay     = N'00:00:05'
        , @iterations           = 1
        , @iterationDelay       = N'00:00:05'
        , @deleteTokens         = 1
        , @deleteTempTable      = 1;
*********************************************************************************/
 
Set NoCount On;
Set XACT_Abort On;
 
Begin
 
    /* Declare Variables */
    Declare @currentIteration   int
          , @tokenID            bigint
          , @currentDateTime    smalldatetime;
 
    If Object_ID('tempdb.dbo.##tokenResults') Is Null
    Begin
        Create Table ##tokenResults
                        ( iteration           int             Null
                        , tracer_id           int             Null
                        , distributor_latency int             Null
                        , subscriber          varchar(1000)   Null
                        , subscriber_db       varchar(1000)   Null
                        , subscriber_latency  int             Null
                        , overall_latency     int             Null );
    End;
 
    /* Initialize our variables */
    Select @currentIteration = 0
         , @currentDateTime  = GetDate();
 
    While @currentIteration < @iterations
    Begin
 
        /* Insert a new tracer token in the publication database */
        Execute sys.sp_postTracerToken 
          @publication = @publicationToTest,
          @tracer_token_id = @tokenID OutPut;
 
        /* Give a few seconds to allow the record to reach the subscriber */
        WaitFor Delay @replicationDelay;
 
        /* Store our results in a temp table for retrieval later */
        Insert Into ##tokenResults
        (
            distributor_latency
          , subscriber
          , subscriber_db
          , subscriber_latency
          , overall_latency
        )
        Execute sys.sp_helpTracerTokenHistory @publicationToTest, @tokenID;
 
        /* Assign the iteration and token id to the results for easier investigation */
        Update ##tokenResults
        Set iteration = @currentIteration + 1
          , tracer_id = @tokenID
        Where iteration Is Null;
 
        /* Wait for the specified time period before creating another token */
        WaitFor Delay @iterationDelay;
 
        /* Avoid endless looping... :) */
        Set @currentIteration = @currentIteration + 1;
 
    End;
 
    Select * From ##tokenResults;
 
    If @deleteTempTable = 1
    Begin
        Drop Table ##tokenResults;
    End;
 
    If @deleteTokens = 1
    Begin
       Execute sp_deleteTracerTokenHistory @publication = @publicationToTest, @cutoff_date = @currentDateTime;
    End;
 
    Set NoCount Off;
    Return 0;
End
Go

Large Updates on Replicated Tables

Late last night, I executed a 70-million-row update on a replicated table. This was a somewhat low-priority update, so the primary goal (aside from the data change itself) was to avoid any issues on the publishing and subscribing servers, including replication latency errors. I have performed many large updates in the past, but this was the first on a replicated table.

To minimize impact, one of our system DBAs suggested the use of a replicated stored procedure. Basically, instead of replicating each of the 70 million updates as a singleton transaction, only the stored procedure call is replicated; the procedure then executes on the subscribing server, which performs the bulk update locally. This was my first time using one, and it worked beautifully.
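Setting one up is just a matter of publishing the procedure as an article with a “proc exec” type. A sketch, assuming a made-up publication and procedure name:

```sql
/* Hypothetical sketch -- publication and procedure names are
   made up.  Publishing a stored procedure with a "proc exec"
   article type replicates the procedure CALL rather than the
   individual row changes it makes on the publisher: */
EXECUTE sp_addarticle
      @publication = N'myTestPublication'
    , @article = N'myReplicatedUpdateProc'
    , @source_owner = N'dbo'
    , @source_object = N'myReplicatedUpdateProc'
    , @type = N'proc exec';
```

Note that the N'serializable proc exec' option, which only replicates executions performed inside a serializable transaction, is generally considered the safer choice.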

Another of my colleagues, Jeff M., suggested the use of a control table. Normally, I would just output the last affected ID and update the script manually. However, the control table also worked so well that it will be adopted in all future update scripts.

Using the following pseudo-script and a replicated stored procedure, I was able to execute the update on 70 million records in 11.5 hours with *zero* impact on the servers and replication. I probably could’ve increased the batch size and reduced the execution time even further, but as I mentioned, this was a low-priority update, so there was no need to push it.

This process should work equally well on non-replicated updates; merely replace the Execute statement with the actual update.
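For example, a non-replicated version of the loop body might use a plain chunked update in place of the proc call (a sketch; the table and column names are made up):

```sql
/* Hypothetical in-place alternative to the replicated proc call,
   updating one batch of rows bounded by the loop variables */
Update dbo.myLargeTable
Set myColumn = 'myNewValue'
Where record_id Between @procMinID And @procMaxID;
```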

/************ Chunked Update Script with Control Table ************/
 
/* --------------------- Preparation Script --------------------- */
 
/* Note: Execute this section in a separate query window */
 
Use yourDatabase;
Go
 
Set NoCount On;
 
/* Create a control table to facilitate tweaking of parameters */
Create Table dbo.scratch_largeUpdate_control
(
      sizeOfBatch       int
    , waitForDelay      char(8)
    , minRecordToUpdate int 
    , maxRecordToUpdate int
);
 
/* Create your control data; you only want 1 row in this table */
Insert Into dbo.scratch_largeUpdate_control 
(sizeOfBatch, waitForDelay, minRecordToUpdate, maxRecordToUpdate)
Select 10000, '00:00:05', 40297132, 107459380;
 
/* Update Script */
Update dbo.scratch_largeUpdate_control
Set sizeOfBatch  = 100000
  , waitForDelay = '00:00:30';
 
 
/* ------------------------ Update Script ------------------------ */
Use yourDatabase;
Go
 
Set NoCount On;
 
Declare 
      @batchSize        int
    , @minID            int
    , @maxID            int
    , @procMinID        int
    , @procMaxID        int
    , @delay            char(8)
    , @statusMsg        varchar(1000);
 
Begin Try
 
    IF @@SERVERNAME Not In ('PRODSERVER')
        RAISERROR('Sorry, this cannot be executed here!', 16, 1);
 
    If Not Exists(
            Select object_id 
            From sys.objects 
            Where [name] = 'scratch_largeUpdate_control' 
            And type = 'U' )
        RaisError ('ERROR: Control table does not exist!', 16, 1)
            With NoWait;
    Else
        Select 
              @minID        = minRecordToUpdate 
            , @maxID        = maxRecordToUpdate 
            , @batchSize    = sizeOfBatch
            , @delay        = waitForDelay
        From dbo.scratch_largeUpdate_control With (NoLock);
 
    Set @statusMsg = 'Beginning update; batch size of ' 
        + Cast(@batchSize As varchar(10)) + ', delay of ' 
        + @delay + ' defined.  Estimate ' 
        + Cast((((@maxID - @minID) / @batchSize) + 1) As varchar(10)) 
        + ' iterations to be performed.'
 
    RaisError (@statusMsg, 10, 1) With NoWait;
 
    While @minID < @maxID
    Begin
 
        Select @procMinID = @minID
            , @procMaxID = (@minID + (@batchSize - 1));
 
        /* Execute actual update code here 
           OR 
           Call a replicated stored procedure, i.e. */
        Execute dbo.myReplicatedUpdateProc 
              @minRecordID = @procMinID
            , @maxRecordID = @procMaxID;
 
        Set @statusMsg = 
            'Updating records ' + Cast(@minID As varchar(10)) 
            + ' through ' + Cast((@minID + (@batchSize - 1)) 
            As varchar(10)) + '...';
 
        RaisError (@statusMsg, 10, 1) With NoWait;
 
        /* Update our control table with the last successfully
           updated record ID.  In the event of an error,
           we can start from here. */
        Update dbo.scratch_largeUpdate_control 
        Set minRecordToUpdate = @minID + @batchSize;
 
        Select @minID = @minID + @batchSize; 
        WaitFor Delay @delay; -- breather for the server
 
        /* Check to see if our control values have changed */
        If Not Exists(
            Select * 
            From dbo.scratch_largeUpdate_control With (NoLock) 
            Where @batchSize = sizeOfBatch And @delay = waitForDelay)
        Begin
 
            /* There was a change, so grab our new values */
            Select @batchSize = sizeOfBatch
                 , @delay = waitForDelay
            From dbo.scratch_largeUpdate_control With (NoLock)
 
            /* Print a status message with the new values */
            Set @statusMsg = 'Parameters changed:  batch size = ' 
                + Cast(@batchSize As varchar(10)) 
                + ', delay = ' + @delay;
 
            RaisError (@statusMsg, 10, 1) With NoWait;
 
        End
    End
 
    RaisError ('Success!', 10, 1) With NoWait;
 
End Try
/* Handle your errors */
Begin Catch
 
        Set @statusMsg = 'An error has occurred and the last '
                         + 'transaction has been rolled back. '
                         + 'Last record successfully updated was '
                         + 'record_id = ' 
                         + Cast((@minID + (@batchSize - 1)) 
                            As varchar(10));
 
        RaisError (@statusMsg, 16, 1) With NoWait;
 
        /* Return the error message */
        Select Error_Number()
            , Error_Procedure()
            , DB_Name()
            , Error_Line()
            , Error_Message()
            , Error_Severity()
            , Error_State();
 
End Catch;
 
/* -------------------------- Clean-Up ----------------------------
Drop Table dbo.scratch_largeUpdate_control;
----------------------------------------------------------------- */