Indexing for Partitioned Tables

So you’ve partitioned your table, and now you’re ready to performance tune. As with any table, indexing is a great place to start. And if you’re like most people new to partitioning, you probably created all of your indexes on the partition scheme, either by design or unintentionally.

Let’s take a step back here and discuss what a partitioned index is. A partitioned index, like a partitioned table, separates data into different physical structures (partitions) under one logical name. Specifically, each partition in a nonclustered index contains its own B-tree structure with a subset of rows, based upon the partitioning scheme. By default, an unpartitioned nonclustered index has just one partition.

Keep in mind that when you create an index on a partitioned table, like this…

Create NonClustered Index IX_myIndex
    On dbo.myTable(myColumn);

… you are creating the index on the partition scheme by default. To create a NON-partitioned index on that same table, you need to explicitly declare “On [FileGroup]”, like so:

Create NonClustered Index IX_myIndex
    On dbo.myTable(myColumn)
    On [Primary];

 

But should you partition your index? That depends on how you use it. In fact, most environments will probably want a mix of partitioned and non-partitioned indexes. I’ve found that partitioned indexes perform better when aggregating data or scanning partitions. Conversely, if you need to locate a single, specific record, nothing performs better than a non-partitioned index on that column.

Let’s walk through some examples and see how they perform. I’ll bring back my trusty ol’ orders table for this.

/* Create a partition function. */
Create Partition Function 
    [test_monthlyDateRange_pf] (datetime)
    As Range Right For Values
    ('2009-01-01', '2009-01-08', '2009-01-15'
    , '2009-01-22', '2009-01-29');
Go
 
/* Associate the partition function with a partition scheme. */
Create Partition Scheme test_monthlyDateRange_ps
    As Partition test_monthlyDateRange_pf
    All To ([Primary]);
Go
 
/* Create a partitioned table. */
Create Table dbo.orders
(
      order_id  int Identity(1,1)   Not Null
    , orderDate datetime            Not Null
    , orderData smalldatetime       Not Null
 
    Constraint PK_orders Primary Key Clustered
    (
        order_id
      , orderDate
    )
) On test_monthlyDateRange_ps(orderDate);
Go
 
/* Create some records to play with. */
Set NoCount On;
 
Declare @endDate datetime = '2009-01-01';
 
While @endDate < '2009-02-01'
Begin
 
    Insert Into dbo.orders
    Select @endDate, @endDate;
 
    Set @endDate = DATEADD(minute, 1, @endDate);
 
End;
 
Set NoCount Off;
 
 
/* Let’s create an aligned, partitioned index. */
Create NonClustered Index IX_orders_aligned
    On dbo.orders(order_id)
    On test_monthlyDateRange_ps(orderDate); 
    /* you don't actually need to declare the last
       line of this unless you want to create the
       index on a different partitioning scheme.   */
 
/* Now let’s create an unpartitioned index. */
Create NonClustered Index IX_orders_unpartitioned
    On dbo.orders(order_id)
    On [Primary];

 

Now that we have both a partitioned and an unpartitioned index, let’s take a look at the sys.partitions catalog view:

/* Let's take a look at our index partitions */
Select i.name
    , i.index_id
    , p.partition_number
    , p.rows
From sys.partitions As p
Join sys.indexes As i
    On p.object_id = i.object_id 
   And p.index_id = i.index_id
Where p.object_id = object_id('orders')
Order By i.index_id, p.partition_number;

 

sys.partitions

As expected, both of our partitioned indexes, PK_orders and IX_orders_aligned, have 6 partitions, each holding a subset of the rows. With the one-row-per-minute insert loop above, that works out to 0 rows in the first partition, 10,080 rows in each of the four full weeks, and 4,320 rows in the last, partial partition. Our unpartitioned nonclustered index, IX_orders_unpartitioned, on the other hand, has just 1 partition containing all 44,640 rows.

Now that we have our environment set up, let’s run through some different queries and see the performance impact of each type of index.

/* Query 1, specific record look-up, covered */
Select order_id, orderDate
From dbo.orders With (Index(IX_orders_aligned))
Where order_id = 25000;
 
Select order_id, orderDate
From dbo.orders With (Index(IX_orders_unpartitioned))
Where order_id = 25000;

 

Query 1

The unpartitioned index performs significantly better when given a specific record to look up. Now let’s try a similar look-up, but one the nonclustered index doesn’t cover (the query also returns orderData, so SQL Server has to go back to the clustered index for it):

/* Query 2, specific record look-up, uncovered */
Select order_id, orderDate, orderData
From dbo.orders With (Index(IX_orders_aligned))
Where order_id = 30000;
 
Select order_id, orderDate, orderData
From dbo.orders With (Index(IX_orders_unpartitioned))
Where order_id = 30000;

 

Query 2

Again we see that the non-partitioned index performs better with the single-record look-up, which can have some pretty dramatic performance implications. So when *would* we want to use a partitioned index? Two cases immediately come to mind. First, partition switching can only be performed when all indexes on a table are aligned. Second, partitioned indexes perform better when manipulating large data sets. To see this in action, let’s try some simple aggregation…

/* Query 3, aggregation */
Select Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) As 'order_date'
    , Count(*)
From dbo.orders With (Index(IX_orders_aligned))
Where orderDate Between '2009-01-01' And '2009-01-07 23:59'
Group By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) 
Order By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime);
 
Select Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) As 'order_date'
    , Count(*)
From dbo.orders With (Index(IX_orders_unpartitioned))
Where orderDate Between '2009-01-01' And '2009-01-07 23:59'
Group By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime) 
Order By Cast(Round(Cast(orderdate As float), 0, 1) As smalldatetime);

 

Query 3

As you can see, partitioned indexes perform better when aggregating data. This is just a simple example, but the results can be even more dramatic in a large production environment. This is one of the reasons why partitioned tables and indexes are especially beneficial in data warehouses.
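As an aside, the Cast/Round gymnastics above are just a way to strip the time portion so we can group by day; if you’re on SQL 2008, the new date data type makes the same grouping a bit cleaner. A quick sketch:

Select Cast(orderDate As date) As order_date
    , Count(*) As order_count
From dbo.orders
Where orderDate >= '2009-01-01' And orderDate < '2009-01-08'
Group By Cast(orderDate As date)
Order By order_date;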

So now you have a general idea of what a partitioned index is and when to use a partitioned index vs a non-partitioned index. Ultimately, your indexing needs will depend largely on the application and how the data is used. When in doubt, test, test, test! So to recap…

  • Specify “On [FileGroup]” to create an unpartitioned index on a partitioned table
  • Consider using non-partitioned indexes for single-record look-ups
  • Use partitioned indexes for multiple records and data aggregations
  • To enable partition switching, all indexes on the table must be aligned (see the sketch below)
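Since partition switching came up a couple of times, here’s a quick, hedged sketch of what it looks like against the orders table above. The staging table, dbo.orders_staging, is hypothetical; it needs to mirror dbo.orders (columns, clustered primary key, corresponding indexes) and sit on the same filegroup as the partition being switched out. Also note that the switch will be refused while the non-aligned IX_orders_unpartitioned index still exists on dbo.orders.

/* Hypothetical staging table that mirrors dbo.orders */
Create Table dbo.orders_staging
(
      order_id  int Identity(1,1)   Not Null
    , orderDate datetime            Not Null
    , orderData smalldatetime       Not Null

    Constraint PK_orders_staging Primary Key Clustered
    (
        order_id
      , orderDate
    )
) On [Primary];
Go

/* Mirror the aligned nonclustered index, too */
Create NonClustered Index IX_orders_staging
    On dbo.orders_staging(order_id)
    On [Primary];
Go

/* Metadata-only move of the second partition (the week of 2009-01-01)
   into the staging table; this only succeeds once every index on
   dbo.orders is aligned with the partition scheme. */
Alter Table dbo.orders
    Switch Partition 2 To dbo.orders_staging;
Go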

For more information on partitioning, check out my other partitioning articles:

Partitioning Example
Partitioning 101
Tips for Large Data Stores

Error Handling in T-SQL

Error handling is one of those things in SQL Server that just doesn’t get the attention it deserves. Even a properly constructed stored procedure can still result in errors, such as primary key or unique constraint violations.

Why should you care? Consider this real-world example:

You’re a DBA monitoring a well-performing environment. You deploy a new application to production. Suddenly, performance degrades but you do not know why. You look in your error log and see a whole mess of primary key errors. Digging into your newly deployed application, you find that you are now making an extra (and unnecessary) insert to the database, which is resulting in error and causing your performance issues.

This is just one example of many. Fortunately, SQL 2005 has really simplified the error handling process with features such as the Try/Catch block.

The basic components of error handling are (a bare-bones sketch of how they fit together follows this list):

  • Try…Catch block (2005/2008)
  • Error identification
  • Transaction handling
  • Error logging (optional)
  • Error notification
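Before we get to the full process, here’s the bare-bones shape of how the first three components fit together (just a minimal sketch; the logging and notification pieces get layered on top of it):

Begin Try

    Begin Transaction;

    /* do your actual work here */

    Commit Transaction;

End Try
Begin Catch

    /* transaction handling */
    If @@TranCount > 0
        Rollback Transaction;

    /* error identification */
    Select Error_Number()    As errorNumber
        , Error_Severity()   As errorSeverity
        , Error_Line()       As errorLine
        , Error_Message()    As errorMessage;

End Catch;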

As an early holiday gift, here’s a generic error handling process to get you started:

If ObjectProperty(Object_ID('dbo.dba_logError_sp'), N'IsProcedure') = 1
Begin
    Drop Procedure dbo.dba_logError_sp;
    Print 'Procedure dba_logError_sp dropped';
End;
Go
 
If ObjectProperty(Object_ID('dbo.dba_errorLog'), N'IsTable') Is Null
Begin
 
    Create Table dbo.dba_errorLog
    (         errorLog_id       int identity(1,1) 
            , errorType         char(3)     
                Constraint [DF_errorLog_errorType] Default 'sys' 
            , errorDate         datetime	
                Constraint [DF_errorLog_errorDate] Default(GetDate())
            , errorLine         int
            , errorMessage      nvarchar(4000)
            , errorNumber       int
            , errorProcedure    nvarchar(126)
            , procParameters    nvarchar(4000)
            , errorSeverity     int
            , errorState        int
            , databaseName      nvarchar(255)
        Constraint PK_errorLog_errorLogID Primary Key Clustered
        (
            errorLog_id	
        )
    );
 
    Print 'Table dba_errorLog created';
 
End;
Go
 
 
Set ANSI_Nulls On;
Set Ansi_Padding On;
Set Ansi_Warnings On;
Set ArithAbort On;
Set Concat_Null_Yields_Null On;
Set NoCount On;
Set Numeric_RoundAbort Off;
Set Quoted_Identifier On;
Go
 
Create Procedure dbo.dba_logError_sp
(
    /* Declare Parameters */
      @errorType            char(3)         = 'sys'
    , @app_errorProcedure   varchar(50)     = ''
    , @app_errorMessage     nvarchar(4000)  = ''
    , @procParameters       nvarchar(4000)  = ''
    , @userFriendly         bit             = 0
    , @forceExit            bit             = 1
    , @returnError          bit             = 1
)
As
/***************************************************************
    Name:       dba_logError_sp
 
    Author:     Michelle F. Ufford, http://sqlfool.com
 
    Purpose:    Retrieves error information and logs in the 
                        dba_errorLog table.
 
        @errorType = options are "app" or "sys"; "app" are custom 
                application errors, i.e. business logic errors;
                "sys" are system errors, i.e. PK errors
 
        @app_errorProcedure = stored procedure name, 
                needed for app errors
 
        @app_errorMessage = custom app error message
 
        @procParameters = optional; log the parameters that were passed
                to the proc that resulted in an error
 
        @userFriendly = displays a generic error message if = 1
 
        @forceExit = forces the proc to rollback and exit; 
                mostly useful for application errors.
 
        @returnError = returns the error to the calling app if = 1
 
    Called by:	Another stored procedure
 
    Date        Initials    Description
	----------------------------------------------------------------------------
    2008-12-16  MFU         Initial Release
****************************************************************
    Exec dbo.dba_logError_sp
        @errorType          = 'app'
      , @app_errorProcedure = 'someTableInsertProcName'
      , @app_errorMessage   = 'Some app-specific error message'
      , @userFriendly       = 1
      , @forceExit          = 1
      , @returnError        = 1;
****************************************************************/
 
Set NoCount On;
Set XACT_Abort On;
 
Begin
 
    /* Declare Variables */
    Declare	@errorNumber            int
            , @errorProcedure       varchar(50)
            , @dbName               sysname
            , @errorLine            int
            , @errorMessage         nvarchar(4000)
            , @errorSeverity        int
            , @errorState           int
            , @errorReturnMessage   nvarchar(4000)
            , @errorReturnSeverity  int
            , @currentDateTime      smalldatetime;
 
    Declare @errorReturnID Table (errorID varchar(10));
 
    /* Initialize Variables */
    Select @currentDateTime = GetDate();
 
    /* Capture our error details */
    If @errorType = 'sys' 
    Begin
 
        /* Get our system error details and hold it */
        Select 
              @errorNumber      = Error_Number()
            , @errorProcedure   = Error_Procedure()
            , @dbName           = DB_Name()
            , @errorLine        = Error_Line()
            , @errorMessage     = Error_Message()
            , @errorSeverity    = Error_Severity()
            , @errorState       = Error_State() ;
 
    End
    Else
    Begin
 
    	/* Get our custom app error details and hold it */
        Select 
              @errorNumber      = 0
            , @errorProcedure   = @app_errorProcedure
            , @dbName           = DB_Name()
            , @errorLine        = 0
            , @errorMessage     = @app_errorMessage
            , @errorSeverity    = 0
            , @errorState       = 0 ;
 
    End;
 
    /* And keep a copy for our logs */
    Insert Into dbo.dba_errorLog
    (
          errorType
        , errorDate
        , errorLine
        , errorMessage
        , errorNumber
        , errorProcedure
        , procParameters
        , errorSeverity
        , errorState
        , databaseName
	)
    Output Inserted.errorLog_id Into @errorReturnID
    Values
    (
          @errorType
        , @currentDateTime
        , @errorLine
        , @errorMessage
        , @errorNumber
        , @errorProcedure
        , @procParameters
        , @errorSeverity
        , @errorState
        , @dbName
    );
 
    /* Should we display a user friendly message to the application? */
    If @userFriendly = 1
        Select @errorReturnMessage = 'An error has occurred in the database (' + errorID + ')'
        From @errorReturnID;
    Else 
        Select @errorReturnMessage = @errorMessage;
 
    /* Do we want to force the application to exit? */
    If @forceExit = 1
        Select @errorReturnSeverity = 15
    Else
        Select @errorReturnSeverity = @errorSeverity;
 
    /* Should we return an error message to the calling proc? */
    If @returnError = 1
        Raiserror 
        (
              @errorReturnMessage
            , @errorReturnSeverity
            , 1
        ) With NoWait;
 
    Set NoCount Off;
    Return 0;
 
End
Go

 

You would then call this proc in the following manner:

Begin Try
 
    /* If a business logic error exists, then call this proc */
    If 1 != 1
        Execute dbo.dba_logError_sp 
              @errorType            = 'app'
            , @app_errorProcedure   = 'yourStoredProcedureName'
            , @app_errorMessage     = '1 does not equal 1!'
            , @forceExit            = 1;
 
    /* Start a new transaction */
    Begin Transaction;  
 
    /* Do something */
 
    /* If you have an open transaction, commit it */
    If @@TranCount > 0
        Commit Transaction;
 
End Try
Begin Catch
 
    /* Whoops, there was an error... rollback! */
    If @@TranCount > 0
        Rollback Transaction;
 
    /* Grab our proc parameters; this assumes @errorParameters and
       @myVariable were declared earlier in your procedure */
    Set @errorParameters = '@myVariable = ' + @myVariable;
 
    /* Return an error message and log it */
    Execute dbo.dba_logError_sp
        @procParameters = @errorParameters;
 
End Catch;

 

Some things to keep in mind:

  • Error handling is not a “one-size-fits-all” process. Make sure you’re handling the error appropriately for your environment.
  • Be careful when working with nested transactions; you can sometimes get unexpected results.
  • Only errors with a severity level greater than 10 will be caught by the Catch block.
  • You can raise your own errors within a stored procedure by using Raiserror() (see the sketch below).
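For example, here’s a minimal sketch of raising and catching your own error (the message text is just a placeholder):

Begin Try
    /* severity 11-16 raises a non-fatal error that the Catch block will trap */
    Raiserror('Order validation failed; rolling back.', 16, 1);
End Try
Begin Catch
    Print Error_Message();  /* prints the message we just raised */
End Catch;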

Happy coding holidays! :)

SQL Quiz, Part 2: Toughest Challenges

Chris Shaw posted a new SQL Quiz yesterday, asking “What are the largest challenges that you have faced in your career and how did you overcome those?” Brent Ozar tagged me when he responded to Chris’ challenge.

Can I super-size my SQL database?

When I left my position at a small logistics company in Indianapolis, I went from a couple of relatively small databases to a pretty large environment. I’m talking 10mm row tables to billion row tables, with comparable increases in transactional volumes. I quickly found that developing in a small environment can be very different from developing in a large one.

To improve my skills, I hit the internet. I discovered the value of community resources like sqlservercentral.com and the joys of reading technical blogs. I also became intimately acquainted with Books Online. Perhaps most helpful of all, I discovered how knowledgeable and talented some of my new colleagues were; if I didn’t know the answer, I sought out their advice.

It’s like a promotion but with longer hours and no pay increase.

A few years back, I was given responsibility for a pricey software deployment. I knew I was not qualified to lead the project, and even mentioned it to my boss, but there wasn’t really anyone else to do it. There was quite a bit of work to be done, and not a lot of time to do it. In addition to leading the project, learning the new software, and designing custom modules, I also needed to learn about SAP integration. And it didn’t help that all of the column names were in German!

In the past, I had always been a self-reliant, one-(wo)man army, but I knew I needed help. I asked for a team and was allowed to hand-pick individuals from different groups to form a deployment team. There is no doubt in my mind that I could not have completed the project without this team. They were knowledgeable, resourceful, hard working, and fun to be around. The last part might not sound like a big deal, but when you start working 80-100 hours a week, you really don’t want to be around someone obnoxious.

The project certainly had its ups and downs and its rough spots. Some were the result of inexperience, and some of the issues were completely outside of our control. Nonetheless, the project was a success, and I certainly learned a lot.

Tag! You’re it.

I’m tagging…

SQL Fool Happenings

It’s not plagiarism if I copy myself
I’m now a content contributor for the SQLServerPedia wiki!  So you may now begin to see some of my content on the wiki as well. If you’ve not heard of SQLServerPedia, you should check it out and consider contributing.

It’s like a party, but for geeks
I’m trying to start a PASS Chapter for the Eastern Iowa region. So if you’re in Cedar Rapids, Iowa City, or Waterloo, and are interested in attending or assisting with PASS, e-mail me at michelle @ sqlfool.com for more information.

I didn’t realize people still used IE
Thanks to Peter for informing me of the image distortion issue. Apparently Internet Explorer doesn’t handle image resizing as well as FireFox. The only solution I’ve found is to make 2 versions of each image: a smaller thumbnail and a larger “pop-up” image. I’ll work on updating past posts this week.

The more people that follow me, the better I feel about myself
Jason Massie rocked my world with his post, SQL Twitter n00bs #SQLPASS. I’ve now been using Twitter for a couple of weeks and have found a lot of benefit in it. It helps that I try to only follow SQL or other techy, non-spamming people. If you’re new to Twitter, check out Brent Ozar’s Twitter 101, which really helped shed some light on some of Twitter’s idiosyncrasies. Oh, and if you’d like to follow me on Twitter, you can find me at http://twitter.com/sqlfool.

SQL Tweaks and Tools That Make My Life Easier

It still surprises me how many people don’t know about some of the very things that make my job so much easier. So this next post is dedicated to sharing some of the tweaks and tools I’ve run across that will help anyone who works with SQL:

 

Indexes
Anyone who uses included columns is probably well aware of the frustration of having to look up which columns an index actually includes. I wrote a stored procedure, dba_indexLookup_sp, to help me with this, before discovering sp_helpindex2. If you haven’t heard of sp_helpindex2, it’s a re-write of sp_helpindex by Kimberly Tripp; you can find it on Kimberly’s blog. The main differences are that Kimberly’s is a system stored procedure (mine is not) and my version returns partitioning information (Kimberly’s does not). Check both out and use whichever one meets your needs best.
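If you just want a quick, ad hoc look without installing either procedure, a query along these lines will list the key and included columns for a table (dbo.myTable is just a placeholder):

Select i.name               As index_name
    , c.name                As column_name
    , ic.key_ordinal
    , ic.is_included_column
From sys.indexes As i
Join sys.index_columns As ic
    On i.object_id = ic.object_id
   And i.index_id  = ic.index_id
Join sys.columns As c
    On ic.object_id = c.object_id
   And ic.column_id = c.column_id
Where i.object_id = Object_Id('dbo.myTable')
Order By i.name, ic.is_included_column, ic.key_ordinal;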

 

Keyboard Shortcuts

In SQL Server Management Studio (SSMS), click on:
    Tools –> Options… –> Environment –> Keyboard

Keyboard Shortcuts

For your copying convenience:

Ctrl+3   Select Top 100 * From
Ctrl+4   sp_tables @table_owner = 'dbo'
Ctrl+5   sp_columns
Ctrl+6   sp_stored_procedures @sp_owner = 'dbo'
Ctrl+7   sp_spaceused
Ctrl+8   sp_helptext
Ctrl+9   dba_indexLookup_sp or sp_helpindex2

Please note that these settings will not take effect until you open a new query window. Here’s an example of how you could use this: use Ctrl+4 to find a list of tables, then copy one into your query window; to view a sample of that table’s data, highlight the table name (I usually double-click on it) and press Ctrl+3. It’s a thing of beauty. Oh, and you may want to remove/change the schema filters if you use schemas other than dbo.

 

Query Execution Settings

After having one too many issues arise from non-DBAs connecting to the production environment to run a devastating ad hoc query, I’ve had all of our developers and analysts adopt the following settings. The only difference between my settings and theirs is that I have “Set Statistics IO” selected. FYI – you can also make these same setting changes in Visual Studio.

In SQL Server Management Studio (SSMS), click on:
    Tools –> Options… –> Query Execution –> SQL Server –> Advanced

Query Execution Settings

 

Copy Behavior
This next tip actually has nothing to do with SQL Server, and can be done with any Microsoft product. However, I just learned about it a few weeks ago and already I use it quite frequently.

Holding down “Alt” while you drag your mouse will change your selection behavior to block selection.

Block Selection

 

Please note: The following tools require SQL 2008 Management Studio. They will also work when you connect SQL 2008 SSMS to a 2005 instance.

 

Object Detail Explorer

Finally, there’s a reason to use the Object Detail Explorer! My favorite use is to quickly find the table sizes and row counts of all the tables in a database. If these columns are not currently displayed, you may just need to right-click on the column headers and add them to the display.

Object Detail Explorer
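If you’d rather pull those row counts with T-SQL instead of the UI, here’s a quick sketch against the catalog views:

Select o.name          As table_name
    , Sum(p.rows)      As row_count
From sys.objects As o
Join sys.partitions As p
    On o.object_id = p.object_id
Where o.type = 'U'            /* user tables only */
    And p.index_id In (0, 1)  /* heap or clustered index */
Group By o.name
Order By row_count Desc;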

 

Missing Indexes

And lastly, when using SSMS 2008 to execute Display Estimated Query Plan (Ctrl+L), it will show you if you’re missing any indexes. This will even work if you connect SSMS 2008 to SQL 2005!

Missing Index
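The same missing-index suggestions are also exposed through DMVs, so you can query them directly if you prefer; here’s a sketch (treat the results as hints to investigate, not as indexes to blindly create):

Select mid.statement        As table_name
    , mid.equality_columns
    , mid.inequality_columns
    , mid.included_columns
    , migs.user_seeks
    , migs.avg_user_impact
From sys.dm_db_missing_index_details As mid
Join sys.dm_db_missing_index_groups As mig
    On mid.index_handle = mig.index_handle
Join sys.dm_db_missing_index_group_stats As migs
    On mig.index_group_handle = migs.group_handle
Order By migs.avg_user_impact Desc;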

That pretty much covers it for now. HTH! :)

Michelle

BCP Basics

In this blog post, I’m going to walk through the basics of BCP (bulk copy program). BCP is a utility that installs with SQL Server and can assist with large data transfers.

Let’s see what parameter options are available to use. From the command line on a machine with SQL Server installed, type “bcp” and press Enter.

BCP Parameters

You can find out more information on BCP parameters on Books Online: http://msdn.microsoft.com/en-us/library/ms162802.aspx

For now, we’re going to examine just the basics. The simplest syntax of a BCP command is:

bcp
databaseName.Schema.TableName *or* “Query”
in, out, *or* queryout
-S ServerName\instanceName
-U userName -P password *or* -T
-c *or* -n *or* specify storage information for each column

Let’s look at these options in a little more detail:

databaseName.Schema.TableName *or* Query
You can specify either an entire table to copy or a query. The query should be surrounded by quotation marks and must include the fully qualified table name.

in, out, *or* queryout
in = import, out = full table export, queryout = query to select data for export

-U userName -P password *or* -T
You can either specify a specific account to access SQL Server, or use -T to indicate Trusted Connection (i.e. Windows Authentication)

-c *or* -n *or* specify storage information for each column
-c indicates character data type, -n indicates native data type; if neither one is specified, by default you will be prompted for the data type for each column.
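If you’d rather not answer those prompts every time, bcp can generate a format file up front (using the “format” keyword with a nul data file), which you can then reuse with the -f parameter. A hedged example, with placeholder file paths:

bcp AdventureWorks.Sales.SalesOrderDetail format nul
-f C:\SalesOrderDetail.fmt -c -SYourServerName -T

On later imports and exports, specify “-f C:\SalesOrderDetail.fmt” in place of -c and bcp will take the column definitions from the file.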

Now let’s put this together and run some BCP commands. All of these examples will use the AdventureWorks 2008 sample database.

First, let’s export an entire table. To do this, we’ll use the “out” parameter.

bcp AdventureWorks.Sales.SalesOrderDetail out
C:\bcp_outputTable.txt -SYourServerName -T -c

 

Export Table with BCP

Export Table with BCP - Results

I don’t normally export an entire table… or at least, not in one process. So let’s walk through what it would look like to export the same table using a query. This will use the “queryout” parameter.

 

bcp "Select SalesOrderID, SalesOrderDetailID, OrderQty, ProductID
From AdventureWorks.Sales.SalesOrderDetail" queryout
C:\bcp_outputQuery.txt -SYourServerName -T -c

 

Export Query with BCP

Export Query with BCP - Results

You’ll notice that the total duration for the query was shorter than for the full-table export. This is because we’re only exporting a few of the columns. This is important to keep in mind when bcp’ing data: you’ll get better performance if you only export the data elements that you actually need.

Now that we’ve exported some data, let’s walk through the process of importing this data. First, let’s create a table with a constraint that will result in some errors.

Create Table dbo.testBCPLoad
(
      SalesOrderID          int      Not Null
    , SalesOrderDetailID    int      Not Null
    , OrderQty              smallint Null
    , ProductID             int      Null
 
    Constraint PK_testBCPLoad
        Primary Key Clustered
        (SalesOrderID)
);

Now execute the BCP import command:

bcp sandbox.dbo.testBCPLoad in
C:\bcp_outputQuery.txt -SYourServername -T -c

 

Load Data with BCP

Load Data with BCP - Error

You should receive a Primary Key error. When you check your results in SQL Server, you should find no results loaded into the table. This is BCP’s default behavior.

Check Destination Table
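By default, BCP loads the entire file as a single batch, so one bad row (like our primary key violation) means nothing gets committed. If that’s not the behavior you want, parameters such as -b (batch size), -m (maximum errors), and -e (error file) can soften it: smaller batches are committed as they complete, and -m/-e let BCP tolerate and log a limited number of bad rows (mostly data conversion errors) instead of quitting outright. A hedged example, with placeholder paths:

bcp sandbox.dbo.testBCPLoad in
C:\bcp_outputQuery.txt -SYourServerName -T -c
-b 10000 -m 50 -e C:\bcp_loadErrors.txt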

Let’s change our constraint and try the same BCP command again:

Alter Table dbo.testBCPLoad
    Drop Constraint PK_testBCPLoad;
 
Alter Table dbo.testBCPLoad
    Add Constraint PK_testBCPLoad
    Primary Key Clustered
        (SalesOrderID, SalesOrderDetailID);

 

bcp sandbox.dbo.testBCPLoad in
C:\bcp_outputQuery.txt -SYourServername -T -c

You should now have the data loaded into your SQL Server destination table:

Import Data with BCP - Results

Destination Table

So there you have it, the basics of BCP! :)

A few BCP tips:

  • BCP commands are case-sensitive!
  • If you’re accessing the data across a WAN, perhaps via a VPN connection, try to remote desktop (mstsc) to the actual SQL Server to perform the BCP. If possible, keep the operation on the same local drive or even local network as the server; the less distance data needs to travel across a network, the faster BCP will perform.
  • If you need to copy large amounts of data (i.e. >100mm rows), try breaking the data into smaller chunks. This will help if you have an error during BCP (i.e. a PK error can roll back the entire import operation by default, although there are options that can change this behavior). When working with partitioned tables, I find it very efficient to segregate the data imported/exported by partition.
  • If you’re BCP’ing data into a new table, you can minimize impact on the server by waiting to create your indexes after all the data is loaded.
  • I like to construct my queries in SSMS, then copy them to BCP. Since the command-line utility does not support copy and pasting, I create a text file with my BCP command in NotePad, then save the command as a .cmd. To execute, just call the .cmd file.