This time let’s talk about an interesting customer scenario where table partitioning was implemented on a table with 100 GB of data on SQL Server 2014. In this context, partitioning aimed to save disk space (archived data was compressed) and to reduce maintenance time and resource consumption (by running index and statistics maintenance operations on the active partition only). Finally, it would help improve the performance of the queries against the concerned table, which mainly focus on recent customer orders. There are also a few reporting queries designed to fetch older data for analysis. So the partition strategy was pretty simple: my customer decided to archive and compress data older than the year 2015 and leave recent records (since 2015) in the active partition. We implemented partitioning on only two filegroups, ARCHIVE and PRIMARY, because this implementation was supposed to be temporary and the estimated data growth was small and steady. This first implementation was successful in terms of performance and manageability.

But of course this world is not perfect: the data volume for the last year turned out to be higher than expected, and my customer also wanted to archive last year’s data in order to guarantee the application performance. In some ways, this is good news because we may guess that the business activity was better than expected, but this is purely an assumption on my part 🙂

So let’s go back to my customer’s need. In order to meet this new requirement, we have to deal with the existing partition strategy, and this is not as simple as we might imagine, because changing the existing partition boundaries can be a resource-intensive and locking operation. In addition, my customer wants this operation to be as fast as possible to avoid blocking his business activity for too long. Let me thank him here, because I know that most of their requests are a challenge 🙂

Let’s demonstrate with a similar scenario that I implemented in my lab environment. It includes a bigTransactionHistory_stg table in the AdventureWorks2012 database with the following specifications:

exec sp_spaceused 'dbo.bigtransactionhistory_stg'

[Figure: bigTransactionHistory_stg specifications (sp_spaceused output)]

exec sp_helpindex 'dbo.bigtransactionhistory_stg'

[Figure: bigTransactionHistory_stg indexes (sp_helpindex output)]

My scenario does not involve a very big table (roughly 1% of my customer’s data volume), but it is sufficient to point out the problems of changing an existing partition configuration.
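For reference, here is a minimal sketch of how the initial partition function and scheme might have been created. The names (BigTransactionRange, TransactionsPS1), the boundary value, the RIGHT range and the two filegroups all come from the scripts later in this post; the exact original options are an assumption:

-- Assumed initial setup: a RIGHT range function with a single boundary
-- at 2015-01-01, mapping older data to ARCHIVE and recent data to PRIMARY
CREATE PARTITION FUNCTION BigTransactionRange (datetime)
AS RANGE RIGHT FOR VALUES (N'2015-01-01T00:00:00.000');
GO

CREATE PARTITION SCHEME TransactionsPS1
AS PARTITION BigTransactionRange
TO ([ARCHIVE], [PRIMARY]);
GO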

So let’s begin by setting the scene: my table has a partitioned clustered index with the following information:

SELECT
       p.object_id,
       o.name AS table_name,
       p.partition_number,
       p.rows,
       au.total_pages,
       au.total_pages / 128 AS total_size_mb,
       au.type_desc,
       p.data_compression_desc,
       g.name AS [filegroup_name],
       RVL.value AS left_range_boundary,
       RVR.value AS right_range_boundary
       --PF.[name], RV.boundary_id, RV.[value]
FROM sys.partitions AS p (nolock)
       LEFT JOIN sys.objects AS o (nolock)
             ON o.object_id = p.object_id
       LEFT JOIN sys.indexes AS i (nolock)
             ON p.object_id = i.object_id
                    AND p.index_id = i.index_id
       LEFT JOIN sys.allocation_units AS au (nolock)
             ON p.hobt_id = au.container_id
       LEFT JOIN sys.filegroups AS g (nolock)
             ON g.data_space_id = au.data_space_id
       LEFT JOIN sys.partition_schemes AS PS (nolock)
             ON ps.data_space_id = i.data_space_id
       LEFT JOIN sys.partition_functions AS PF (nolock)
             ON PF.function_id = ps.function_id
       LEFT JOIN sys.partition_range_values AS RVL (nolock)
             ON RVL.function_id = PF.function_id
                    AND RVL.boundary_id + 1 = p.partition_number
       LEFT JOIN sys.partition_range_values AS RVR (nolock)
             ON RVR.function_id = PF.function_id
                    AND RVR.boundary_id = p.partition_number
WHERE p.object_id = object_id('bigtransactionhistory_stg')
       AND p.index_id = 1
ORDER BY table_name, partition_number
GO

[Figure: initial partition layout of bigTransactionHistory_stg]

As expected, you can see a simple partitioning layout with only two partitions based on a RIGHT range boundary:

  • ARCHIVE partition (ARCHIVE filegroup) includes records older than 01/01/2015. This partition is page compressed.
  • Current partition (PRIMARY filegroup) contains recent records from the year 2015.
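
As a quick sanity check, the $PARTITION function tells you which partition a given date maps to:

-- Partition number for a sample archived date and a sample recent date
SELECT $PARTITION.BigTransactionRange('20140615') AS archive_partition, -- returns 1
       $PARTITION.BigTransactionRange('20150615') AS current_partition; -- returns 2
GO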

Now let’s try to reorganize the current partition strategy correctly in order to meet my customer’s scenario. The high-level expected outcome is as follows:

[Figure: expected partitioning result]

We want to dedicate one partition per year from 2015 onwards and anticipate data growth. So we have to add three partitions: FG2015 and FG2016 for existing data, and FG2017, which will remain empty for now, so that future splits always operate on an empty partition and thus avoid any data movement (according to the Microsoft documentation here).
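A split only stays a metadata operation if the partition being split is empty, which is easy to verify beforehand:

-- Verify the row count per partition before issuing a SPLIT
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.bigTransactionHistory_stg')
       AND p.index_id = 1
ORDER BY p.partition_number;
GO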

Here is the picture of the starting scenario:

[Figure: starting layout (ARCHIVE and PRIMARY partitions)]

At this point, I can see two strategies. Let’s begin with the first one, which is probably the simplest (and most intuitive).

First, in order to reorganize the current partition configuration correctly, we have no choice but to exclude the PRIMARY filegroup from the partition strategy, so the game consists in merging the data from the PRIMARY filegroup into the ARCHIVE partition.

[Figure: strategy 1, step 1: merge the PRIMARY partition into ARCHIVE]

Then we have to add the two filegroups FG2015 and FG2016 consecutively and move the related data from the ARCHIVE partition to their respective partitions.

[Figure: strategy 1, step 2: split out the FG2015 and FG2016 partitions]

And finally add the last empty partition FG2017.

[Figure: strategy 1, step 3: add the empty FG2017 partition]

The main drawback of this strategy is that it incurs data movement, which can be a time- and resource-consuming operation. In this configuration, we have to split partitions that already contain data. You can refer to the interesting article by David Peter Hansen here about data movement and the split operation. It is the same story with the first MERGE operation because, as stated in the Microsoft documentation here, merging a partition means dropping the partition with the concerned boundary point and moving its data to the neighboring one.

Here is the T-SQL script to simulate this first strategy:

-- STEP MERGE
USE AdventureWorks2012;
GO

ALTER PARTITION FUNCTION BigTransactionRange()
MERGE RANGE(N'2015-01-01T00:00:00.000');
GO


-- STEP SPLIT (2015)
ALTER DATABASE [AdventureWorks2012] 
ADD FILEGROUP [FG2015]
GO

ALTER DATABASE [AdventureWorks2012] 
ADD FILE ( NAME = N'FG2015', FILENAME = N'E:\SQLSERVER\INST14\DATA\AdventureWorks2012_2015.ndf' , SIZE = 1024KB , FILEGROWTH = 1024KB ) 
TO FILEGROUP [FG2015] 
GO

ALTER PARTITION SCHEME TransactionsPS1 NEXT USED [FG2015];
GO

ALTER PARTITION FUNCTION BigTransactionRange()
SPLIT RANGE(N'2015-01-01T00:00:00.000');
GO


-- STEP SPLIT (2016)
ALTER DATABASE [AdventureWorks2012] 
ADD FILEGROUP [FG2016]
GO
ALTER DATABASE [AdventureWorks2012] 
ADD FILE ( NAME = N'FG2016', FILENAME = N'E:\SQLSERVER\INST14\DATA\AdventureWorks2012_2016.ndf' , SIZE = 1024KB , FILEGROWTH = 1024KB ) 
TO FILEGROUP [FG2016] 
GO

ALTER PARTITION SCHEME TransactionsPS1 NEXT USED [FG2016];
GO

ALTER PARTITION FUNCTION BigTransactionRange()
SPLIT RANGE(N'2016-01-01T00:00:00.000');
GO

-- STEP SPLIT (2017)
ALTER DATABASE [AdventureWorks2012] 
ADD FILEGROUP [FG2017]
GO
ALTER DATABASE [AdventureWorks2012] 
ADD FILE ( NAME = N'FG2017', FILENAME = N'E:\SQLSERVER\INST14\DATA\AdventureWorks2012_2017.ndf' , SIZE = 1024KB , FILEGROWTH = 1024KB ) 
TO FILEGROUP [FG2017] 
GO

ALTER PARTITION SCHEME TransactionsPS1 NEXT USED [FG2017];
GO

ALTER PARTITION FUNCTION BigTransactionRange()
SPLIT RANGE(N'2017-01-01T00:00:00.000');
GO

 

The following table describes each step of this strategy, its duration, and the number of transaction log records related to data movement in this case (LOP_INSERT_ROWS and LOP_DELETE_ROWS). Data movement is fully logged in the transaction log. There are other related entries in the transaction log, but they are less relevant in this context, since we want to focus only on the data movement. Note, however, that in the case of the MERGE operation, additional interesting entry types such as LOP_COMPRESSION_INFO exist in the transaction log because the ARCHIVE partition is page-compressed.

Step          Duration   LOP_INSERT_ROWS   LOP_DELETE_ROWS
MERGE         00:01:49   8328657
SPLIT (2015)  00:02:44   8361367           8361248
SPLIT (2016)  00:01:02   3213023           3212971
SPLIT (2017)  00:00:00
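
For reference, here is a minimal sketch of how such counts can be collected, using the undocumented sys.fn_dblog function (run it right after each step, and only in a test environment):

-- Count the fully-logged data movement records in the active transaction log
SELECT [Operation], COUNT(*) AS log_record_count
FROM sys.fn_dblog(NULL, NULL)
WHERE [Operation] IN (N'LOP_INSERT_ROWS', N'LOP_DELETE_ROWS')
GROUP BY [Operation];
GO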

 

An extra step is required in our case: because all the data came from the ARCHIVE partition, it is page-compressed by default. This is the expected behaviour for the FG2015 partition, but we have to remove compression from the current FG2016 partition. In my case, the rebuild operation took 13 seconds.

ALTER INDEX [idx_bigtransactionhistory_transactiondate]
ON [dbo].[bigTransactionHistory_stg]
REBUILD PARTITION = 3 WITH (DATA_COMPRESSION = NONE);
GO
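
You can verify the compression state of each partition directly in sys.partitions:

-- Check the compression setting per partition of the clustered index
SELECT partition_number, data_compression_desc
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.bigTransactionHistory_stg')
       AND index_id = 1
ORDER BY partition_number;
GO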

So we finally reach the expected outcome, as shown below:

[Figure: strategy 1 outcome]

At first glance, the above strategy is probably not suitable for my customer because, in his case, the duration of and resources consumed by each operation would be of a different order of magnitude, and one of his requirements is that the operation should be as fast as possible. After all, I only tested with 1% of the real customer data.

[Figure: the “jeu du taquin” (sliding puzzle)]

This is why we have to find a different strategy to address this data movement issue. To do that, we have to think differently about how to move data from one partition to another, and this is where what I call the “jeu du taquin” (sliding puzzle) strategy comes into play, based on the partition SWITCH feature.

 

 

Let’s apply the same principle to our context and introduce an additional piece into the game: a table named bigTransactionHistory_stg_temp, created on the PRIMARY filegroup, that will temporarily store the recent data (>= 2015) from the bigTransactionHistory_stg partitioned table. To achieve this we can rely on the SWITCH command, which is a metadata-only operation, meaning no data movement in this case. Note that SWITCH requires the source partition and the target table to reside on the same filegroup, which is why the temporary table is created on PRIMARY.

[Figure: strategy 2, step 1: switch the PRIMARY partition data to bigTransactionHistory_stg_temp]

And this is the secret sauce! Indeed, we are now free to work on our partitioned table and prepare the new right range boundaries without suffering any data movement, because the PRIMARY partition is empty!

[Figure: strategy 2, step 2: merge away the now-empty PRIMARY partition]

[Figure: strategy 2, step 3: split out the new empty partitions]

At this point no data movement has occurred, but the game becomes more complicated here because we have to redistribute the data from the bigTransactionHistory_stg_temp table into the new FG2015 and FG2016 partitions. As a reminder, my temporary table is not partitioned, so we can no longer benefit from partitioning features, but we can do other interesting things. Here is my proposal: we create two staging tables, on the FG2015 and FG2016 filegroups respectively, and then load data from the temporary table into the two staging tables in parallel. Finally, we use the SWITCH feature to move the data quickly from each staging table into the bigTransactionHistory_stg partitioned table.

[Figure: strategy 2, step 4: bulk load the staging tables]

Before switching the staging tables into the final partitioned table, we must first implement check constraints on them in order to guarantee to the engine that they don’t contain any value outside the boundaries of their target partitions.
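
You can check that these constraints are trusted (and therefore usable for the SWITCH validation) via sys.check_constraints:

-- A constraint created WITH CHECK (the default) should show is_not_trusted = 0
SELECT name, is_not_trusted, definition
FROM sys.check_constraints
WHERE parent_object_id IN (OBJECT_ID('dbo.bigTransactionHistory_stg_2015'),
                           OBJECT_ID('dbo.bigTransactionHistory_stg_2016'));
GO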

[Figure: strategy 2, step 5: switch the staging tables into the partitioned table]

And we finally reach the expected outcome:

[Figure: final partition layout after the second strategy]

Here is the T-SQL script to simulate this second strategy:

USE [AdventureWorks2012]
GO


-- Create a temporary table to store the recent data
CREATE TABLE [dbo].[bigTransactionHistory_stg_temp](
	[TransactionID] [int] NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[status] [bit] NULL
)
ON [PRIMARY]
GO

-- Same clustered index structure as the partitioned table (required for SWITCH)
CREATE CLUSTERED INDEX [idx_bigtransactionhistory_transactiondate_temp] 
ON [dbo].[bigTransactionHistory_stg_temp]
(
	[TransactionDate] ASC,
	[TransactionID] ASC
)
ON [PRIMARY]
GO


-- SWITCH data from the bigTransactionHistory_stg table (primary filegroup) 
-- to the bigTransactionHistory_stg_temp table (primary filegroup)
ALTER TABLE [dbo].[bigTransactionHistory_stg]
SWITCH PARTITION 2 TO [dbo].[bigTransactionHistory_stg_temp] 
GO


-- Get rid of the primary partition by merging the year 2015 boundary
ALTER PARTITION FUNCTION BigTransactionRange()
MERGE RANGE(N'2015-01-01T00:00:00.000');
GO

-- Preparation of new partitions
ALTER DATABASE [AdventureWorks2012] 
ADD FILEGROUP [FG2015]
GO
ALTER DATABASE [AdventureWorks2012] 
ADD FILE ( NAME = N'FG2015', FILENAME = N'E:\SQLSERVER\INST14\DATA\AdventureWorks2012_2015.ndf' , SIZE = 1048KB , FILEGROWTH = 1024KB ) 
TO FILEGROUP [FG2015] 
GO 

ALTER DATABASE [AdventureWorks2012] 
ADD FILEGROUP [FG2017]
GO
ALTER DATABASE [AdventureWorks2012] 
ADD FILE ( NAME = N'FG2017', FILENAME = N'E:\SQLSERVER\INST14\DATA\AdventureWorks2012_2017.ndf' , SIZE = 1048KB , FILEGROWTH = 1024KB ) 
TO FILEGROUP [FG2017] 
GO 

ALTER DATABASE [AdventureWorks2012] 
ADD FILEGROUP [FG2016]
GO
ALTER DATABASE [AdventureWorks2012] 
ADD FILE ( NAME = N'FG2016', FILENAME = N'E:\SQLSERVER\INST14\DATA\AdventureWorks2012_2016.ndf' , SIZE = 1048KB , FILEGROWTH = 1024KB ) 
TO FILEGROUP [FG2016] 
GO 


-- create new boundary 2015 --> FG2015
ALTER PARTITION SCHEME TransactionsPS1 NEXT USED [FG2015];
GO

ALTER PARTITION FUNCTION BigTransactionRange()
SPLIT RANGE (N'2015-01-01T00:00:00.000');
GO

-- create new boundary 2016 --> FG2016
ALTER PARTITION SCHEME TransactionsPS1 NEXT USED [FG2016];
GO

ALTER PARTITION FUNCTION BigTransactionRange()
SPLIT RANGE (N'2016-01-01T00:00:00.000');
GO

-- Split range with new boundary 2017 --> FG2017
ALTER PARTITION SCHEME TransactionsPS1 NEXT USED [FG2017];
GO

ALTER PARTITION FUNCTION BigTransactionRange()
SPLIT RANGE (N'2017-01-01T00:00:00.000');
GO

-- The current (FG2016) and next (FG2017) partitions must not be compressed
ALTER INDEX [idx_bigtransactionhistory_transactiondate]
ON [dbo].[bigTransactionHistory_stg]
REBUILD PARTITION = 3 WITH (DATA_COMPRESSION = NONE);
GO

ALTER INDEX [idx_bigtransactionhistory_transactiondate]
ON [dbo].[bigTransactionHistory_stg]
REBUILD PARTITION = 4 WITH (DATA_COMPRESSION = NONE);
GO


-- Create staging tables
CREATE TABLE [dbo].[bigTransactionHistory_stg_2015](
	[TransactionID] [int] NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[status] [bit] NULL
)
ON [FG2015]
GO

CREATE CLUSTERED INDEX [idx_bigtransactionhistory_transactiondate_2015] 
ON [dbo].[bigTransactionHistory_stg_2015]
(
	[TransactionDate] ASC,
	[TransactionID] ASC
)
WITH (DATA_COMPRESSION = PAGE)
ON [FG2015]
GO


CREATE TABLE [dbo].[bigTransactionHistory_stg_2016](
	[TransactionID] [int] NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[status] [bit] NULL
)
ON [FG2016]
GO

CREATE CLUSTERED INDEX [idx_bigtransactionhistory_transactiondate_2016] 
ON [dbo].[bigTransactionHistory_stg_2016]
(
	[TransactionDate] ASC,
	[TransactionID] ASC
)
ON [FG2016]
GO


-- Load data into staging tables
INSERT INTO dbo.bigTransactionHistory_stg_2016 WITH (TABLOCK)
([TransactionID], [ProductID], [TransactionDate], [Quantity], [ActualCost], [status])
SELECT 
	[TransactionID], [ProductID], [TransactionDate], [Quantity], [ActualCost], [status]
FROM [dbo].[bigTransactionHistory_stg_temp] 
WHERE TransactionDate >= '20160101';
GO

INSERT INTO dbo.bigTransactionHistory_stg_2015 WITH (TABLOCK)
([TransactionID], [ProductID], [TransactionDate], [Quantity], [ActualCost], [status])
SELECT 
	[TransactionID], [ProductID], [TransactionDate], [Quantity], [ActualCost], [status]
FROM [dbo].[bigTransactionHistory_stg_temp] 
WHERE TransactionDate >= '20150101' AND TransactionDate < '20160101';
GO

-- Add check constraints to the staging tables before switching into the partitioned table
ALTER TABLE dbo.bigTransactionHistory_stg_2015
ADD CONSTRAINT CHK_2015 CHECK (TransactionDate BETWEEN '20150101 00:00:00.000' AND '20151231 23:59:59.997' AND TransactionDate IS NOT NULL)
GO

ALTER TABLE dbo.bigTransactionHistory_stg_2016
ADD CONSTRAINT CHK_2016 CHECK (TransactionDate BETWEEN '20160101 00:00:00.000' AND '20161231 23:59:59.997' AND TransactionDate IS NOT NULL)
GO

ALTER TABLE [dbo].[bigTransactionHistory_stg_2015]
SWITCH TO [dbo].[bigTransactionHistory_stg] PARTITION 2

ALTER TABLE [dbo].[bigTransactionHistory_stg_2016]
SWITCH TO [dbo].[bigTransactionHistory_stg] PARTITION 3

--DROP TABLE [dbo].[bigTransactionHistory_stg_temp];
--DROP TABLE [dbo].[bigTransactionHistory_stg_2015]
--DROP TABLE [dbo].[bigTransactionHistory_stg_2016]

Yes, I admit this is a more sophisticated process, but trust me, it is much more efficient in terms of performance because it includes only one real data movement across all the steps, compared to the first strategy. In my case, executing the above script took roughly 40 seconds, an 88% improvement … not bad 🙂

Happy partitioning!

By David Barbarin