Tuesday, 14 October 2014

Automating Sliding Partition Windows - Part 4

Recurring Tasks

In the previous posts of this series I explained both the concept of the sliding partition window and the way I have configured my system for an automated sliding window. This post will delve into the automated part of the process: the tasks that recur on a regular, in my case monthly, basis.

I have decided to use an SSIS package to execute the various steps of the sliding window, but it is just as valid to wrap the steps into cursors or loops (obviously, you only need a loop if you have more than one table, partition scheme and/or function in your database). Depending on how much you plan to automate, there are some challenges in using SSIS. I will be explaining how I've constructed my SSIS package, but it's outside the scope of this post series to explain how to use SSIS itself. There are many resources on the web that can help you if needed.
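To make this concrete, here is a sketch of the core T-SQL the recurring process has to run for each partitioned table; the function, scheme, table, filegroup and boundary values below are illustrative placeholders, not my actual objects.

-- 1. Switch the oldest partition out to an empty staging table (a metadata-only operation).
ALTER TABLE dbo.Sales SWITCH PARTITION 1 TO dbo.Sales_Staging;

-- 2. Merge away the now-empty boundary at the back of the window.
ALTER PARTITION FUNCTION pf_MonthlyRange() MERGE RANGE ('20140101');

-- 3. Tell the partition scheme which filegroup the next new partition should use...
ALTER PARTITION SCHEME ps_MonthlyRange NEXT USED [FG_2014_12];

-- 4. ...and split a new boundary at the front of the window for the upcoming month.
ALTER PARTITION FUNCTION pf_MonthlyRange() SPLIT RANGE ('20141201');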

Tuesday, 13 May 2014

Automating Sliding Partition Windows - Part 3

It's been a long while since I've worked on this - and believe me, I want to, as every month I'm spending too much time manually sliding my various partitions, which is also very prone to mistakes!

In the meantime I found a really good post describing an automated sliding partition window that achieves something a little different. It works on the following premise:

  • 7 day partitions that, at the end of each week, are merged into
  • 5 weekly partitions that are then merged into 
  • 6 monthly partitions that are finally merged into 
  • 1 large archive partition
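Each of those merge steps boils down to removing boundary points from the partition function so that, for example, seven daily partitions collapse into a single weekly one. A rough sketch, with a hypothetical function name and dates:

-- Remove the interior daily boundaries of the week that has just ended.
ALTER PARTITION FUNCTION pf_Sliding() MERGE RANGE ('20140506');
ALTER PARTITION FUNCTION pf_Sliding() MERGE RANGE ('20140507');
-- ...one MERGE RANGE per remaining daily boundary in the completed week.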

I'm looking at implementing something like this in my company's OLAP database and, combined with SQL Server 2012's columnstore indexes, I am hoping to see some great performance improvements!

Here's the post:
http://blogs.msdn.com/b/menzos/archive/2008/06/30/table-partitioning-sliding-window-case.aspx


Tuesday, 4 March 2014

Modify Logshipping Restore Job

In my previous post I shared a stored procedure that dynamically creates and executes a restore log script in order to ease the recovery of logshipping after one or more files have been added to a logshipped database.

As I described in that post I was running into this problem on a monthly basis due to a sliding partition window scenario I maintain on several databases. After I wrote the stored procedure it was a lot easier to fix the problem after new files were added, but I still needed to wait for the logshipping restore job to fail and then intervene and execute the SP manually to recover from the failure. Being busy, not to mention forgetful, I sometimes left logshipping broken for several hours at a time.

Unfortunately, you cannot pro-actively inform the failover server that new files are on their way. But what finally did occur to me was to modify the job that restores the transaction log to include the stored procedure. In short, I added a second step that is run only if the first step fails. It works a treat.
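In case it helps anyone doing the same, the change can be scripted with the SQL Server Agent procedures in msdb. The job, step and procedure names below are placeholders for mine, not the real ones.

USE msdb;
GO

-- Add a recovery step that runs the fix-up stored procedure.
EXEC dbo.sp_add_jobstep
     @job_name          = N'LSRestore_MyPartitionedDB',
     @step_name         = N'Fix logshipping after new file',
     @subsystem         = N'TSQL',
     @database_name     = N'master',
     @command           = N'EXEC Admin_DB.dbo.usp_FixLogshippingRestore;',
     @on_success_action = 1,    -- quit the job reporting success
     @on_fail_action    = 2;    -- quit the job reporting failure

-- Make the original restore step fall through to the new step only when it fails.
EXEC dbo.sp_update_jobstep
     @job_name        = N'LSRestore_MyPartitionedDB',
     @step_id         = 1,
     @on_fail_action  = 4,      -- on failure, go to step...
     @on_fail_step_id = 2;      -- ...the new recovery step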

Friday, 10 January 2014

An SP to Fix Logshipping After Adding a New Database File

UPDATE: 22 Jan 2014 - Fixed some bugs, added multi-file support and added a few notes

Every month I need to maintain the various sliding partition windows on several databases. If you've read some of my previous posts you may remember that this involves adding new files for the upcoming partitions. My company's databases are also logshipped as part of the disaster recovery strategy.

What invariably happens every month is that I add the new files, make the necessary partition changes and then wonder why, a little later in the day, I'm receiving alerts for failed transaction log restores. The logshipped database obviously doesn't "know" about the new file, and the restore fails with an error message complaining about it.
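The fix that the stored procedure builds dynamically is, at its heart, just a RESTORE LOG with a MOVE clause for each new file. A hand-written version might look like this (the database, backup and file names are hypothetical):

-- Restore the log backup that carries the ADD FILE operation, telling the secondary
-- where to place the new file.
RESTORE LOG MyPartitionedDB
FROM DISK = N'\\BackupShare\MyPartitionedDB\MyPartitionedDB_20140110.trn'
WITH MOVE N'MyPartitionedDB_2014_02' TO N'E:\SQLData\MyPartitionedDB_2014_02.ndf',
     NORECOVERY;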

Monday, 18 November 2013

Database Snapshot Stored Procedure

We've been trying to tighten up the database release process at work. This involves lots of big and small changes, including things like using version control software better, documenting database code, and putting together some default scripts that run with every release.

It occurred to me that even if you have a good process and set of checklists for the release cycle it is still quite easy to miss small but ultimately important steps. Things get hectic, last-minute changes are made, etc., etc., and the next thing you know you've forgotten a basic, but important, step like creating a database snapshot, disabling scheduled jobs or revoking user access for the duration of the release. There are probably a dozen other small tasks that can be left out.
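For reference, creating a database snapshot by hand looks something like this (the database and file names are hypothetical):

-- Create a pre-release snapshot of MyDB to roll back to if the release goes wrong.
CREATE DATABASE MyDB_PreRelease_Snapshot
ON (NAME = MyDB_Data, FILENAME = N'D:\Snapshots\MyDB_PreRelease_Snapshot.ss')
AS SNAPSHOT OF MyDB;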

Saturday, 5 October 2013

Automating Sliding Partition Windows - Part 2

Sliding Partition Window Concept

Before we continue, let’s remind ourselves of what the Sliding Partition Window is and what it gets us. The whole point of the sliding window is to create a sort of data conveyor belt. New data comes into the production table and is placed in the empty partitions (and files where relevant) and the old data is “switched” or moved off to a staging table for further processing and, crucially, removal. In this way the table slides forward along its partitions but remains essentially the same size. There will always be a set number of partitions worth of data in your production table keeping it lean and fit. In addition, once you’ve moved the data from the staging tables to an archive or reporting database the physical files can be removed and your production database can be maintained at a manageable, near constant, size.
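To put some DDL behind the metaphor, the moving parts look roughly like this; the function, scheme, table and boundary values are illustrative only.

-- A monthly RANGE RIGHT partition function and the scheme that maps it to storage.
CREATE PARTITION FUNCTION pf_MonthlyRange (datetime)
AS RANGE RIGHT FOR VALUES ('20130701', '20130801', '20130901');

CREATE PARTITION SCHEME ps_MonthlyRange
AS PARTITION pf_MonthlyRange ALL TO ([PRIMARY]);

-- The production table lives on the scheme, keyed on the partitioning column.
CREATE TABLE dbo.Sales
(
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON ps_MonthlyRange (OrderDate);

-- An identically structured, empty staging table on the same filegroup lets the
-- oldest partition be "switched" off the production table as a metadata-only operation.
CREATE TABLE dbo.Sales_Staging
(
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON [PRIMARY];

ALTER TABLE dbo.Sales SWITCH PARTITION 1 TO dbo.Sales_Staging;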

Tuesday, 24 September 2013

Automating Sliding Partition Windows - Part 1

Introduction

Table partitioning, as it was introduced in SQL Server 2005 Enterprise Edition, is an incredibly useful feature. It enhances the potential for performance tuning as well as data, well, err, partitioning - or, to use another word, data separation.

In terms of performance, partitioning a table and its indexes allows queries (assuming they are written to exploit the partitioning column) to target specific sets of data to reduce table or index scans and therefore improve (reduce) locking on large tables. If you add to this the ability to place a table’s individual partitions on separate filegroups, I/O contention can be reduced by separating the data further, potentially providing even greater performance gains.
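For example, a query that filters on the partitioning column can be limited to a single partition rather than scanning the whole table (the table, column and function names are hypothetical):

-- Filtering on the partitioning column lets the optimiser touch only the relevant partition.
SELECT SUM(Amount)
FROM dbo.Sales
WHERE OrderDate >= '20130801' AND OrderDate < '20130901';

-- $PARTITION shows which partition a given value maps to under the partition function.
SELECT $PARTITION.pf_MonthlyRange('20130815') AS PartitionNumber;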

Sunday, 18 August 2013

DMV Queries Tied to My Custom Index Stats Tables

In a previous post on this blog I mentioned a set of DMV queries that I use. I've been chopping and changing the base set of queries for quite a while to suit my needs.

I have also written a series of posts describing how to perform index and statistics maintenance on a SQL Server's databases without losing the very important index stats that the queries mentioned above return.

So, in this post I will show you how I've modified a few of the index focused DMV queries to include the maintenance tables where I now store the cumulative index statistics.
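The shape of the change is simple: where the original queries read only from sys.dm_db_index_usage_stats, they now also add in the figures accumulated in the maintenance table. A simplified sketch, using a hypothetical history table in Admin_DB and assuming one accumulated row per index:

-- Current usage from the DMV plus the usage recorded before past index rebuilds.
-- Admin_DB.dbo.IndexUsageHistory is a hypothetical stand-in for my maintenance table.
SELECT  OBJECT_NAME(us.object_id)                    AS table_name,
        i.name                                       AS index_name,
        us.user_seeks   + ISNULL(h.user_seeks, 0)    AS total_user_seeks,
        us.user_scans   + ISNULL(h.user_scans, 0)    AS total_user_scans,
        us.user_updates + ISNULL(h.user_updates, 0)  AS total_user_updates
FROM    sys.dm_db_index_usage_stats AS us
JOIN    sys.indexes AS i
          ON i.object_id = us.object_id
         AND i.index_id  = us.index_id
LEFT JOIN Admin_DB.dbo.IndexUsageHistory AS h
          ON h.database_id = us.database_id
         AND h.object_id   = us.object_id
         AND h.index_id    = us.index_id
WHERE   us.database_id = DB_ID();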

Wednesday, 7 August 2013

Defragging Indexes Without Losing Index Stats - Part 5

The stored procedure

/*********************************************************************************
UPDATE: Bug fixes and increased FILLFACTOR control!
*********************************************************************************/
The first 4 posts of this series covered the concepts of recording index usage statistics for performance tuning while continuing to maintain a database’s tables and indexes. They also presented the individual SQL scripts to perform the various parts of that process. The most recent post put all of those scripts together into a single set of queries that could be run on a single database.

Most of us, however, have more than one database on a SQL Server instance. Therefore, deploying that last script on a large number of databases is a bit messy. Any modifications will be difficult to manage.

In this post I will present the stored procedure along with a sample execute statement to run it. It is designed to be run centrally, and will loop through all the databases in the instance (there is a parameter to include/exclude the system databases). I deployed the SP onto my Admin_DB where I also store the tables that keep the index usage statistics and table maintenance history.
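To give a flavour of that execute statement, a call might look something like this; the procedure and parameter names are illustrative rather than the actual signature.

-- Run the central maintenance procedure from the admin database.
EXEC Admin_DB.dbo.usp_IndexMaintenance
     @IncludeSystemDatabases = 0,   -- skip master, model, msdb and tempdb
     @FillFactor             = 90;  -- fill factor to apply on rebuilds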

Wednesday, 3 July 2013

A Script to Fix Orphaned Users

I find that one (well, actually, one of many) really annoying thing about restoring databases to different servers is the orphaned user! I am notorious for forgetting about them. Here's a common scenario in my office:
Colleague: Hey DBA, can you refresh a copy of the database on the QA server?
Me: Sure
Me (after restore finishes): Hey colleague, it's done.
Colleague: Hey DBA! I CAN'T LOG IN!
Me: *!*&^%$$£"!"£$%"£$
So, in order to help me with this, I've written a little script to make my life a little easier:
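The essence of the fix is re-mapping each orphaned SQL-authenticated user to the server login of the same name. A minimal sketch of that idea (not the full script) looks like this:

-- Re-map orphaned SQL users to the server login with the same name.
-- Review the generated statements before executing them.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER USER ' + QUOTENAME(dp.name)
             + N' WITH LOGIN = ' + QUOTENAME(sp.name) + N';' + CHAR(10)
FROM sys.database_principals AS dp
JOIN sys.server_principals   AS sp ON sp.name = dp.name
WHERE dp.type = 'S'               -- SQL-authenticated database users
  AND dp.principal_id > 4         -- skip dbo, guest, sys, INFORMATION_SCHEMA
  AND dp.sid <> sp.sid;           -- SIDs no longer match after the restore: orphaned

PRINT @sql;                       -- inspect first
EXEC sys.sp_executesql @sql;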

Hope this helps.