I'm keeping this blog as a repository for SQL tips and tricks I have learned over my time working with SQL Server.
Monday, 18 November 2013
We've been trying to tighten up the database release process at work. This involves lots of big and small changes, including using version control software better, documenting database code, and putting together some default scripts that run with every release.
It occurred to me that even if you have a good process and set of checklists for the release cycle, it is still quite easy to miss small but ultimately important steps. Things get hectic, last-minute changes are made, and the next thing you know you've forgotten a basic but important step like creating a database snapshot, disabling scheduled jobs or revoking user access for the duration of the release. There are probably a dozen other small tasks that can be left out.
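Two of those easy-to-forget steps can be scripted up front. The snippet below is only a sketch: the database name, logical file name, snapshot path and job name are all made up, and database snapshots need Enterprise Edition on the SQL Server versions covered on this blog.

--Hypothetical pre-release safety net: all object names here are placeholders.
--1. Snapshot the database so there is an instant reference point before the release.
CREATE DATABASE ReleaseDB_PreRelease_Snapshot
ON
(
    NAME = ReleaseDB_Data,                              --logical name of the source data file
    FILENAME = 'D:\Snapshots\ReleaseDB_PreRelease.ss'   --sparse file that will hold the snapshot pages
)
AS SNAPSHOT OF ReleaseDB;

--2. Disable a scheduled job for the duration of the release.
EXEC msdb.dbo.sp_update_job @job_name = N'Nightly ETL', @enabled = 0;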
Saturday, 5 October 2013
Automating Sliding Partition Windows - Part 2
Sliding Partition Window Concept
Before we continue, let’s remind ourselves of what the Sliding Partition Window is and what it gets us. The whole point of the sliding window is to create a sort of data conveyor belt. New data comes into the production table and is placed in the empty partitions (and files where relevant), and the old data is “switched” or moved off to a staging table for further processing and, crucially, removal. In this way the table slides forward along its partitions but remains essentially the same size. There will always be a set number of partitions' worth of data in your production table, keeping it lean and fit. In addition, once you’ve moved the data from the staging tables to an archive or reporting database, the physical files can be removed and your production database can be maintained at a manageable, near constant, size.
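To make the conveyor belt idea concrete, one "slide" boils down to the three statements below. This is an illustration only, not the automated script this series builds: the table, staging table, partition function, scheme and boundary values are all made up.

--1. Switch the oldest populated partition out to an empty staging table with an identical structure
--   (the staging table must live on the same filegroup as that partition).
ALTER TABLE dbo.Sales SWITCH PARTITION 2 TO dbo.Sales_Staging;

--2. Merge away the now-empty boundary at the trailing edge of the partition function.
ALTER PARTITION FUNCTION pfSales() MERGE RANGE ('2013-01-01');

--3. Split in a new boundary at the leading edge so an empty partition is ready for incoming data.
ALTER PARTITION SCHEME psSales NEXT USED [FG_2014_01];
ALTER PARTITION FUNCTION pfSales() SPLIT RANGE ('2014-01-01');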
Tuesday, 24 September 2013
Automating Sliding Partition Windows - Part 1
Introduction
Table partitioning, as it was introduced in SQL Server 2005 Enterprise Edition, is an incredibly useful feature. It enhances the potential for performance tuning as well as data, well, err, partitioning - or to use another word, data separation. In terms of performance, partitioning a table and its indexes allows queries (assuming they are written to exploit the partitioning column) to target specific sets of data to reduce table or index scans and therefore improve (reduce) locking on large tables. If you add to this the ability to place a table’s individual partitions on separate filegroups, I/O contention can be reduced by separating the data further, potentially providing even greater performance gains.
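For anyone who hasn't set one up before, the moving parts look roughly like this. The names, boundary values and single-filegroup mapping are purely illustrative.

--A minimal, hypothetical partitioned table using monthly RANGE RIGHT boundaries.
CREATE PARTITION FUNCTION pfSalesByMonth (DATETIME)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2013-02-01', '2013-03-01');

CREATE PARTITION SCHEME psSalesByMonth
AS PARTITION pfSalesByMonth ALL TO ([PRIMARY]);   --or map each partition to its own filegroup to spread I/O

CREATE TABLE dbo.Sales
(
    SaleID INT IDENTITY(1, 1) NOT NULL,
    SaleDate DATETIME NOT NULL,
    Amount MONEY NOT NULL
) ON psSalesByMonth (SaleDate);                   --SaleDate is the partitioning column queries should filter on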
Sunday, 18 August 2013
DMV Queries Tied to My Custom Index Stats Tables
I've mentioned a set of DMV queries that I use in a previous post on this blog. I've been chopping and changing the base set of queries for quite a while to suit my needs.
I have also written a series of posts describing how to perform index and statistics maintenance on a SQL Server's databases without losing the very important index stats that the queries mentioned above return.
So, in this post I will show you how I've modified a few of the index focused DMV queries to include the maintenance tables where I now store the cumulative index statistics.
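To give a flavour of the change, the shape of the modification is roughly the join below. The column names on the maintenance table are assumptions for illustration, not the exact schema used in the maintenance series.

--Sketch only: combine the live DMV counters with the cumulative history table.
SELECT OBJECT_NAME(ius.[object_id]) AS TableName,
    i.[name] AS IndexName,
    ius.user_seeks + ius.user_scans + ius.user_lookups AS ReadsSinceRestart,
    hist.CumulativeReads                                --previously captured totals (assumed column name)
FROM sys.dm_db_index_usage_stats AS ius
INNER JOIN sys.indexes AS i
    ON i.[object_id] = ius.[object_id]
    AND i.index_id = ius.index_id
LEFT JOIN dbo.atblIndexUsageStats AS hist               --the custom maintenance table
    ON hist.[object_id] = ius.[object_id]
    AND hist.index_id = ius.index_id
WHERE ius.database_id = DB_ID();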
Wednesday, 7 August 2013
Defragging Indexes Without Losing Index Stats - Part 5
The stored procedure
/*********************************************************************************
UPDATE: Bug fixes and increased FILLFACTOR control!
*********************************************************************************/
The first 4 posts of this series covered the concepts of recording index usage statistics for performance tuning while continuing to maintain a database’s tables and indexes. They also presented the individual SQL scripts to perform the various parts of that process. The most recent post put all of those scripts together into a single set of queries that could be run on a single database.
Most of us, however, have more than one database on a SQL Server instance. Therefore, deploying that last script on a large number of databases is a bit messy. Any modifications will be difficult to manage.
In this post I will present a sample execute statement to run the stored procedure. It is designed to be run centrally, and will loop through all the databases in the instance (there is a parameter to include/exclude the system databases). I deployed the SP onto my Admin_DB, where I also store the tables that keep the index usage statistics and table maintenance history.
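To give an idea of what that looks like, the call might be along these lines. The procedure and parameter names below are placeholders, not the actual signature.

--Hypothetical example call only.
EXEC Admin_DB.dbo.usp_IndexMaintenanceAllDatabases
    @IncludeSystemDatabases = 0,   --skip master, model, msdb and tempdb
    @FillFactor = 90;              --the FILLFACTOR control mentioned in the update note above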
Wednesday, 3 July 2013
A Script to Fix Orphaned Users
I find that one of the (well, actually, many) really annoying things about restoring databases to different servers is the orphaned user! I am notorious for forgetting about them. Here's a common scenario in my office:
Colleague: Hey DBA, can you refresh a copy of the database on the QA server?
Me: Sure
Me (after restore finishes): Hey colleague, it's done.
Colleague: Hey DBA! I CAN'T LOG IN!
Me: *!*&^%$$£"!"£$%"£$
So, in order to help me with this, I've written a little script to make my life a little easier:
DECLARE @loop int = 1
DECLARE @username sysname

DECLARE @orphanedusers TABLE
(
    id int identity(1, 1)
    , UserName sysname
    , UserSID varchar(36)
)

INSERT INTO @orphanedusers ( [UserName], [UserSID] )
EXEC sp_change_users_login 'report'

WHILE @loop <= ( SELECT MAX(id) FROM @orphanedusers )
BEGIN
    SET @username = ( SELECT UserName FROM @orphanedusers WHERE id = @loop )

    IF @username IN ( SELECT name FROM master.sys.[syslogins] )
    BEGIN
        EXEC sp_change_users_login 'auto_fix', @username
    END

    SET @loop = @loop + 1
END
Hope this helps.
Tuesday, 4 June 2013
Defragging Indexes Without Losing Index Stats - Part 4
Generate UPDATE STATISTICS scripts
In the previous post of this series I provided the script that both determines the indexes that need attention and builds the individual ALTER INDEX scripts. So, now that the indexes have been tended to, it’s time to do the same for table and index statistics. The update statements created are based on the defrag type carried out on the table as well as a few thresholds that determine whether it is necessary to rebuild the statistics. Those thresholds are (a rough sketch of the check follows the list):
- Time passed since the last update is more than 3 days, AND
- The table has at least 1,000 rows, AND
- The number of rows has changed by more than 20% (up or down);
- OR, regardless of the above, more than 30 days have passed since the last statistics update.
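As a rough sketch of that check (not the script from this series), STATS_DATE gives the last update time and sys.dm_db_partition_stats gives current row counts; the percentage-change test would compare the current count with the one recorded in the maintenance history table on the previous run.

--Rough sketch only. The >20% row-change test is omitted because it needs the previous
--row count from the maintenance history table.
SELECT OBJECT_NAME(s.[object_id]) AS TableName,
    s.[name] AS StatsName,
    STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated,
    SUM(ps.row_count) AS CurrentRows
FROM sys.stats AS s
INNER JOIN sys.dm_db_partition_stats AS ps
    ON ps.[object_id] = s.[object_id]
    AND ps.index_id IN (0, 1)                           --heap or clustered index holds the table's row count
GROUP BY s.[object_id], s.[name], s.stats_id
HAVING DATEDIFF(dd, STATS_DATE(s.[object_id], s.stats_id), GETDATE()) > 30
    OR (DATEDIFF(dd, STATS_DATE(s.[object_id], s.stats_id), GETDATE()) > 3
        AND SUM(ps.row_count) >= 1000);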
Script out the indexes in a database
I'm just about to begin a big performance improvement project. I'll be starting with an evaluation of indexes, which will involve synchronizing a development environment's indexes with those in live. I do, however, need to keep a record of the existing indexes in that dev environment.
So I've written the code below. It will return a table with the object_id, table name, index name, index_id and a create index script. This script includes the drop and create statements for both primary keys and unique constraints. There are quite a few good scripts out in the wider web world, but I needed a few specific things so I just wrote my own.
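Purely as a flavour of the general approach (and much simpler than the full version described above, which also handles primary keys, unique constraints and included columns), a stripped-down sketch that builds a basic CREATE INDEX statement from the catalog views might look like this:

--Simplified sketch: plain nonclustered indexes only, key columns in order, no DESC or INCLUDE handling.
SELECT i.[object_id],
    OBJECT_NAME(i.[object_id]) AS TableName,
    i.[name] AS IndexName,
    i.index_id,
    'CREATE ' + CASE WHEN i.is_unique = 1 THEN 'UNIQUE ' ELSE '' END
        + 'NONCLUSTERED INDEX ' + QUOTENAME(i.[name])
        + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(i.[object_id])) + '.' + QUOTENAME(OBJECT_NAME(i.[object_id]))
        + ' (' + STUFF((
                SELECT ', ' + QUOTENAME(c.[name])
                FROM sys.index_columns AS ic
                INNER JOIN sys.columns AS c
                    ON c.[object_id] = ic.[object_id]
                    AND c.column_id = ic.column_id
                WHERE ic.[object_id] = i.[object_id]
                    AND ic.index_id = i.index_id
                    AND ic.is_included_column = 0
                ORDER BY ic.key_ordinal
                FOR XML PATH('')), 1, 2, '') + ')' AS CreateIndexScript
FROM sys.indexes AS i
WHERE i.[type] = 2                                      --nonclustered
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1;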
Thursday, 23 May 2013
Defragging Indexes Without Losing Index Stats - Part 3
In Part 1 and Part 2 of this series I discussed synchronizing index maintenance with index usage statistics logging as an important part of a DBA's performance tuning routine, and presented the script for saving a database's index usage statistics. This part of the series will go into the query that both identifies the indexes that are fragmented and dynamically generates the defrag scripts.
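Purely for orientation (this is not the series' script), the core of the technique is the query shape below. The 10%/30% thresholds and the FILLFACTOR of 90 are common starting points rather than the exact values used here.

--Sketch: read fragmentation from the physical stats DMF and build a REBUILD or REORGANIZE statement.
SELECT OBJECT_NAME(ips.[object_id]) AS TableName,
    i.[name] AS IndexName,
    ips.avg_fragmentation_in_percent,
    'ALTER INDEX ' + QUOTENAME(i.[name]) + ' ON '
        + QUOTENAME(OBJECT_SCHEMA_NAME(ips.[object_id])) + '.' + QUOTENAME(OBJECT_NAME(ips.[object_id]))
        + CASE WHEN ips.avg_fragmentation_in_percent > 30
               THEN ' REBUILD WITH (FILLFACTOR = 90)'   --90 matches the default used elsewhere on this blog
               ELSE ' REORGANIZE'
          END AS DefragScript
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON i.[object_id] = ips.[object_id]
    AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
    AND ips.index_id > 0;                               --skip heaps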
Wednesday, 22 May 2013
Defragging Indexes Without Losing Index Stats - Part 2
In my last post, I introduced the concept of synchronizing index maintenance and index usage statistics logging as an important part of a DBA's performance tuning routine. In this post I will start delving into the individual scripts involved.
UPDATE: The full script (see link at end of post) has been both updated and tested (a bit)
I've also augmented the script below to clean up the atblIndexUsageStats table.
As a recap, below are the main steps involved:
- Log Index usage stats
- Determine which indexes need defragging and generate scripts
- Generate update statistics scripts based on step 2
- Log defrag operation to maintenance history table
- Execute defrag and update stats scripts
- Cleanup logging tables (optional)
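As a taste of step 1 above, the logging is essentially a timestamped snapshot of the DMV inserted into the atblIndexUsageStats table. This is only a sketch: the real script and table schema differ.

--Sketch of step 1 only; the column list on the logging table is assumed for illustration.
INSERT INTO dbo.atblIndexUsageStats ([CaptureDate], [database_id], [object_id], [index_id],
    [user_seeks], [user_scans], [user_lookups], [user_updates])
SELECT GETDATE(), database_id, [object_id], index_id,
    user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();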
Sunday, 12 May 2013
Defragging Indexes Without Losing Index Stats - Part 1
Introduction
As part of my regular performance tuning and maintenance schedule I rely heavily on queries that interrogate the DMV sys.dm_db_index_usage_stats. Based on this DMV I can judge whether indexes should be removed, modified or left as they are (see my previous blog post on SQL Server DMVs). I also believe in regularly defragging the indexes in a database. I recently read a post from Brent Ozar challenging this idea, but if your servers don't have the RAM required for the kind of caching his article suggests, and/or you'll never get the budget to upgrade, you'll need to keep reading.
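That sort of judgement call usually starts with a reads-versus-writes comparison. A simplified version of the check (not the exact query from the DMV post) looks like this:

--Simplified example: lots of writes and no reads since the last restart makes an index a
--candidate for review (remembering that these counters clear down when the instance restarts).
SELECT OBJECT_NAME(ius.[object_id]) AS TableName,
    i.[name] AS IndexName,
    ius.user_seeks + ius.user_scans + ius.user_lookups AS Reads,
    ius.user_updates AS Writes
FROM sys.dm_db_index_usage_stats AS ius
INNER JOIN sys.indexes AS i
    ON i.[object_id] = ius.[object_id]
    AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
    AND ius.user_seeks + ius.user_scans + ius.user_lookups = 0
ORDER BY ius.user_updates DESC;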
Wednesday, 8 May 2013
Answer to the Partitioned Index Puzzle
A few weeks ago I posted a description of an error I encountered when creating a partitioned index on a SQL Server 2005 Standard Edition instance. It had me quite literally tearing my hair out. The error was a typically vague Microsoft one:
Msg 4436, Level 16, State 12, Line 1
UNION ALL view 'PartWindow.dbo.vwPartitionedView' is not updatable because a partitioning column was not found.
I spent quite a few hours staring at my SQL statements and re-reading documentation before the solution hit me.
If you haven't had a chance to read the post, please do before you skip down to the answer. It was, unfortunately, a total "duh!" moment. But at the same time quite easy to overlook.
Sunday, 21 April 2013
A Quick Partitioned View Puzzle
I needed to create a few partitions on a legacy SQL Server 2005 Standard Edition instance. As table partitioning is not supported on Standard Edition, I needed to go the partitioned view route. In my research into how to implement them, I found some really good resources:
I'm not going to explain the basics of partitioning here, so you may want to read the above pages if you're not familiar with it before continuing.
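For anyone who just needs the shape of the thing while reading on, a bare-bones local partitioned view looks like the sketch below (the tables are made up; for the view to be updatable, the partitioning column must carry the range CHECK constraint and be part of each member table's primary key).

--Illustrative skeleton only, not the setup from the puzzle.
CREATE TABLE dbo.Orders2012
(
    OrderID INT NOT NULL,
    OrderDate DATETIME NOT NULL CHECK (OrderDate >= '2012-01-01' AND OrderDate < '2013-01-01'),
    Amount MONEY NOT NULL,
    CONSTRAINT PK_Orders2012 PRIMARY KEY (OrderDate, OrderID)
);
CREATE TABLE dbo.Orders2013
(
    OrderID INT NOT NULL,
    OrderDate DATETIME NOT NULL CHECK (OrderDate >= '2013-01-01' AND OrderDate < '2014-01-01'),
    Amount MONEY NOT NULL,
    CONSTRAINT PK_Orders2013 PRIMARY KEY (OrderDate, OrderID)
);
GO
CREATE VIEW dbo.vwOrders
AS
SELECT OrderID, OrderDate, Amount FROM dbo.Orders2012
UNION ALL
SELECT OrderID, OrderDate, Amount FROM dbo.Orders2013;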
Tuesday, 16 April 2013
ISNULL vs. COALESCE
A friend of mine just asked me when COALESCE should be used. As I'm a DBA and don't do too much SQL development work, I never had a need to use COALESCE (or I never thought I had a need). Therefore, I was inspired to do a little digging. As there has been much written about the similarities and differences I will use this post as a jumping off point to a few good resources that shed light on the topic. Though, I'll highlight some of the points here as well.
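Two of the differences that come up most often are easy to show in a few lines (my own quick examples, not taken from the linked resources):

DECLARE @v VARCHAR(3)
SET @v = NULL
SELECT ISNULL(@v, 'truncated')                --returns 'tru' because the result takes VARCHAR(3) from the first argument
SELECT COALESCE(@v, 'truncated')              --returns 'truncated' because COALESCE follows data type precedence
SELECT COALESCE(NULL, NULL, 'third value')    --COALESCE accepts more than two arguments; ISNULL takes exactly two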
Thursday, 4 April 2013
Last SQL Server Instance Start Date
I was interested in finding when a new SQL Server instance I started working on was last restarted and came across this blog post.
Although the information was very good I felt the queries could be tweaked slightly and broken out into the variations needed for the different SQL Server versions.
I find it helpful to know when the database server was last started because of the various Dynamic Management Views and Functions (DMVs) that I use. Many of these collate cumulative statistics beginning at a SQL Server start, which means they are cleared down when the instance is restarted. Knowing the server's uptime therefore tells you how much data has been collected and whether it is representative of how the server is being used: i.e. one day's worth of statistics vs. one month's.
As the blog post mentioned above outlines, there are several ways of getting to this data and some of them work only in certain versions of SQL Server. Here they are in my modified form:
NB: I am ignoring SQL Server 2000
Method 1 - Check TempDB create date
SQL Server 2005+
SELECT [crdate] AS [LastStartupDate],
    DATEDIFF(dd, [crdate], GETDATE()) AS [Days since server start]
FROM [dbo].[sysdatabases]
WHERE [name] = 'tempdb'
    AND [crdate] < GETDATE() - 0 --filter results to instances restarted greater than X days
Method 2 - Check the system DMV
SQL Server 2008+
SELECT [sqlserver_start_time] AS [LastStartupDate],
    DATEDIFF(dd, [sqlserver_start_time], GETDATE()) AS [Days since server start]
FROM [sys].[dm_os_sys_info]
Method 3 - Interrogate the SQL Server error log
SQL Server 2005 - 2008
DECLARE @LogNum TINYINT
SET @LogNum = 0 --0=current 1=.1 etc
DECLARE @LogType TINYINT
SET @LogType = 1 --1=SQL 2=Agent
DECLARE @ProcessType VARCHAR(64)
SET @ProcessType = 'Server'
DECLARE @TextSearch VARCHAR(20)
SET @TextSearch = 'Server process ID is'

DECLARE @ErrLog AS TABLE ([LogDate] DATETIME, [ProcessInfo] VARCHAR(64), [TEXT] VARCHAR(MAX))

INSERT INTO @ErrLog
EXEC sys.xp_readerrorlog @LogNum, @LogType, @ProcessType, @TextSearch

--grab the first occurrence and report back the timestamp as the last startup
SELECT MIN(LogDate) AS [LastStartupDate],
    DATEDIFF(dd, MIN(LogDate), GETDATE()) AS [Days since server start]
FROM @ErrLog
WHERE LogDate < GETDATE() - 0 --filter results to instances restarted greater than X days
SQL Server 2008 - 2012
In SQL Server 2012 the extended stored procedure accepts only two parameters
DECLARE @LogNum TINYINT = 0 --0=current 1=.1 etc
DECLARE @LogType TINYINT = 1 --1=SQL 2=Agent
DECLARE @ProcessType VARCHAR(64) = 'Server'
DECLARE @TextSearch VARCHAR(20) = 'Server process ID is'

DECLARE @ErrLog AS TABLE ([LogDate] DATETIME, [ProcessInfo] VARCHAR(64), [TEXT] VARCHAR(MAX))

INSERT INTO @ErrLog
EXEC sys.xp_readerrorlog @LogNum, @LogType --, @ProcessType, @TextSearch

--grab the first occurrence and report back the timestamp as the last startup
SELECT MIN(LogDate) AS [LastStartupDate],
    DATEDIFF(dd, MIN(LogDate), GETDATE()) AS [Days since server start]
FROM @ErrLog
WHERE [ProcessInfo] = @ProcessType
    AND [TEXT] LIKE '%' + @TextSearch + '%'
    AND LogDate < GETDATE() - 0 --filter results to instances restarted greater than X days
Monday, 18 March 2013
SQL Server 2005+ Performance Tuning - Part 2
This series of posts aims to provide an introduction to resolving big and sudden performance degradation.
In my last post I covered the reasons for poor performance and the ways of pinpointing performance bottlenecks.
In this post I will detail some of the ways of resolving those performance bottlenecks.
Thursday, 14 March 2013
SQL Server 2005+ Performance Tuning - Part 1
This next series of posts aims to provide an introduction to resolving big and sudden performance degradation. I will also add a few tips for general performance-related database best practice and tidying.
Tuesday, 12 March 2013
Remove a Cursor From a Stored Procedure - Part 2
Why is it so slow?!?!?
Thursday, 7 March 2013
Remove a Cursor From a Stored Procedure - Part 1
In my organisation, and no doubt many others, the problem stems from the fact that the developers writing SQL are not SQL developers. Rather, they are [insert relevant coding language] developers. And in many programming languages processing is best performed on a row-by-row basis. Not so in SQL, especially T-SQL.
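To make that concrete with a deliberately simple, made-up example: the row-by-row instinct reaches for a cursor or loop, whereas the set-based version is a single statement the optimizer can deal with as a whole.

--Hypothetical tables; the point is the shape of the statement, not the schema.
--Row-by-row thinking: open a cursor over dbo.Customers and run one UPDATE per iteration.
--Set-based thinking: one UPDATE handles every customer at once.
UPDATE c
SET c.OrderCount = o.OrderCount
FROM dbo.Customers AS c
INNER JOIN
(
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM dbo.Orders
    GROUP BY CustomerID
) AS o
    ON o.CustomerID = c.CustomerID;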
Sunday, 24 February 2013
Removing a Merge Replication Publication
Merge replication in any SQL Server edition is notoriously bad tempered. It does seem to be getting more stable and "intelligent" as time goes on but problems still rear their ugly heads on occasion.
Monday, 28 January 2013
DMV Starter Pack
A while back I discovered the DMV Starter Pack, a really useful set of queries exploiting the very powerful Dynamic Management Views and Functions in SQL Server 2005 and 2008.
What I've done is add a few bits to the queries I use most often. Mostly the index queries. In the missing and bad indexes queries I added CREATE and DROP statement columns. Obviously, the CREATE statements conform to the naming conventions of my company [IX_tablename_column1name_column2name...], but this is quite easily modified. I also use a fillfactor of 90 by default. I hope this is helpful.
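As a rough illustration of that addition (a simplified version, not the modified Starter Pack query itself), the CREATE statement column built from the missing index DMVs looks something like this:

--Simplified sketch following the IX_tablename_columns naming convention and FILLFACTOR of 90 mentioned above.
SELECT 'CREATE NONCLUSTERED INDEX [IX_' + OBJECT_NAME(mid.[object_id], mid.database_id) + '_'
        + REPLACE(REPLACE(REPLACE(ISNULL(mid.equality_columns, mid.inequality_columns), '[', ''), ']', ''), ', ', '_') + ']'
        + ' ON ' + mid.[statement]
        + ' (' + ISNULL(mid.equality_columns, '')
        + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ', ' ELSE '' END
        + ISNULL(mid.inequality_columns, '') + ')'
        + ISNULL(' INCLUDE (' + mid.included_columns + ')', '')
        + ' WITH (FILLFACTOR = 90)' AS CreateIndexStatement
FROM sys.dm_db_missing_index_details AS mid
WHERE mid.database_id = DB_ID();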