Showing posts with label sql server 2005. Show all posts

Friday, 19 January 2018

Querying the Execution Plan XML for Subtree Cost

There are quite a lot of great tools and queries available to the DBA that aid in troubleshooting and performance tuning. The list is incredibly long, but I mainly use RedGate's SQL Monitor (which has come on in leaps and bounds over the last 5 years), Glenn Berry's DMV queries, Adam Machanic's sp_whoisactive and Brent Ozar Unlimited's sp_Blitz suite (predominantly sp_Blitz and sp_BlitzIndex). In writing this, I see that Brent Ozar has expanded his offering, so it looks like I have a weekend project ahead of me!

Friday, 28 July 2017

Cleanup SQL Server Logins

Cleaning up after someone leaves. We all do it, right? Our companies all have watertight policies and procedures to handle it, or maybe they have policies but not procedures. Or maybe they just kind of don't.

Well, if your company is anything like most of the places I've worked, it's somewhere in the middle. And from a DBA perspective, especially on non-production environments, your list of logins and users becomes a bit long and messy after a while.
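The full script is in the post, but as a flavour of the approach, a query like the one below (a minimal sketch, not the post's actual script) can flag database users whose corresponding server login no longer exists:

```sql
-- Minimal sketch: find database users with no matching server login (orphaned users).
-- Run in each user database; user SIDs are matched against server principals.
SELECT dp.name        AS [OrphanedUser],
       dp.type_desc   AS [UserType],
       dp.create_date AS [Created]
FROM   sys.database_principals AS dp
       LEFT JOIN sys.server_principals AS sp
              ON dp.sid = sp.sid
WHERE  sp.sid IS NULL
       AND dp.type IN ('S', 'U', 'G')   -- SQL users, Windows users/groups
       AND dp.name NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys');
```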

Monday, 7 December 2015

Databases With No Recent Backups

I've written the script below to easily identify databases that have not had a backup taken within a certain amount of time. This could be useful if you need to build your own monitoring. However, even if you use a third-party monitoring tool, as I do, it is helpful to be able to run a query like this on the server that raised an alert.

Many of the servers I manage have a large number of databases. The alerts I receive for overdue backups are database specific. But, in general, if one database is overdue, chances are something has happened on the server. So this query can give me a quick overview.
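The full script is in the post; as an illustration of the approach, a cut-down version (a sketch, with @DaysThreshold as an example parameter) might look like this:

```sql
-- Minimal sketch: databases whose most recent full backup is older than
-- @DaysThreshold, including databases that have never been backed up.
DECLARE @DaysThreshold INT;
SET @DaysThreshold = 7;

SELECT d.name AS [DatabaseName],
       MAX(b.backup_finish_date) AS [LastFullBackup]
FROM   master.sys.databases AS d
       LEFT JOIN msdb.dbo.backupset AS b
              ON b.database_name = d.name
             AND b.type = 'D'                    -- 'D' = full database backup
WHERE  d.name <> 'tempdb'
       AND d.state_desc = 'ONLINE'
GROUP  BY d.name
HAVING MAX(b.backup_finish_date) IS NULL         -- never backed up
    OR MAX(b.backup_finish_date) < DATEADD(dd, -@DaysThreshold, GETDATE());
```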

Friday, 11 September 2015

Finding When A SQL Agent Job Ran

A quick post for today since I haven't added anything in quite a while.

Recently, I was investigating a deadlock that was occurring, like clockwork, on a particular server. A trace revealed that the offending query was being executed by the SQL Server Agent account. The host machine that executed a query involved in the deadlock is our SSIS host, so there are loads of jobs with a dizzying array of schedules to weed through.
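The post walks through the details; the gist is querying the job history tables in msdb. A minimal sketch (the time window is an example, and msdb's agent_datetime helper is used to convert the INT-encoded run date/time):

```sql
-- Minimal sketch: jobs whose run start falls in a given window, from msdb job history.
-- run_date/run_time are stored as INTs; msdb.dbo.agent_datetime converts them.
SELECT j.name AS [JobName],
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS [RunStart],
       h.run_duration,            -- HHMMSS format
       h.run_status               -- 1 = succeeded, 0 = failed
FROM   msdb.dbo.sysjobhistory AS h
       INNER JOIN msdb.dbo.sysjobs AS j
               ON j.job_id = h.job_id
WHERE  h.step_id = 0              -- step 0 = the job outcome row
       AND msdb.dbo.agent_datetime(h.run_date, h.run_time)
           BETWEEN '2015-09-11 02:00' AND '2015-09-11 03:00'
ORDER  BY [RunStart];
```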

Friday, 24 July 2015

SQL Server Optimization

This post aims to aggregate a lot of the best practice or otherwise recommended configuration for SQL Server performance. It is meant as a summary or quick-reference sheet. All of these concepts are discussed at length all over the internet, so I will not spend much time explaining them. 

As with any recommendations of this sort, please use them carefully and test before deploying into a production environment. Also, depending on your specific environment's configuration, not all of the below may be advisable or bring about clear benefits. To quote Jonathan Kehayias: "While there is a lot of good guidance available for how to best configure SQL Server, the specifics of any given implementation is a very big “It Depends…”". What this post should achieve is providing the reader with an awareness of features and options that influence a SQL Server instance's operation.

Friday, 15 May 2015

SQL Login and User for Performance Tuning

I've run into an interesting dilemma: as a new staff member in a security-conscious organisation, I can't really get any work done. There is, however, a lot of work a DBA can do in terms of analyzing SQL Server instances right from day one. There are lots of best-practice recommendations that can be made that are application or database-design agnostic. Alternatively, it is possible to investigate the database design without access to the underlying data itself. With all that in mind, I have put together some scripts that create the login/user permissions and roles that can give a SQL consultant useful, but security-minded, access to a SQL instance.

Sunday, 21 December 2014

Emergency Access to SQL Server

There are times when you need to gain access to SQL Server in an emergency. It doesn't happen very often, but it does happen. Actually, part of the problem is that it happens so rarely! We all get to grips with the various back-door commands required of us DBAs when we're training or taking a certification course. But after six months, even two or more years, it all gets a bit rusty.

Therefore, I decided to put together an emergency cheat-sheet. All of these processes are very well documented around the web. My purpose here is purely to list the necessary steps to get access to your server quickly, so I won't be explaining how things work. I will, however, list different options for getting the various steps done.
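As a taste of what the cheat-sheet covers, the outline below is a sketch of the most common route (starting the instance in single-user mode and granting yourself sysadmin); the exact details vary by SQL Server version, and the account name is an example:

```sql
-- Minimal sketch of the usual emergency-access route (details vary by version):
--
-- 1. Stop the SQL Server service, then restart it in single-user mode, e.g.:
--      net stop MSSQLSERVER
--      net start MSSQLSERVER /m
--    (or add -m as a startup parameter in SQL Server Configuration Manager)
--
-- 2. Connect as a member of the machine's local Administrators group, e.g.:
--      sqlcmd -S . -E
--
-- 3. Add your own login to the sysadmin role (example account name):
CREATE LOGIN [MYDOMAIN\MyAccount] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = 'MYDOMAIN\MyAccount',
                         @rolename = 'sysadmin';
--
-- 4. Remove the -m / /m startup parameter and restart the service normally.
```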

Wednesday, 7 August 2013

Defragging Indexes Without Losing Index Stats - Part 5

The stored procedure

/*********************************************************************************
UPDATE: Bug fixes and increased FILLFACTOR control!
*********************************************************************************/
The first 4 posts of this series covered the concepts of recording index usage statistics for performance tuning while continuing to maintain a database’s tables and indexes. They also presented the individual SQL scripts to perform the various parts of that process.  The most recent post put all of those scripts together into a single set of queries that could be run on a single database.
Most of us, however, have more than one database on a SQL Server instance. Therefore, deploying that last script on a large number of databases is a bit messy. Any modifications will be difficult to manage.
In this post I will present the stored procedure, along with a sample execute statement to run it. It is designed to be run centrally and will loop through all the databases on the instance (there is a parameter to include/exclude the system databases). I deployed the SP onto my Admin_DB, where I also store the tables that keep the index usage statistics and table maintenance history.
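The looping mechanism itself is nothing exotic; a stripped-down sketch of the pattern follows (the procedure name uspMaintainIndexes is a placeholder, not the post's actual name):

```sql
-- Minimal sketch: iterate over databases and EXEC a maintenance proc per database.
DECLARE @IncludeSystemDBs BIT;
SET @IncludeSystemDBs = 0;

DECLARE @DBName SYSNAME, @SQL NVARCHAR(MAX);

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM   sys.databases
    WHERE  state_desc = 'ONLINE'
           AND (@IncludeSystemDBs = 1
                OR name NOT IN ('master', 'model', 'msdb', 'tempdb'));

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @DBName;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- uspMaintainIndexes stands in for the post's stored procedure
    SET @SQL = N'EXEC Admin_DB.dbo.uspMaintainIndexes @DatabaseName = '
             + QUOTENAME(@DBName, '''') + N';';
    EXEC sys.sp_executesql @SQL;
    FETCH NEXT FROM db_cur INTO @DBName;
END
CLOSE db_cur;
DEALLOCATE db_cur;
```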

Tuesday, 4 June 2013

Defragging Indexes Without Losing Index Stats - Part 4

Generate UPDATE STATISTICS scripts

In the previous post of this series I provided the script that both determines the indexes that need attention and builds the individual ALTER INDEX scripts. So, now that the indexes have been tended to, it's time to do the same for table and index statistics.

The update statements created are based on the defrag type carried out on the table as well as a few thresholds to determine whether it is necessary to rebuild the statistics. Those thresholds are:
- (Time passed since last update: >3 days
- AND minimum row count of the table: >=1000 rows
- AND percentage change in number of rows: >20%, more or less)
- OR time passed since last update: >30 days
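Expressed as a predicate, the thresholds above amount to something like this (a sketch with example values; the full script derives the row counts and last-update date per index):

```sql
-- Minimal sketch: the threshold logic as a single boolean test.
-- @RowCountNow and @RowCountAtLastUpdate stand in for values the full script collects.
DECLARE @LastStatsUpdate DATETIME, @RowCountNow BIGINT, @RowCountAtLastUpdate BIGINT;
SET @LastStatsUpdate = DATEADD(dd, -5, GETDATE());   -- example values
SET @RowCountNow = 150000;
SET @RowCountAtLastUpdate = 100000;

IF (
     (    DATEDIFF(dd, @LastStatsUpdate, GETDATE()) > 3
      AND @RowCountNow >= 1000
      AND ABS(@RowCountNow - @RowCountAtLastUpdate) * 100.0
          / NULLIF(@RowCountAtLastUpdate, 0) > 20
     )
     OR DATEDIFF(dd, @LastStatsUpdate, GETDATE()) > 30
   )
    PRINT 'Statistics update needed';
```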

Script out the indexes in a database

I'm just about to begin a big performance improvement project. I'll be starting with an evaluation of indexes. This will involve synchronizing a development environment's indexes with those that are in live. I need to, however, keep a record of the existing indexes on that dev environment.

So I've written the code below. It will return a table with the object_id, table name, index name, index_id and a create index script. This script includes the drop and create statements for both primary keys and unique constraints. There are quite a few good scripts out in the wider web world, but I needed a few specific things so I just wrote my own.
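The full script follows in the post; the core of the approach is walking sys.indexes and sys.index_columns and concatenating the column lists. A heavily cut-down sketch of that part:

```sql
-- Minimal sketch: list each non-heap index with its key column list.
-- The full script additionally handles included columns, primary keys and
-- unique constraints, and builds the complete DROP/CREATE statements.
SELECT i.object_id,
       OBJECT_NAME(i.object_id) AS [TableName],
       i.name                   AS [IndexName],
       i.index_id,
       STUFF((SELECT ', ' + c.name
              FROM   sys.index_columns AS ic
                     INNER JOIN sys.columns AS c
                             ON c.object_id = ic.object_id
                            AND c.column_id = ic.column_id
              WHERE  ic.object_id = i.object_id
                     AND ic.index_id = i.index_id
                     AND ic.is_included_column = 0
              ORDER  BY ic.key_ordinal
              FOR XML PATH('')), 1, 2, '') AS [KeyColumns]
FROM   sys.indexes AS i
WHERE  i.index_id > 0                          -- skip heaps
       AND OBJECTPROPERTY(i.object_id, 'IsMSShipped') = 0
ORDER  BY [TableName], i.index_id;
```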

Thursday, 23 May 2013

Defragging Indexes Without Losing Index Stats - Part 3

In Part 1 and Part 2 of this series I discussed synchronizing index maintenance and index usage statistics logging as an important part of a DBA's performance tuning routine and the script for saving a database's index usage statistics. This part of the series will go into the query that both identifies the indexes that are fragmented and dynamically generates the defrag scripts.
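The query itself follows in the post; its backbone is sys.dm_db_index_physical_stats. A minimal sketch of that part (the 5%/30% cut-offs used here are the commonly cited rule of thumb, assumed for illustration):

```sql
-- Minimal sketch: choose a REORGANIZE or REBUILD per fragmented index.
SELECT OBJECT_NAME(ps.object_id) AS [TableName],
       i.name                    AS [IndexName],
       ps.avg_fragmentation_in_percent,
       CASE
           WHEN ps.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
           ELSE 'REORGANIZE'
       END AS [Action]
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
       INNER JOIN sys.indexes AS i
               ON i.object_id = ps.object_id
              AND i.index_id = ps.index_id
WHERE  ps.avg_fragmentation_in_percent > 5
       AND ps.index_id > 0                     -- skip heaps
       AND ps.page_count > 1000;               -- ignore tiny indexes
```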

Wednesday, 22 May 2013

Defragging Indexes Without Losing Index Stats - Part 2

In my last post, I introduced the concept of synchronizing index maintenance and index usage statistics logging as an important part of a DBA's performance tuning routine. In this post I will start delving into the individual scripts involved.

UPDATE: The full script (see link at end of post) has been both updated and tested (a bit). I've also augmented the script below to clean up the atblIndexUsageStats table.

As a recap, below are the main steps involved:
  1. Log Index usage stats
  2. Determine which indexes need defragging and generate scripts
  3. Generate update statistics scripts based on step 2
  4. Log defrag operation to maintenance history table
  5. Execute defrag and update stats scripts
  6. Cleanup logging tables (optional)

Sunday, 12 May 2013

Defragging Indexes Without Losing Index Stats - Part 1

Introduction

As part of my regular performance tuning and maintenance schedule, I rely heavily on queries that interrogate the DMV sys.dm_db_index_usage_stats. Based on this DMV, I can judge whether indexes should be removed, modified or left as they are (see my previous blog post on SQL Server DMVs). I also believe in the regular defragging of indexes in a database. I recently read a post from Brent Ozar challenging this idea, but if your servers don't have the RAM required for the caching his article suggests, and/or you'll never get the budget to upgrade, you'll need to keep reading.
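For context, the kind of interrogation I mean looks something like this (a minimal sketch; the series' actual logging script comes in the later parts):

```sql
-- Minimal sketch: read/write counts per index since the last instance restart.
-- Indexes with heavy updates but few or no reads are candidates for review.
SELECT OBJECT_NAME(us.object_id) AS [TableName],
       i.name                    AS [IndexName],
       us.user_seeks + us.user_scans + us.user_lookups AS [Reads],
       us.user_updates           AS [Writes],
       us.last_user_seek,
       us.last_user_update
FROM   sys.dm_db_index_usage_stats AS us
       INNER JOIN sys.indexes AS i
               ON i.object_id = us.object_id
              AND i.index_id = us.index_id
WHERE  us.database_id = DB_ID()
ORDER  BY [Writes] DESC, [Reads] ASC;
```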

Wednesday, 8 May 2013

Answer to the Partitioned Index Puzzle

A few weeks ago I posted a description of an error I encountered when creating a partitioned index on a SQL Server 2005 Standard Edition instance. It had me quite literally tearing my hair out. The error was a typically vague Microsoft one:

Msg 4436, Level 16, State 12, Line 1
UNION ALL view 'PartWindow.dbo.vwPartitionedView' is not updatable because a partitioning column was not found.

I spent quite a few hours staring at my SQL statements and re-reading documentation before the solution hit me.

If you haven't had a chance to read the post, please do before you skip down to the answer. It was, unfortunately, a total "duh!" moment. But at the same time quite easy to overlook.

Sunday, 21 April 2013

A Quick Partitioned View Puzzle

I needed to create a few partitions on a legacy SQL Server 2005 Standard Edition instance. As partitioning is not supported on Standard Edition, I needed to go the partitioned view route. In my research into how to implement them, I found some really good resources.
I'm not going to explain the basics of partitioning here, so you may want to read those resources if you're not familiar with it before continuing.

Tuesday, 16 April 2013

ISNULL vs. COALESCE

A friend of mine just asked me when COALESCE should be used. As I'm a DBA and don't do too much SQL development work, I never had a need to use COALESCE (or I never thought I had a need). Therefore, I was inspired to do a little digging. As there has been much written about the similarities and differences, I will use this post as a jumping-off point to a few good resources that shed light on the topic, though I'll highlight some of the points here as well.
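To give a flavour of the differences up front: ISNULL takes the data type of its first argument, while COALESCE uses data type precedence and accepts more than two arguments. A small example:

```sql
-- ISNULL truncates its result to the type of the first argument; COALESCE does not.
DECLARE @Short VARCHAR(4);
SET @Short = NULL;

SELECT ISNULL(@Short, 'longer text')   AS [IsNullResult],    -- 'long' (truncated to VARCHAR(4))
       COALESCE(@Short, 'longer text') AS [CoalesceResult];  -- 'longer text'

-- COALESCE also takes any number of arguments, returning the first non-NULL:
SELECT COALESCE(NULL, NULL, 'third') AS [FirstNonNull];      -- 'third'
```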

Thursday, 4 April 2013

Last SQL Server Instance Start Date

I was interested in finding when a new SQL Server instance I started working on was last restarted and came across this blog post.

Although the information was very good I felt the queries could be tweaked slightly and broken out into the variations needed for the different SQL Server editions.

I find that it is helpful to know when the database server was started because of the various Dynamic Management Views and Functions (DMVs) that I use. Many of these collate cumulative statistics beginning at a SQL Server start, which means they are cleared down when the instance is restarted. It is therefore helpful to know the uptime of the server in order to know how much data has been collected and whether it is representative of how the server is being used: i.e. one day's worth of statistics vs. one month's.

As the blog post mentioned above outlines, there are several ways of getting to this data and some of them work only in certain versions of SQL Server. Here they are in my modified form:
NB: I am ignoring SQL Server 2000

Method 1 - Check TempDB create date

SQL Server 2005+ 
SELECT [crdate] AS [LastStartupDate],
       DATEDIFF(dd, [crdate], GETDATE()) AS [Days since server start]
FROM   [sys].[sysdatabases] --backward-compatibility view in 2005+
WHERE  [name] = 'tempdb'
       AND [crdate] < GETDATE() - 0 --filter results to instances restarted greater than X days
Method 2 - Check the system DMV
SQL Server 2008+
SELECT [sqlserver_start_time] AS [LastStartupDate],
       DATEDIFF(dd, [sqlserver_start_time], GETDATE()) AS [Days since server start]
FROM   [sys].[dm_os_sys_info]

Method 3 - Interrogate the SQL Server error log
SQL Server 2005 - 2008
DECLARE @LogNum TINYINT
SET @LogNum = 0 --0=current 1=.1 etc
DECLARE @LogType TINYINT
SET @LogType = 1 --1=SQL 2=Agent
DECLARE @ProcessType VARCHAR(64)
SET @ProcessType = 'Server'
DECLARE @TextSearch VARCHAR(20)
SET @TextSearch = 'Server process ID is'
DECLARE @ErrLog AS TABLE([LogDate] DATETIME, [ProcessInfo] VARCHAR(64), [TEXT] VARCHAR(MAX))
INSERT INTO @ErrLog
EXEC sys.xp_readerrorlog @LogNum, @LogType, @ProcessType, @TextSearch

--grab the first occurrence and report back the timestamp as the last startup
SELECT MIN(LogDate) AS [LastStartupDate],
DATEDIFF(dd, MIN(LogDate),GETDATE()) AS [Days since server start]
FROM @ErrLog
WHERE LogDate < GETDATE() - 0 --filter results to instances restarted greater than X days
SQL Server 2008 - 2012
In SQL Server 2012, the extended stored procedure accepts only two parameters:
DECLARE @LogNum TINYINT = 0 --0=current 1=.1 etc
DECLARE @LogType TINYINT = 1 --1=SQL 2=Agent
DECLARE @ProcessType VARCHAR(64) = 'Server'
DECLARE @TextSearch VARCHAR(20) = 'Server process ID is'
DECLARE @ErrLog AS TABLE([LogDate] DATETIME, [ProcessInfo] VARCHAR(64), [TEXT] VARCHAR(MAX))
INSERT INTO @ErrLog
EXEC sys.xp_readerrorlog @LogNum, @LogType --, @ProcessType, @TextSearch

--grab the first occurrence and report back the timestamp as the last startup
SELECT MIN(LogDate) AS [LastStartupDate],
DATEDIFF(dd, MIN(LogDate),GETDATE()) AS [Days since server start]
FROM @ErrLog
WHERE [ProcessInfo] = @ProcessType AND [TEXT] LIKE '%' + @TextSearch + '%'
AND LogDate < GETDATE() - 0 --filter results to instances restarted greater than X days

Monday, 18 March 2013

SQL Server 2005+ Performance Tuning - Part 2

This series of posts aims to provide an introduction to resolving big and sudden performance degradation.
In my last post I covered the reasons for poor performance and the ways of pinpointing performance bottlenecks.
In this post I will detail some of the ways of resolving those performance bottlenecks.

Thursday, 14 March 2013

SQL Server 2005+ Performance Tuning - Part 1

This next series of posts aims to provide an introduction to resolving big and sudden performance degradation. I will also add a few tips for general performance related database best practice and tidying.




Tuesday, 12 March 2013

Remove a Cursor From a Stored Procedure - Part 2

Why is it so slow?!?!?
In my last post, I used a stored procedure from a database I manage to illustrate a real-world example of converting a cursor into set-based SQL. In this post I will look into the mechanics behind the scenes and explain at least one of the reasons why it is generally unwise to use cursors when a set-based query will provide the same results.