Monday, 1 February 2016

Restarting a service with PowerShell

We use Redgate's SQL Monitor to keep an eye on our mission-critical servers. I like the product: it is reasonably user-friendly, reliable, and it helps my team monitor our servers efficiently. I've used other products before and I think SQL Monitor compares favourably. No, I'm not paid by Redgate!

With that in mind, my team had been struggling with an ongoing issue: every Monday morning one of the SQL Monitor processes would run amok and max out the CPU. This was strange, as the product is generally pretty stable. After some investigation, we realised that the security team had started performing regular vulnerability scans over the weekends. Part of the scan involves intentional failed login attempts. SQL Monitor doesn't seem to handle this well, and the web server process, specifically "xsp4", goes haywire and maxes out the CPU.

Our small issue (according to management) didn't warrant a reworking of the new security procedures - fair enough, I guess. So, it was decided that we should restart the service that controls the xsp4 process, MonitorWebServiceNetwork, every Monday morning. The best way to do this seemed to be a PowerShell script executed through the Windows Task Scheduler. I'm new to PowerShell, so I had to scour the web to come up with a solution.

I knew that I didn't want to restart the service unnecessarily, so I needed a script that established whether CPU usage for the particular process was high or not. I found this great post: PowerShell: Get CPU Usage for a Process Using Get-Counter by Kris Powell. In it he shares a script that outputs the CPU usage of a process (or processes, as he uses a wildcard for the process name).
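
Roughly, his approach looks like the sketch below. This is my reconstruction rather than his exact script (the use of "xsp4" as the process name is mine), so check his post for the real thing:

# A sketch in the spirit of Kris Powell's script, not his exact code.
# The process name here is mine; the wildcard means multiple instances will match.
$ProcessName = "xsp4"

# Number of logical cores, needed to turn the raw counter into a usable percentage
$CpuCores = (Get-WmiObject -Class Win32_ComputerSystem).NumberOfLogicalProcessors

# Grab the "% Processor Time" counter samples for the matching process(es)
$Samples = (Get-Counter "\Process($ProcessName*)\% Processor Time").CounterSamples

# Output the instance name and the CPU usage per process
$Samples | Select-Object InstanceName,
    @{Name = "CPU %"; Expression = {[Decimal]::Round($_.CookedValue / $CpuCores, 2)}}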


This script is great. The only problem for me is that it returns the CPU usage as formatted output rather than a decimal value I can compare to my threshold (50% CPU usage). Incidentally, I also didn't need to return the InstanceName. After much digging around, I modified the script, resulting in the following:
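
The modified version looks something like this sketch (the variable names follow the explanation below; the counter path is the same as in Kris Powell's script):

# Same counter as before, but keep just the CookedValue and make it a number I can compare
$ProcessName = "xsp4"

$CpuCores = (Get-WmiObject -Class Win32_ComputerSystem).NumberOfLogicalProcessors

# -ExpandProperty returns only the CookedValue; declaring the variable as [decimal] converts it
[decimal]$cpu = (Get-Counter "\Process($ProcessName)\% Processor Time").CounterSamples |
    Select-Object -ExpandProperty CookedValue

# Actual CPU usage for the process, spread across all cores
$Usage = [Decimal]::Round($cpu / $CpuCores, 2)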


Since I didn't need to print any of the counter values, I completely removed the $Samples select statement. Instead I replaced the $Samples variable with one called $cpu (just to make it more readable) and selected just the CookedValue property using Select-Object's -ExpandProperty parameter. To be sure I have a number I can compare, I convert the value by declaring the variable as a decimal. Finally, I calculate the actual CPU usage by dividing the CookedValue by the number of CPU cores and store the result in another variable, $Usage. Kris Powell explains how he gets all this info in his post, so I won't repeat it.

Once I have this value I can use it in an if statement and restart the service if it surpasses my threshold:
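
Something along these lines (the 50 here is my hard-coded threshold at this stage):

# Restart the owning service if the process is using more than 50% CPU
$ServiceName = "MonitorWebServiceNetwork"

if ($Usage -gt 50)
{
    Restart-Service -Name $ServiceName
}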


At this point I've added one more variable, $ServiceName. This is because the process that is maxing out the CPU is controlled by a service and, in my case, the names are different: if "xsp4" is using too much CPU, I need to restart the "MonitorWebServiceNetwork" service. To make it fully flexible I've added one last parameter, $threshold. The final script allows all of these to be passed as arguments so you can run it as a script file:
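
Put together, the script I saved as CPU_Usage_Restart_Service.ps1 looks roughly like this sketch (the param block is what lets the values be passed in as arguments):

param(
    [string]$ProcessName,   # e.g. 'xsp4'
    [string]$ServiceName,   # e.g. 'MonitorWebServiceNetwork'
    [decimal]$threshold     # e.g. 50 (% CPU)
)

# Number of logical cores on the host
$CpuCores = (Get-WmiObject -Class Win32_ComputerSystem).NumberOfLogicalProcessors

# Current "% Processor Time" for the process, converted to a decimal
[decimal]$cpu = (Get-Counter "\Process($ProcessName)\% Processor Time").CounterSamples |
    Select-Object -ExpandProperty CookedValue

# Actual CPU usage across all cores
$Usage = [Decimal]::Round($cpu / $CpuCores, 2)

# Restart the service only when usage exceeds the threshold
if ($Usage -gt $threshold)
{
    Restart-Service -Name $ServiceName
}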


The last hurdle I ran into was being able to execute this script from the Task Scheduler. This was resolved by the following blog post: Schedule PowerShell Scripts that Require Input Values by Ed Wilson, the Microsoft Scripting Guy. There's also a good explanation of scheduling PowerShell scripts here: How to Schedule a PowerShell Script by Dmitry Sotnikov. The Action in my scheduled task is defined as follows:

Action: Start a program
Program/script: PowerShell.exe
Add arguments (optional): -Command "& 'D:\Red Gate\CPU_Usage_Restart_Service.ps1' -ProcessName 'xsp4' -ServiceName 'MonitorWebServiceNetwork' -threshold 50"



Finally, I needed to schedule the task. Unfortunately, the times at which the vulnerability scan would trigger the problem varied. The scan is run against a large estate of servers and there was no guarantee as to how long it would take each time. In the end I decided to schedule the task to run every 10 minutes over a several-hour window on Monday mornings.

As always, I hope you've found this post helpful.

Monday, 7 December 2015

Databases With No Recent Backups

I've written the script below to easily identify databases that have not had backups taken within a certain amount of time. This could be useful if you need to build your own monitoring. However, even if you use a third-party monitoring tool, as I do, it is helpful to run this query on the server that raised an alert.

Many of the servers I manage have a large number of databases. The alerts I receive for overdue backups are database-specific. But, in general, if one database is overdue, chances are something has happened on the server, so this query can give me a quick overview.

At the top of the query there are three variables:

  • DECLARE @LogAge INT --Hours
  • DECLARE @DiffAge INT --Days
  • DECLARE @FullAge INT --Days
Set these according to when you consider a database backup to be out of date. For example, if you take daily full backups you'll want to set @FullAge to -2 (or even -1). The query will then find any database whose latest full backup was taken more than two days before the time the query is run.
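
The core of the full-backup check is a query against msdb.dbo.backupset; a simplified sketch is below (the differential and log checks follow the same pattern using @DiffAge and @LogAge):

-- Thresholds: negative values, as described above
DECLARE @LogAge  INT = -1;  -- Hours
DECLARE @DiffAge INT = -1;  -- Days
DECLARE @FullAge INT = -2;  -- Days

-- Databases whose latest full backup is older than the threshold (or missing entirely)
SELECT  d.name,
        MAX(b.backup_finish_date) AS LastFullBackup
FROM    sys.databases AS d
        LEFT JOIN msdb.dbo.backupset AS b
               ON b.database_name = d.name
              AND b.type = 'D'             -- 'D' = full database backup
WHERE   d.name <> 'tempdb'
GROUP BY d.name
HAVING  MAX(b.backup_finish_date) < DATEADD(DAY, @FullAge, GETDATE())
        OR MAX(b.backup_finish_date) IS NULL;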

As always, I hope you find this helpful.

Wednesday, 25 November 2015

Script to Alter File Growth

I recently inherited a large estate of SQL Servers. Over the years, before I started, it grew organically and was managed by an insufficient number of accidental DBAs. As a result, many SQL Server instances were built and left with their default settings.

One such setting is file growth, which tends to be set either to a growth rate of 10% or to 1MB. Given the number of databases that can be on a single server, I wrote a little script to aid me in making the changes.

I realise that every server and database has different requirements. I am using this script to get my databases and servers to a basic level of "best practice" and to avoid any performance issues that may arise.

The script below sets the database file growth as follows:
  • Finds databases with files set to grow either by a percentage or by 1MB or less
  • Calculates the new file growth value as follows:
    • 100MB for files smaller than 1GB
    • 1024MB for files larger than 10GB
    • 10% of the file's size (rounded to the nearest 100MB) for files between 1GB and 10GB
Run the script, then copy and paste the results into another SSMS window. It is best to review the resulting statements to be sure they will do what you expect.
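
As a rough idea of what it does, here is a cut-down sketch of the approach (not the full script), building ALTER DATABASE statements from sys.master_files:

-- Build ALTER DATABASE ... MODIFY FILE statements for files growing by a
-- percentage or by 1MB or less (size and growth are stored in 8KB pages).
SELECT 'ALTER DATABASE [' + DB_NAME(mf.database_id) + '] MODIFY FILE (NAME = N''' + mf.name + ''', FILEGROWTH = '
       + CASE
             WHEN mf.size * 8 / 1024 < 1024  THEN '100'    -- files under 1GB
             WHEN mf.size * 8 / 1024 > 10240 THEN '1024'   -- files over 10GB
             ELSE CAST(CAST(ROUND((mf.size * 8 / 1024) * 0.10, -2) AS INT) AS VARCHAR(10))  -- ~10%, nearest 100
         END + 'MB);' AS AlterStatement
FROM   sys.master_files AS mf
WHERE  mf.is_percent_growth = 1
   OR (mf.is_percent_growth = 0 AND mf.growth > 0 AND mf.growth * 8 <= 1024);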

As with all my scripts, this comes with no guarantees. Any script should be tested before being put into a production environment.

Friday, 11 September 2015

Finding When A SQL Agent Job Ran

A quick post for today since I haven't added anything in quite a while.

Recently, I was investigating a deadlock that was occurring, like clockwork, on a particular server. A trace revealed that the offending query was being executed by the SQL Server Agent account. The host machine that executed the query involved in the deadlock is our SSIS host, so there are loads of jobs with a dizzying array of schedules to weed through.
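
A query against the job history in msdb gives the start time of every run, which narrows things down considerably; something along these lines (msdb.dbo.agent_datetime is the undocumented helper in msdb that converts the integer run_date and run_time columns):

-- Most recent job runs first; filter on the time window around the deadlock
SELECT  j.name        AS JobName,
        h.step_name,
        msdb.dbo.agent_datetime(h.run_date, h.run_time) AS RunStarted,
        h.run_duration -- HHMMSS format
FROM    msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j
          ON j.job_id = h.job_id
WHERE   h.step_id = 0   -- job outcome rows only
ORDER BY RunStarted DESC;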

Friday, 24 July 2015

SQL Server Optimization

This post aims to aggregate a lot of the best-practice and otherwise recommended configuration for SQL Server performance. It is meant as a summary or quick-reference sheet. All of these concepts are discussed at length all over the internet, so I will not spend much time explaining them.

As with any recommendations of this sort, please use them carefully and test before deploying into a production environment. Also, depending on your specific environment's configuration, not all of the below may be advisable or bring clear benefits. To quote Jonathan Kehayias: "While there is a lot of good guidance available for how to best configure SQL Server, the specifics of any given implementation is a very big 'It Depends…'". What this post should achieve is to give the reader an awareness of the features and options that influence a SQL Server instance's operation.

Tuesday, 26 May 2015

Add an Operator to All SQL Server Agent Alerts

I was investigating a new server I'd been assigned, looking for gaps in configuration using the trusty sp_Blitz tool. The organisation that owns this server instance is small and doesn't use third-party monitoring. I decided to add some basic alerting as recommended by Brent Ozar and Co. They've kindly provided a script to do this: Blitz Result: No SQL Server Agent Alerts Configured.

A little later on I realised that I needed to add an additional operator to these alerts and was loath to step through them one at a time and tick the appropriate box. So I knocked out a quick and simple script (I've included two versions: one that adds the operator to all alerts and one that allows you to select a subset); a rough outline of the first version is shown below. I hope it is helpful.
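
The "all alerts" version is essentially a loop over msdb.dbo.sysalerts calling sp_add_notification; a sketch of the idea (the operator name is a placeholder, and sp_add_notification will complain about any alert that already notifies that operator):

DECLARE @AlertName sysname;

DECLARE alert_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM msdb.dbo.sysalerts;

OPEN alert_cursor;
FETCH NEXT FROM alert_cursor INTO @AlertName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Attach the operator to the alert with e-mail notification
    EXEC msdb.dbo.sp_add_notification
         @alert_name          = @AlertName,
         @operator_name       = N'DBA Team',   -- placeholder operator name
         @notification_method = 1;             -- 1 = e-mail

    FETCH NEXT FROM alert_cursor INTO @AlertName;
END

CLOSE alert_cursor;
DEALLOCATE alert_cursor;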

Tuesday, 19 May 2015

Study Materials for MCSE Microsoft Exams 70-457, 70-458, 70-459

/***UPDATE***/
I passed the 70-457! Now studying for 70-458...
I passed the 70-458! Now studying for 70-459, booked for Jan 29, 2016...
/***UPDATE***/

I am studying for the MCSE certification and will be taking the upgrade exams from MCITP SQL Server 2008: 70-457, 70-458 and 70-459. After trawling the web for study materials I stumbled upon a series of posts on coffeeandsql.com by Carla Abanes. It is a great resource, touching on all the topics covered by the 70-457 exam. I've collated the links below:

Review Materials for 70-458

Exam 70-458 Transition Your MCTS on SQL Server 2008 to MCSA: SQL Server 2012

Review Materials for 70-457

70-457 Reviewer #01, Rank Functions
70-457 Reviewer #02, What is Columnstore Index?
70-457 Reviewer #03, Planning an Installation of SQL Server 2012
70-457 Reviewer #04, Implement a Migration Strategy
70-457 Reviewer #05, Configuring SQL Server 2012 Components
70-457 Reviewer #06, Configuring and Managing Database in SQL Server 2012
70-457 Reviewer #07, Resolving Concurrency Problems in SQL Server 2012
70-457 Reviewer #08, Implementing Clustered Instance in SQL Server 2012
70-457 Reviewer #09, Auditing SQL Server 2012
70-457 Reviewer #10 Managing SQL Server Agent
70-457 Reviewer #11 Configuring and Maintaining a Backup Strategy
70-457 Reviewer #12 Restoring SQL Server Databases
70-457 Reviewer #13 Managing SQL Server Logins
70-457 Reviewer #14 Database and Application Roles
70-457 Reviewer #15 Implement Database Mirroring
70-457 Reviewer #16 Implement AlwaysOn in SQL Server 2012
70-457 Reviewer #17 Database Replication in SQL Server 2012

Friday, 15 May 2015

SQL Login and User for Performance Tuning

I've run into an interesting dilemma: as a new staff member in a security-conscious organisation, I can't really get any work done. There is, however, a lot of work a DBA can do in analysing SQL Server instances right from day one. There are plenty of best-practice recommendations that can be made which are agnostic of the application or database design. Alternatively, it can be possible to investigate the database design without access to the underlying data itself. With all that in mind, I have put together some scripts that create a login/user with permissions and roles that give a SQL consultant useful, but security-minded, access to a SQL instance.
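
As a flavour of what that looks like, here's a minimal sketch of the sort of permissions involved (the login name, database name and password below are placeholders):

-- Server-level: a login that can see DMVs and object definitions, but no table data
USE [master];
CREATE LOGIN [PerfTuning] WITH PASSWORD = N'<StrongPasswordHere>';
GRANT VIEW SERVER STATE TO [PerfTuning];
GRANT VIEW ANY DEFINITION TO [PerfTuning];

-- Database-level: map a user and allow metadata/schema inspection only
USE [YourDatabase];
CREATE USER [PerfTuning] FOR LOGIN [PerfTuning];
GRANT VIEW DATABASE STATE TO [PerfTuning];
GRANT VIEW DEFINITION TO [PerfTuning];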

Thursday, 14 May 2015

Database Backup Monitoring

Not everyone has access to off-the-shelf monitoring tools. One of my clients is a very small shop with only a few databases and two database servers, but that doesn't mean they don't need monitoring. Over the past few weeks I've been implementing various system alerts, and now I've deployed the backup monitoring stored procedure below.
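
The general idea is to check msdb.dbo.backupset for overdue full backups and e-mail the results with sp_send_dbmail. A minimal sketch of that idea, not the exact procedure (the procedure name, mail profile and recipient are placeholders):

CREATE PROCEDURE dbo.usp_BackupCheck   -- placeholder name
    @FullAge INT = 2                   -- days since the last full backup
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @body NVARCHAR(MAX);

    -- Build a list of databases with no recent full backup
    SELECT @body = COALESCE(@body + CHAR(13) + CHAR(10), '') + d.name
    FROM   sys.databases AS d
    WHERE  d.name <> 'tempdb'
      AND NOT EXISTS (SELECT 1
                      FROM   msdb.dbo.backupset AS b
                      WHERE  b.database_name = d.name
                        AND  b.type = 'D'
                        AND  b.backup_finish_date > DATEADD(DAY, -@FullAge, GETDATE()));

    IF @body IS NOT NULL
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = N'DBA Mail Profile',   -- placeholder
             @recipients   = N'dba@example.com',    -- placeholder
             @subject      = N'Databases with overdue full backups',
             @body         = @body;
END;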

Thursday, 30 April 2015

Tracking Query Plan Changes

I was reading a blog post from Brent Ozar (SQL Server Query Store), which detailed a newly announced SQL Server feature that aims to store a history of cached execution plans. The main purpose of such a feature, it seems, is to aid in performance troubleshooting and tuning. If something has changed recently and a query or stored procedure begins behaving badly, there will be another tool in the DBA's toolbelt to help find out what happened. Beyond that, I'll let others explain it further.