Friday, 24 July 2015

SQL Server Optimization

This post aims to aggregate many of the best practices and otherwise recommended configurations for SQL Server performance. It is meant as a summary or quick-reference sheet. All of these concepts are discussed at length all over the internet, so I will not spend much time explaining them.

As with any recommendations of this sort, please use them carefully and test before deploying into a production environment. Also, depending on your specific environment's configuration, not all of the below may be advisable or bring about clear benefits. To quote Jonathan Kehayias: "While there is a lot of good guidance available for how to best configure SQL Server, the specifics of any given implementation is a very big “It Depends…”" What this post should achieve is to provide the reader with an awareness of the features and options that influence a SQL Server instance's operation.

OS Level Settings

Instant File Initialization (quick win)

Ensure servers are set up to allow SQL Server to initialize data files instantaneously. From MSDN:
In SQL Server, data files can be initialized instantaneously. This allows for fast execution of the previously mentioned file operations. Instant file initialization reclaims used disk space without filling that space with zeros. Instead, disk content is overwritten as new data is written to the files. Log files cannot be initialized instantaneously.

Instant file initialization is available only on Microsoft Windows XP Professional or Windows Server 2003 or later versions. Instant file initialization is only available if the SQL Server (MSSQLSERVER) service account has been granted SE_MANAGE_VOLUME_NAME. Members of the Windows Administrator group have this right and can grant it to other users by adding them to the Perform Volume Maintenance Tasks security policy. For more information about assigning user rights, see the Windows documentation.

CAVEAT: Instant file initialization is not available when TDE is enabled.
As the above paragraph mentions, log files are not affected. In some environments, security policies may be in place that prevent granting the SQL Server service account this privilege.

File Indexing (quick win)

Disable Windows file indexing on all drives; on drives hosting database files it adds I/O and CPU overhead for no benefit.

SQL Server Level Settings

Max Memory 

Limiting the maximum memory that SQL Server will use is important. Because SQL Server claims RAM and does not release it (at least until a restart), you want to be sure that when SQL Server experiences heavy load it never starves the OS and other critical components of the memory they need to function properly. Jonathan Kehayias explains it all very well.
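Setting the cap is a one-line server configuration change. A sketch, with a hypothetical value for a server with 32 GB of RAM where roughly 4 GB is reserved for the OS and other processes:

```sql
-- 'max server memory (MB)' is an advanced option, so expose those first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Hypothetical cap: 28 GB on a 32 GB server. Pick a value that leaves
-- enough headroom for the OS and anything else running on the box.
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```

The change takes effect immediately; no restart is required.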

Trace Flags (quick win)

DBCC TRACEON (1118, -1) – Improves page allocation efficiency for TempDB
DBCC TRACEON (1117, -1) – Forces all files in a filegroup to grow in sync
DBCC TRACEON (3226, -1) – Suppresses the error-log entries written for every successful backup operation

The above-mentioned trace flags should be added to the startup parameters of the SQL Server service – or enabled from a stored procedure that is marked to run ‘on startup’.
Trace Flag 1117 needs to be used carefully. It is a server/instance wide setting so all databases on an instance will be affected. Additionally, if a database uses table partitioning over multiple filegroups, this trace flag will cause synchronized file growth on all files. This could cause unused/archived partitions to grow in line with new/active partitions.
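To apply the flags without waiting for a restart, they can also be turned on for the running instance and then verified; a sketch:

```sql
-- Enable the three flags globally for the running instance. This does
-- not survive a restart, so also add them as startup parameters
-- (-T1118;-T1117;-T3226) in SQL Server Configuration Manager.
DBCC TRACEON (1118, 1117, 3226, -1);

-- Confirm which trace flags are currently active globally.
DBCC TRACESTATUS (-1);
```

Both commands require sysadmin membership.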

Max Degree of Parallelism (quick win)

This setting needs to be evaluated on a server-by-server basis depending on the server CPU configuration. This setting comes into its own when a server has a larger number of CPU cores, i.e. 8 or more. The reason for setting a limit is to prevent any single parallel process/query from using all the cores thereby preventing others from running. 
From Microsoft Support website:
Use the following guidelines when you configure the MAXDOP value for SQL Server 2005 and later versions:

  • Server with a single NUMA node, fewer than 8 logical processors: keep MAXDOP at or below the number of logical processors
  • Server with a single NUMA node, more than 8 logical processors: keep MAXDOP at 8
  • Server with multiple NUMA nodes, fewer than 8 logical processors per NUMA node: keep MAXDOP at or below the number of logical processors per NUMA node
  • Server with multiple NUMA nodes, more than 8 logical processors per NUMA node: keep MAXDOP at 8

The above table is purely a guideline. MAXDOP settings are environment- and workload-specific. For example, on servers with 8 or fewer logical cores, you may as well leave MAXDOP at 0, since the recommended setting is to keep MAXDOP at or below the number of logical cores. And as for limiting a parallel query's access to all the cores, this is valid in a busy OLTP-style environment. But if the server is a data warehouse running ETL procedures that do not compete with each other, why restrict them to only a few cores?
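Once you have settled on a value, applying it is straightforward. A sketch for a hypothetical server with a single NUMA node and 16 logical processors, where the guideline above caps MAXDOP at 8:

```sql
-- 'max degree of parallelism' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Hypothetical value per the guideline: single NUMA node,
-- 16 logical processors -> MAXDOP 8.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```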

Cost threshold for parallelism (quick win)

This setting should be reviewed for each server. It sets the threshold a query’s estimated cost must exceed before the query is considered for parallel execution (i.e. using multiple cores in parallel for a single query).
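The setting is changed the same way as MAXDOP. The value below is a hypothetical starting point, not a recommendation; the default of 5 dates back to much older hardware and is generally considered too low today, but the right number depends on your plan cache and workload:

```sql
-- 'cost threshold for parallelism' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Hypothetical starting point; review the estimated costs of your own
-- expensive queries before settling on a value.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;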

Optimize for Ad-hoc queries setting (quick win)

Evaluate servers for plan cache usage. If there are many cached queries that have been run only once, it may be a good idea to enable this setting. The query below may help in making this evaluation:
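A sketch of such a query against sys.dm_exec_cached_plans, counting single-use ad-hoc plans and the memory they occupy; if they make up a large share of the cache, the setting is probably worth enabling:

```sql
-- How many ad-hoc plans have been used exactly once, and how much
-- plan cache memory are they holding?
SELECT COUNT(*)                                         AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS single_use_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype = 'Adhoc';

-- If the numbers above are large relative to the total cache,
-- consider enabling the setting (an advanced option):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

With the setting on, a plan stub is cached on first execution and the full plan only on the second, which keeps one-off queries from bloating the cache.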

Database Level Settings

Forced Parameterization (quick win)

Many applications pass their SQL queries as literal strings to the server. This can bloat the plan cache, which is bad both for server memory management and for the query optimizer's ability to choose appropriate plans. You can change this setting in the database options. Brent Ozar explains it well.
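It can also be changed with a one-line ALTER DATABASE; the database name below is hypothetical:

```sql
-- Switch the database from SIMPLE (the default) to FORCED
-- parameterization, so literal values in queries are replaced with
-- parameters and their plans can be reused.
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;
```

Test carefully: forced parameterization helps plan reuse but can hurt queries whose ideal plan depends on the literal values.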

Database Configuration/Design


Tempdb

The ideal configuration for Tempdb is as follows:
  • Data files are placed on their own disk array (RAID 10 equivalent if possible)
  • The log file is placed on its own disk array (RAID 1 is sufficient)
  • Use multiple data files of the same size, based on the number of logical cores (commonly capped at eight).
  • Pre-grow the data and log files, i.e. create the files at their full intended size and disable auto-growth.
Tempdb is a complex topic, and the above suggestions are only a good start; it is worth reading up on how it works before making changes.
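The file layout above can be set up with ALTER DATABASE. A sketch for a hypothetical server with four logical cores; the drive letters, paths, and sizes are placeholders:

```sql
-- Resize the existing data and log files to their full intended size
-- and disable auto-growth (FILEGROWTH = 0).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2048MB, FILEGROWTH = 0);

-- Add three more equally sized data files, one per remaining logical
-- core, on the dedicated tempdb array (hypothetical T: drive).
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4096MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 4096MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 4096MB, FILEGROWTH = 0);
```

A restart is needed before moved or resized tempdb files take full effect, since tempdb is recreated at startup.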


Spread databases (especially large ones) over multiple filegroups. The advantages of this are severalfold; the main two are:
  1. Performance – I/O can be spread over multiple files. In addition, based on how many logical processors you have and the I/O profile of the tables in the filegroup, consider configuring filegroups with more than one file.
  2. RTO improvement – if the tables are distributed over filegroups based on their usage, RTO can be improved by restoring the crucial tables first, getting the users/application up and running before the entire database is restored.
As an example: Database X has the following table/filegroup configuration:
  • PRIMARY filegroup:
    • System/Merge tables on PRIMARY filegroup (this is not definable by the user)
  • FileGroup1
    • Tables crucial for application function (i.e. logging in, UI, data entry, etc.)
  • Filegroup2, Filegroup3, … FilegroupN
    • Tables used for reporting/data archive (i.e. not needed for basic application functions)
In this scenario, if the database needs to be restored, users can log in to the application after restoring only the PRIMARY and FileGroup1 filegroups. Ideally, these will not contain very large tables. The other filegroups can then be restored while users are already using the system. They may not be able to run some reports or view archive data until those filegroups are done restoring, but basic usage will be available.
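This scenario is served by SQL Server's piecemeal restore. A sketch for the hypothetical Database X above; backup file names and paths are placeholders, and restoring the remaining filegroups while the database is online requires Enterprise Edition:

```sql
-- Stage 1: bring back only the critical filegroups.
RESTORE DATABASE DatabaseX
    FILEGROUP = 'PRIMARY', FILEGROUP = 'FileGroup1'
    FROM DISK = 'X:\Backups\DatabaseX_full.bak'
    WITH PARTIAL, NORECOVERY;

-- Roll forward and bring the restored filegroups online.
RESTORE LOG DatabaseX
    FROM DISK = 'X:\Backups\DatabaseX_log.trn'
    WITH RECOVERY;

-- Stage 2: users are already working; restore the remaining
-- filegroups in the background.
RESTORE DATABASE DatabaseX
    FILEGROUP = 'Filegroup2', FILEGROUP = 'Filegroup3'
    FROM DISK = 'X:\Backups\DatabaseX_full.bak'
    WITH RECOVERY;
```

Until stage 2 completes, queries touching tables in the offline filegroups will fail, which matches the trade-off described above.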

I hope this post was helpful.