PASS Summit 2013 Summary

The PASS Summit 2013 was held in Charlotte, North Carolina, which is a big change from the usual location in Seattle. The new location seemed to attract an unusually high number of first-time attendees, which I think is a good thing. I heard fewer complaints about travel time and time away from work from people on the east coast and in the southeast United States, and a decent number of the same complaints from people in other parts of the country. Honestly, unless the PASS Summit is held in your home town, you are going to have to travel, so there is really no way to satisfy everyone regarding a location. Personally, I like Seattle, I know my way around the downtown area pretty well, and it is easier for me to fly there, so I am perfectly happy if the Summit is in Seattle most years. Since I live near Denver, it is not really that hard for me to travel nearly anywhere in the United States.

I had a pre-con session called Scaling SQL Server 2012 on Monday, which went quite well (even though Paul Randal fired me on Twitter). Teaching/speaking all day by yourself is a lot more tiring than you might think, especially when you have a room full of smart SQL Server people in your session.

On Wednesday, I got to present Professor DMV in one of the two large ballrooms (which was also streamed on PASStv and recorded). It was a lot of fun to present to such a large audience, and I got a lot of questions during and after the presentation. There was also a lot of live commentary on Twitter while I was speaking. I think most speakers appreciate it when you give live Twitter commentary and feedback while they are speaking (even if they read it later). I certainly appreciate it!

On Friday, I got to present Storage Fundamentals for the DBA in one of the smaller (300 seat) rooms. I was pretty surprised to see people lined up, waiting to get into this session. That has never happened to me at the PASS Summit. I am more used to seeing people lined up to get in to see speakers like Paul Randal or Kimberly Tripp. Later on Friday, I watched a very good session on virtualization performance from David Klee.

I think the people behind the PASS Summit 2013 did a great job of planning, organizing and running the event. This includes the PASS board members, the PASS employees, all of the PASS volunteers, and all of the other people who worked so hard to make the event a smoothly running success. I have a lot of respect for the people that do all of the work to make the PASS Summit possible each year.

It was great to see so many people that I mostly talk to online, and to meet a lot of new people in person. I had a lot of people come up to me throughout the week and thank me for the work I do on my DMV Diagnostic Information Queries, which was very gratifying. All of the demo scripts from all of the SQLskills speakers (Paul, Kimberly, Erin, Jon, and myself) are posted here.

Performance Related Fixes in Post-SQL Server 2012 SP1 Builds

There have already been six Cumulative Updates (CU) for the Service Pack 1 branch of SQL Server 2012. There have been a fairly high number of hotfixes in every one of these early Cumulative Updates, as more people are using SQL Server 2012 over time. SQL Server 2012 SP1 has all of the fixes through SQL Server 2012 RTM CU2. Especially if you are getting ready to migrate to SQL Server 2012 from an earlier version, I think you should start out with the latest SQL Server 2012 Service Pack and Cumulative Update. Right now, that means Service Pack 1, CU6 (Build 11.0.3381).
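If you are not sure which build one of your instances is currently running, you can check it directly with SERVERPROPERTY; this is a quick sketch:

```sql
-- Check the current build, service pack level, and edition of an instance
SELECT SERVERPROPERTY('ProductVersion') AS [Product Version], -- e.g. 11.0.3381
       SERVERPROPERTY('ProductLevel') AS [Product Level],     -- e.g. SP1
       SERVERPROPERTY('Edition') AS [Edition];
```

You can then compare the Product Version value to the build numbers in the table below.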

This table shows the SP1 CU builds that have been released so far.

Build Description Release Date
11.0.3000 SP1 RTM November 7, 2012
11.0.3321 SP1 CU1 November 20, 2012
11.0.3339 SP1 CU2 January 21, 2013
11.0.3349 SP1 CU3 March 18, 2013
11.0.3368 SP1 CU4 May 30, 2013
11.0.3373 SP1 CU5 July 15, 2013
11.0.3381 SP1 CU6 September 16, 2013

Table 1: SQL Server 2012 SP1 CU Builds

You can follow the KB article link below to see all of the CU builds for the SQL Server 2012 Service Pack 1 branch.

The SQL Server 2012 builds that were released after SQL Server 2012 Service Pack 1 was released

Like I did in my previous post, I decided to scan the hotfix list for all of the Cumulative Updates in the SP1 branch, looking for performance and general reliability-related fixes for the SQL Server Database Engine. I came up with the list below, but this is completely arbitrary on my part. You may come up with a completely different list, based on what SQL Server 2012 features you are using.

Here are the fixes in the RTM branch, before Service Pack 1 was released:

 

SQL Server 2012 RTM Cumulative Update 1 (Build 11.0.2316), 65 total public hotfixes

FIX: Low CPU spikes at a set interval even without user activity in SQL Server 2008 R2 or in SQL Server 2012

FIX: Slow performance when an AFTER trigger runs on a partitioned table in SQL Server 2008 R2 or in SQL Server 2012

FIX: It takes a long time to restore a database in SQL Server 2008 R2 or in SQL Server 2008 or in SQL Server 2012

FIX: Poor performance when you run a query that contains correlated AND predicates in SQL Server 2008 or in SQL Server 2008 R2 or in SQL Server 2012

FIX: Slow performance occurs in SQL Server 2008 R2 or in SQL Server 2012 if high CPU usage is observed with contention over the QUERY_EXEC_STATS spinlock

FIX: Out-of-memory error when you run SQL Server 2012 on a computer that uses NUMA

SQL Server 2012 RTM Cumulative Update 2 (Build 11.0.2325), 37 total public hotfixes

FIX: An access violation occurs intermittently when you run a query against a table that has a columnstore index in SQL Server 2012

SQL Server 2012 RTM Cumulative Update 3 (Build 11.0.2332), 35 total public hotfixes

FIX: Incorrect results when you run a parallel query that uses a columnstore index in SQL Server 2012

 

Here are the fixes in the Service Pack 1 branch:

 

SQL Server 2012 SP1 Cumulative Update 1 (Build 11.0.3321), 44 total public hotfixes

FIX: “out of memory” error when you use ODBC with SQL Server 2012, SQL Server 2008 R2 or SQL Server 2008

FIX: Performance of a SELECT statement that contains a LIKE operator and an ESCAPE clause is low in SQL Server 2008 R2 or in SQL Server 2012

FIX: SQL Server 2012, SQL Server 2008 R2 or SQL Server 2008 stops responding and a “Non-yielding Scheduler” error is logged

FIX: Slow performance or deadlock when you restore a database and execute statements at the same time in SQL Server 2012

FIX: Worker threads do not wake up immediately when multiple I/O-intensive tasks are running at the same time in SQL Server 2012

FIX: “Process appears to be non-yielding on Scheduler” error message when you run a query in SQL Server 2012

SQL Server 2012 SP1 Cumulative Update 2 (Build 11.0.3339), 50 total public hotfixes

A database transaction log continues to grow after you upgrade to SQL Server 2012, SQL Server 2008 R2 SP1, SQL Server 2008 SP2 or SQL Server 2008 SP3

FIX: The size of a database file is not reduced when you use the DBCC SHRINKFILE command in SQL Server 2008 R2 or in SQL Server 2012

FIX: High CPU usage when you query a binary large object column by using the NOLOCK hint in SQL Server 2008 R2 or in SQL Server 2012

FIX: Large queries that modify data run slower than expected when many locks accumulate in SQL Server 2008 R2 or in SQL Server 2012

FIX: A transaction log restore operation takes longer than expected and a 3402 error occurs if a database file is removed in SQL Server 2008 R2 or in SQL Server 2012

SQL Server 2012 experiences out-of-memory errors

FIX: Deadlocks occur when you execute a stored procedure to alter a temporary table if lock partitioning is enabled in SQL Server 2008 R2 or in SQL Server 2012

FIX: Memory leak if you enable the AUTO_UPDATE_STATISTICS_ASYNC statistics option in SQL Server 2008, in SQL Server 2012 or in SQL Server 2008 R2

FIX: A “non-yielding” error occurs on a server for Service Broker Message Forwarding in SQL Server 2008 R2 or in SQL Server 2012

FIX: Error 17883 when you run a query on a server that has many CPUs and a large amount of memory in SQL Server 2012

SQL Server 2012 SP1 Cumulative Update 3 (Build 11.0.3349), 38 total public hotfixes

FIX: Access violation when you run a query that contains many constant values in an IN clause in SQL Server 2008 or in SQL Server 2012

FIX: Memory leak when you run queries against a temporary table in a nested stored procedure in SQL Server 2012

FIX: Poor performance in SQL Server 2012 when you run a SQL Server trace

FIX: You experience poor performance when you run a query against an RCSI-enabled table in SQL Server 2012

FIX: CPU spike when there is no load on a server after you install SQL Server 2012 on the server

SQL Server 2012 experiences performance issues in NUMA environments

SQL Server 2012 SP1 Cumulative Update 4 (Build 11.0.3368), 38 total public hotfixes

FIX: Out-of-memory errors related to a memory clerk in SQL Server 2012

FIX: Out of memory error when you build a columnstore index on partitioned tables in SQL Server 2012

Update that improves the Service Broker when you send messages to remote servers in SQL Server 2012 is available

An update is available for SQL Server 2012 Memory Management

FIX: “Non-yielding Scheduler” error occurs when you insert a row in SQL Server 2012

FIX: You may experience performance issues in SQL Server 2012

SQL Server 2012 SP1 Cumulative Update 5 (Build 11.0.3373), 27 total public hotfixes

FIX: Poor performance when you run a query that uses the LIKE operator in the WHERE clause in SQL Server 2008 R2

FIX: Non-yielding scheduler error when you run a stored procedure that uses a TVP in SQL Server 2012

SQL Server 2012 SP1 Cumulative Update 6 (Build 11.0.3381), 24 total public hotfixes

FIX: Slow performance on SQL Server 2012 linked server when you update the data together with different collation on the remote server

FIX: A memory leak occurs when you enable AlwaysOn Availability Groups or SQL Server failover cluster in Microsoft SQL Server 2012

 

Again, the idea here is to give you some concrete reasons to want to stay current with the latest SQL Server 2012 SP and CU, by pointing out some of the more valuable fixes in each CU in the Service Pack 1 branch.

Scaling SQL Server 2012 Pre-Conference Session

In just a couple of weeks, I will be giving a full day pre-conference session at the SQLPASS 2013 Conference in Charlotte, North Carolina. My pre-conference session will be on Monday, October 14, 2013. This pre-con’s going to be a lot of fun, and it will be a great way to kick off the SQLPASS 2013 Conference.

What you’ll learn

You’re a DBA, database developer, or system admin who must maintain a database server that is not performing and scaling well. You are not sure where the main scalability problems are or what you can do to solve them. The thought of picking out the best server and storage subsystem without making an expensive mistake makes you more than a little bit nervous.

This pre-conference session will cover the following topics and more:

  • Top scalability issues with SQL Server 2012
  • How you can postpone the scaling decision by finding and removing bottlenecks
  • How to use my SQL Server Diagnostic Information Queries to pinpoint performance issues
  • How to select and size your hardware and storage subsystem for maximum scalability
  • How to select hardware to get the best performance while minimizing your SQL Server 2012 licensing costs
  • How to use the scaling features built into SQL Server 2012 and 2014
  • How to scale up SQL Server 2012
  • How to use engineering techniques to scale out SQL Server 2012

Here is the full abstract:

Scaling SQL Server 2012

SQL Server implementations can quickly evolve and become more complex, forcing DBAs and developers to think about how they can scale their solution quickly and effectively. Scaling up is relatively easy (but can be expensive), while scaling out requires significant engineering time and effort. If you suggest hardware upgrades you may be accused of simply “throwing hardware at the problem”, and if you try to scale out, you may be thwarted by a lack of development resources or 3rd party software restrictions. As your database server nears its load capacity, what can you do? This session gives you concrete, practical advice on how to deal with this situation. Starting with your present workload, configuration and hardware, we will explore how to find and alleviate bottlenecks, whether they are workload related, configuration related, or hardware related. Next, we will cover how you can decide whether you should scale up or scale out your data tier. Once that decision is made, you will learn how to scale up properly, with nearly zero down-time. If you decide to scale out, you will learn about practical, production-ready techniques such as vertical partitioning, horizontal partitioning, and data dependent routing. We will also cover how to use middle-tier caching and other application techniques to increase your overall scalability.

How much does it cost?

When you register for the PASS Summit, my “Scaling SQL Server” pre-conference session is just $395.00. If you’ve already registered for the PASS 2013 Summit, email Shannon.Cunningham@sqlpass.org to take advantage of this opportunity.

SQL Server Diagnostic Information Queries for September 2013

I have made some minor updates and bug fixes for all of my SQL Server Diagnostic Information Queries for this month. I have also added a new version for SQL Server 2014.

Rather than having a separate blog post for each version, I’ll just put the links for all five versions here.

SQL Server 2005 Diagnostic Information Queries              SQL Server 2005 Blank Results

SQL Server 2008 Diagnostic Information Queries              SQL Server 2008 Blank Results

SQL Server 2008 R2 Diagnostic Information Queries         SQL Server 2008 R2 Blank Results

SQL Server 2012 Diagnostic Information Queries              SQL Server 2012 Blank Results

SQL Server 2014 Diagnostic Information Queries              SQL Server 2014 Blank Results

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about, instead of the master system database.

The idea is that you would run each query in the set, one at a time (after reading the directions). You need to click on the top left square of the results grid to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers to the Windows clipboard. Then you paste the results into the matching tab in the spreadsheet. There are also some comments on how to interpret the results after each query.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have databases in 80 compatibility mode, which breaks many DMV queries.
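If you want to check whether any of your databases are still in an old compatibility level, a quick query against sys.databases will show you:

```sql
-- Find databases still in an old compatibility level
-- (80 = SQL Server 2000, which breaks many DMV queries)
SELECT name, compatibility_level
FROM sys.databases
WHERE compatibility_level < 90
ORDER BY name;
```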

Please let me know what you think of these queries. Thanks!

How To Avoid Orphaned Database Users with SQL Server Authentication

One common issue that database administrators often run into is the old, familiar “orphaned” user problem. This happens when you use SQL Server Authentication to create a SQL Server login on your database server. When you do this, SQL Server generates a unique SID for that SQL Server login. After you create the SQL Server login, you typically create a database user in a user database on that server instance, and associate the database user with that SQL Server login.

This works fine until you try to restore that user database to another SQL Server instance. If you previously created a SQL Server login with the same name on the new server, the SID for that SQL Server login will not match the database user in the user database that you have restored (from the other instance), hence the term “orphaned” user. This is an especially big issue if you are using database mirroring, since your database users will be orphaned when you fail over from one instance to the other. It is also an issue with log shipping, and it often comes up when you migrate from an old database server to a new database server.

There are several ways to fix this, but the best approach (outside of just using Windows Authentication and avoiding the problem entirely) is to create the new server login using the same SID that it had on the original server, as shown below:

-- Get SIDs for all SQL Server logins on the old server instance
SELECT name, [sid] 
FROM sys.server_principals
WHERE [type] = 'S'; -- 'S' = SQL Server login

-- Drop the SQL login on the new server instance if it already exists
IF EXISTS (SELECT * FROM sys.server_principals WHERE name = N'SQLAppUser')
    DROP LOGIN SQLAppUser;
GO

-- Use the sid from the old server instance 
CREATE LOGIN SQLAppUser WITH PASSWORD = N'YourStrongPassword#', sid = 0x2F5B769F543973419BCEF78DE9FC1A64,
DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF;
GO
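If you have already restored a database and want to find any users that are orphaned, you can compare the database users to the server logins by SID. This is a simple sketch; the database name is a hypothetical placeholder:

```sql
-- Find database users whose SID has no matching server login (orphaned users)
USE [YourUserDatabase];  -- hypothetical database name
GO
SELECT dp.name AS [Database User], dp.[sid]
FROM sys.database_principals AS dp
LEFT OUTER JOIN sys.server_principals AS sp
ON dp.[sid] = sp.[sid]
WHERE sp.[sid] IS NULL
AND dp.[type] = 'S'; -- SQL users only
```

Any rows returned are database users that no server login can map to, which you can then fix by recreating the login with the matching SID as shown above.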

New Versions of Useful Tools for SQL Server

There is a completely rewritten version of Geekbench, which is a very useful cross-platform processor and memory performance benchmark that you can use for comparing and validating processor and memory performance on anything from your laptop to a very expensive database server in just a few minutes, with no configuration of the benchmark needed. As the Primate Labs web site puts it:

Geekbench 3 is Primate Labs’ next-generation processor benchmark, with a new scoring system that separates single-core and multi-core performance, and new workloads that simulate real-world scenarios. Geekbench 3 makes it easier than ever to find out if your computer is up to speed.

There is also a new 1.66 version of CPU-Z, that has support for some upcoming processors, such as the Intel Xeon E5-2600 v2 series (Ivy Bridge-EP).


SQL Server 2012 Diagnostic Information Queries for August 2013

I have gone through and made some minor updates and bug fixes for all of my SQL Server 2012 Diagnostic Information Queries for this month, along with adding two new queries at the end of the script. The links for the queries and the blank results spreadsheet are below.

SQL Server 2012 Diagnostic Information Queries                SQL Server 2012 Blank Results

About half of the queries are instance specific and about half are database specific, so you will want to make sure you are connected to a database that you are concerned about, instead of the master system database.

The idea is that you would run each query in the set, one at a time (after reading the directions). You need to click on the top left square of the results grid to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers to the Windows clipboard. Then you paste the results into the matching tab in the spreadsheet.

There are also some comments on how to interpret the results after each query. Please let me know what you think of these queries.

Experiments with SQL Server VLFs, Part 1

Since SQL Server 2012 is relatively easy to install and get running, using just the default configuration settings, I thought it would be interesting to explore the negative performance effects of some of these default configuration settings, using a simple, repeatable test.

My test machine is a desktop machine with an Intel Core i7-3770K processor with 32GB of RAM, and two fast, consumer-level SSDs. The C: drive is a 512GB OCZ Vertex 4, and the L: drive is a faster, 256GB Samsung 840 Pro. Both SSDs are plugged into 6Gbps SATA III ports, and they are performing as I would expect.

For my test database, I downloaded a copy of the 2008 Credit database and restored it to a SQL Server 2012 SP1 CU5 instance. This database has a table called charge, which has 1.6 million rows of data. The database starts out with a 1000MB data file, and a 400MB log file with 15 VLFs. The database is in the SIMPLE recovery model, and both the data and log files have their autogrowth set to 10% (which is not the best choice). I used the MOVE option of the RESTORE command to place the SQL Server log file for the database on my separate, faster L: drive, so I would get the fastest possible sequential write performance for the log file.

To make this experiment more interesting, I changed the recovery model to FULL, and I changed the autogrowth for both the data and log files to 1MB (which is a really bad idea). Finally, I took a compressed, full database backup to have a SQL Server 2012 baseline for the experiments, as shown in Listing 1:

Initial Preparation of the Credit Database

USE [master];

-- Start of initial preparation
-- Restore from the original SQL Server 2008 backup with MOVE if necessary
-- This will upgrade it to SQL Server 2012 format
RESTORE DATABASE [Credit] 
FROM  DISK = N'C:\SQLBackups\CreditBackup100.bak' WITH  FILE = 1,  
MOVE N'CreditData' TO N'C:\SQLData\CreditData.mdf',  
MOVE N'CreditLog' TO N'L:\SQLLogs\CreditLog.ldf',  NOUNLOAD,  STATS = 1;
GO

-- Change compatibility level to 110 (SQL Server 2012)
-- This is different from the database format
ALTER DATABASE [Credit] SET COMPATIBILITY_LEVEL = 110;
GO
-- Change recovery model to FULL
ALTER DATABASE [Credit] SET RECOVERY FULL WITH NO_WAIT;
GO
-- Change file growth for data file to 1 MB (This is a bad thing to do!)
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditData', FILEGROWTH = 1MB);
GO
-- Change file growth for log file to 1 MB (This is a bad thing to do!)
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', FILEGROWTH = 1MB);
GO

-- Take a compressed full backup for a baseline
BACKUP DATABASE [Credit] 
TO  DISK = N'C:\SQLBackups\CreditBackup110.bak' WITH NOFORMAT, INIT,  
NAME = N'Credit-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 1;
GO
-- End of initial preparation

Listing 1: Initial Preparation and Backup of Baseline Database

Standard Test Details

After the initial preparation, I ran a test, where I doubled the number of rows in the charge table every time I went through a loop, with a simple INSERT…SELECT pattern, also incrementing the charge_dt to a later date for each new set of charges. I added some timing information, and gathered some information about cumulative waits, VLFs, and file sizes for each test run. I dropped and then restored the baseline SQL Server 2012 Credit database before each test run. I also restarted the SQL Server Service before each test run after I made my configuration changes. Listing 2 shows the standard test loop that I ran for each configuration. Listing 3 shows the cleanup I did after each test was done.

-- Start Standard Test
USE Credit;
GO

-- Suppress done messages
SET NOCOUNT ON;

-- Get starting row count and size of Charge table
EXEC sp_spaceused N'dbo.charge';

-- Get initial VLF count
DBCC LogInfo;

-- Individual File Sizes and space available for current database  
SELECT f.name AS [File Name] , f.physical_name AS [Physical Name], 
CAST((f.size/128.0) AS decimal(15,2)) AS [Total Size in MB],
CAST(f.size/128.0 - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int)/128.0 AS decimal(15,2)) 
AS [Available Space In MB], [file_id], fg.name AS [Filegroup Name]
FROM sys.database_files AS f WITH (NOLOCK) 
LEFT OUTER JOIN sys.data_spaces AS fg WITH (NOLOCK) 
ON f.data_space_id = fg.data_space_id OPTION (RECOMPILE);

-- Clear Wait Stats 
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

-- Declare and initialize some variables
DECLARE @LoopCount int = 1;
DECLARE @DateDiff int = 30;
DECLARE @Start datetime = GETDATE();
DECLARE @LoopDone datetime;
DECLARE @End datetime;

-- Double the size of the charge table each time, changing charge_dt to a higher value each loop
WHILE @LoopCount < 6
    BEGIN
        INSERT INTO dbo.charge 
        (member_no, provider_no, category_no, charge_dt, charge_amt, statement_no, charge_code)
        SELECT member_no, provider_no, category_no, charge_dt + @DateDiff, charge_amt, statement_no, charge_code
        FROM dbo.charge;

        -- Do some timing and housekeeping
        SET @LoopDone = GETDATE();
        PRINT N'Loop ' + CONVERT(NVARCHAR(5), @LoopCount) + N' done in ' + CONVERT(NVARCHAR(5), DATEDIFF(second, @Start, @LoopDone)) + N' seconds';
        SET @DateDiff += 30;
        SET @LoopCount += 1;
    END

SET @End = GETDATE();

SELECT DATEDIFF(second, @Start, @End) AS [Total Elapsed Time (sec)];

-- Isolate top waits for server instance since last restart or statistics clear 
-- SQL Server 2012 and newer specific version
WITH Waits
AS (SELECT wait_type, CAST(wait_time_ms / 1000. AS DECIMAL(12, 2)) AS [wait_time_s],
    CAST(100. * wait_time_ms / SUM(wait_time_ms) OVER () AS decimal(12,2)) AS [pct],
    ROW_NUMBER() OVER (ORDER BY wait_time_ms DESC) AS rn
    FROM sys.dm_os_wait_stats WITH (NOLOCK)
    WHERE wait_type NOT IN (N'CLR_SEMAPHORE', N'LAZYWRITER_SLEEP', N'RESOURCE_QUEUE',N'SLEEP_TASK',
                            N'SLEEP_SYSTEMTASK', N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR', N'LOGMGR_QUEUE',
                            N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH', N'XE_TIMER_EVENT',
                            N'BROKER_TO_FLUSH', N'BROKER_TASK_STOP', N'CLR_MANUAL_EVENT', N'CLR_AUTO_EVENT',
                            N'DISPATCHER_QUEUE_SEMAPHORE' ,N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'XE_DISPATCHER_WAIT',
                            N'XE_DISPATCHER_JOIN', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'ONDEMAND_TASK_QUEUE',
                            N'BROKER_EVENTHANDLER', N'SLEEP_BPOOL_FLUSH', N'SLEEP_DBSTARTUP', N'DIRTY_PAGE_POLL',
                            N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',N'SP_SERVER_DIAGNOSTICS_SLEEP')),
Running_Waits 
AS (SELECT W1.wait_type, wait_time_s, pct,
    SUM(pct) OVER(ORDER BY pct DESC ROWS UNBOUNDED PRECEDING) AS [running_pct]
    FROM Waits AS W1)
SELECT wait_type, wait_time_s, pct, running_pct
FROM Running_Waits
WHERE running_pct - pct <= 99
ORDER BY running_pct
OPTION (RECOMPILE);

-- Get ending VLF count of current database
DBCC LogInfo;

-- Individual File Sizes and space available for current database  
SELECT f.name AS [File Name] , f.physical_name AS [Physical Name], 
CAST((f.size/128.0) AS decimal(15,2)) AS [Total Size in MB],
CAST(f.size/128.0 - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int)/128.0 AS decimal(15,2)) 
AS [Available Space In MB], [file_id], fg.name AS [Filegroup Name]
FROM sys.database_files AS f WITH (NOLOCK) 
LEFT OUTER JOIN sys.data_spaces AS fg WITH (NOLOCK) 
ON f.data_space_id = fg.data_space_id OPTION (RECOMPILE);

-- Get ending row count and size of Charge table
EXEC sp_spaceused N'dbo.charge';
GO
-- End Standard Test

Listing 2: Standard Test

-- Cleanup after each test run
-- Drop Credit database
USE [master]
GO
ALTER DATABASE [Credit] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DROP DATABASE [Credit];
GO

-- Restore from our initial baseline SQL Server 2012 full backup
RESTORE DATABASE [Credit] 
FROM  DISK = N'C:\SQLBackups\CreditBackup110.bak' WITH  FILE = 1,  
MOVE N'CreditData' TO N'C:\SQLData\CreditData.mdf',  
MOVE N'CreditLog' TO N'L:\SQLLogs\CreditLog.ldf',  NOUNLOAD,  STATS = 1;
GO

-- Restart SQL Server Service

-- End of Cleanup

Listing 3: Cleanup After Each Test Run

After all of this preparation, what did I actually want to test? Each test would measure the elapsed time and ending Virtual Log File (VLF) count after inserting 49.4 million rows into the charge table, in ever larger batches of INSERTs. I also wanted to see what the top cumulative wait statistics were for each test run.

Test Configurations and Results

Test 1

Leave Windows Instant File Initialization (IFI) disabled, which is the default condition when you install SQL Server, unless you choose to enable it. I used the very bad autogrowth value of 1MB for both the data file and the log file of the database. This is a very bad combination that you should not emulate! I expected this configuration to perform relatively poorly, and I was not surprised by the results. This test took 1074 seconds to complete, and the VLF count went up to 126,399. Table 1 shows the top cumulative wait types during this test run. The PREEMPTIVE_OS_xxx waits are related to the fact that we have such a small file autogrowth increment and that Windows Instant File Initialization is not enabled.

 

Wait Type Percent Running Percent
CXPACKET 72.22 72.22
PREEMPTIVE_OS_FLUSHFILEBUFFERS 13.01 85.23
PREEMPTIVE_OS_WRITEFILEGATHER 6.52 91.75
WRITELOG 2.13 93.88
PREEMPTIVE_OS_FILEOPS 1.39 95.27
PAGEIOLATCH_EX 0.97 96.24
ASYNC_IO_COMPLETION 0.82 97.06
WRITE_COMPLETION 0.71 97.77
IO_COMPLETION 0.67 98.44
LATCH_SH 0.62 99.06

Table 1: Test 1 Cumulative Wait Types

Test 2

Enable Windows Instant File Initialization (making sure to restart the SQL Server Service). I used the same bad autogrowth value of 1MB for both the data file and the log file of the database.  I expected this configuration to still perform relatively poorly. This test took 1069 seconds to complete, and the VLF count went up to 126,399. The lower elapsed time was really within the margin of error, so just enabling Windows Instant File Initialization was not really helping in a meaningful way (by itself). Keep in mind that IFI only affects the data file, not the log file. Table 2 shows the top cumulative wait types during this test run. The cumulative wait types during Test 2 did not really change in a meaningful way.

Wait Type Percent Running Percent
CXPACKET 72.88 72.88
PREEMPTIVE_OS_FLUSHFILEBUFFERS 13.34 86.22
PREEMPTIVE_OS_WRITEFILEGATHER 6.4 92.62
PREEMPTIVE_OS_FILEOPS 1.39 94.01
WRITELOG 1.12 95.13
WRITE_COMPLETION 1.01 96.14
PAGEIOLATCH_EX 0.87 97.01
ASYNC_IO_COMPLETION 0.83 97.84
LATCH_SH 0.67 98.51
IO_COMPLETION 0.65 99.16

Table 2: Test 2 Cumulative Wait Types

Test 3

Enable Windows Instant File Initialization (making sure to restart the SQL Server Service). I changed the autogrowth value to a more reasonable 1024MB for both the data file and the log file of the database. I expected this configuration to perform much better, with a reduced VLF count in the log file. This test took 771 seconds to complete, and the VLF count only went up to 246, which was much better. Table 3 shows the top cumulative wait types during this test run. We can see that PREEMPTIVE_OS_FLUSHFILEBUFFERS drops completely off the list when there are far fewer file growths during the test.

Wait Type Percent Running Percent
CXPACKET 88.95 88.95
PREEMPTIVE_OS_WRITEFILEGATHER 7.48 96.43
ASYNC_IO_COMPLETION 1.11 97.54
PAGEIOLATCH_EX 0.89 98.43
LATCH_SH 0.88 99.31

Table 3: Test 3 Cumulative Wait Types

Test 4

Enable Windows Instant File Initialization (making sure to restart the SQL Server Service). I changed the autogrowth value to a more reasonable 1024MB for both the data file and the log file of the database.  I also pre-grew the log file to 32000MB, in 4000MB increments (to reduce the VLF count and eliminate any log file autogrowths during the test). I expected this configuration to perform a little better than Test 3, with a reduced VLF count in the log file. This test took 711 seconds to complete, and the VLF count only went up to 127, which was even better. Table 4 shows the top cumulative wait types during this test run. Notice that the PREEMPTIVE_OS_WRITEFILEGATHER wait type is greatly reduced by this latest configuration, since we are only seeing a few file growths for the data file and none for the log file.

Wait Type Percent Running Percent
CXPACKET 94.74 94.74
PREEMPTIVE_OS_WRITEFILEGATHER 1.57 96.31
ASYNCH_IO_COMPLETION 1.13 97.44
PAGEIOLATCH_EX 1.12 98.56
LATCH_SH 0.94 99.50

Table 4: Test 4 Cumulative Wait Types
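Pre-growing the log file in 4000MB increments is just a series of ALTER DATABASE ... MODIFY FILE statements, each one specifying the next target size (again, the database and logical file names here are hypothetical placeholders):

```sql
-- Pre-grow the log file to 32000MB in 4000MB steps to keep the VLF count low
-- (TestDB and TestDB_Log are hypothetical placeholders)
ALTER DATABASE TestDB MODIFY FILE (NAME = N'TestDB_Log', SIZE = 4000MB);
ALTER DATABASE TestDB MODIFY FILE (NAME = N'TestDB_Log', SIZE = 8000MB);
-- ...continue in 4000MB steps (12000MB, 16000MB, and so on)...
ALTER DATABASE TestDB MODIFY FILE (NAME = N'TestDB_Log', SIZE = 32000MB);
```

Growing in moderate steps like this, rather than one 32000MB operation, is what keeps the number of VLFs created per growth under control.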

 

Test 5

Test 5 used the same configuration steps as Test 4, except that I changed the instance-level MAXDOP setting to 4 (since I had a single, quad-core processor with hyper-threading enabled). This change had no measurable effect, taking 716 seconds to complete, and the VLF count stayed at 127. Since there is only one NUMA node on my desktop machine, I did not expect any improvement here. Table 5 shows the top cumulative wait types during this test run.

Wait Type Percent Running Percent
CXPACKET 92.69 92.69
PREEMPTIVE_OS_WRITEFILEGATHER 2.71 95.40
ASYNCH_IO_COMPLETION 1.89 97.29
PAGEIOLATCH_EX 1.68 98.97
LATCH_SH 0.63 99.60

Table 5: Test 5 Cumulative Wait Types
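The instance-level MAXDOP change for Test 5 can be made with sp_configure, which requires showing advanced options first:

```sql
-- Set the instance-level max degree of parallelism to 4
-- ('max degree of parallelism' is an advanced option)
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max degree of parallelism', 4;
RECONFIGURE;
```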

Test 6

Test 6 used the same configuration steps as Test 5, except that I changed the tempdb configuration by adding three additional tempdb data files, and I made all of the tempdb data files the same initial size (2048MB) with an autogrowth increment of 1024MB. I also gave the tempdb log file a larger initial size of 1024MB, with an autogrowth increment of 1024MB. This change also had no measurable effect, taking 723 seconds to complete, with the VLF count staying at 127. This was somewhat of a surprise, since I could see the original tempdb data file being hit pretty hard during the test runs. Table 6 shows the top cumulative wait types during this test run.

Wait Type Percent Running Percent
CXPACKET 94.07 94.07
ASYNCH_IO_COMPLETION 1.96 96.03
PREEMPTIVE_OS_WRITEFILEGATHER 1.96 97.99
PAGEIOLATCH_EX 1.83 99.82

Table 6: Test 6 Cumulative Wait Types
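The tempdb changes for Test 6 look something like this (the new logical file names and the T: drive paths below are hypothetical placeholders for wherever your tempdb files live):

```sql
-- Resize the existing tempdb data and log files to the target sizes
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'tempdev', SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'templog', SIZE = 1024MB, FILEGROWTH = 1024MB);

-- Add three more equally-sized data files
-- (logical names and T:\ paths are hypothetical placeholders)
ALTER DATABASE tempdb ADD FILE (NAME = N'tempdev2',
    FILENAME = N'T:\tempdb2.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb ADD FILE (NAME = N'tempdev3',
    FILENAME = N'T:\tempdb3.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb ADD FILE (NAME = N'tempdev4',
    FILENAME = N'T:\tempdb4.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
```

Keeping all of the tempdb data files the same size encourages SQL Server's proportional fill algorithm to spread allocations evenly across the files.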

Part One Conclusions

We can see from these tests that it is very beneficial to use a reasonable autogrowth increment size for both your data file(s) and log file, rather than an extremely small one. It is also very beneficial to manually pre-grow your log file to an appropriate size, in 4000MB increments, so that it does not have to autogrow during normal and maintenance operations. This will also keep your VLF count under control.

It is also beneficial to enable Windows Instant File Initialization (IFI), even though it only affects your SQL Server data file(s). More tests in Part 2 will show some of the other advantages of IFI for SQL Server. You can get the complete script and some more detailed results here. Table 7 shows the overall test results for Part 1. Keep in mind that this testing used relatively high-performance, consumer-level SSDs, so the negative effects of lots of small file growths in the data and log files are not as severe as they would be with magnetic disk storage.

Test Run Elapsed Time VLF Count
Test 1 1074 seconds 126,399
Test 2 1069 seconds 126,399
Test 3 771 seconds 246
Test 4 711 seconds 127
Test 5 716 seconds 127
Test 6 723 seconds 127

Table 7: Part One Test Results
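In SQL Server 2012, the VLF counts shown in Table 7 can be checked with DBCC LOGINFO, which returns one row per virtual log file in the current database, so the row count is the VLF count:

```sql
-- DBCC LOGINFO returns one row per virtual log file (VLF)
-- in the current database, so the row count is the VLF count
USE TestDB;  -- hypothetical database name
DBCC LOGINFO;
```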

Building an Intel Haswell Desktop System for SQL Server 2012 Development

The 22nm 4th generation Intel Core desktop processors (aka Haswell) have been available for sale for a couple of months now, so I have had some requests to put together a suggested component list for a desktop system that would be suitable for SQL Server 2012 development and testing usage. As you may know, the desktop Haswell processors use Socket 1150, which is incompatible with the older Socket 1155 that was used in the previous desktop 32nm Sandy Bridge and 22nm Ivy Bridge processors.

That means you will have to get a new motherboard to use a Haswell processor. You should get a Z87 chipset motherboard, which gives you the nicest feature set. You want to make sure you get a motherboard that has four memory slots instead of two, so that you can eventually have 32GB of RAM. You also want to pay attention to how many total SATA III ports you are getting. The Z87 chipset natively supports six SATA III ports (with hardware RAID support), and many motherboards will have an extra Marvell controller that supports two or four more SATA III ports (with hardware RAID support). SATA III support is vital for getting full performance out of modern solid state drives.

This system will have an Intel Core i7-4770K quad-core processor (plus hyper-threading) and 16GB of RAM (which can be expanded to 32GB of RAM for about $129.00 more). You could also back down to an Intel Core i5-4670K processor, which is a quad-core without hyper-threading. The Core i5 has slightly lower base and Turbo clock speeds and a smaller 6MB L3 cache compared to the Core i7, but it is $80.00 less at Micro Center. Both of these processors have Intel HD Graphics 4600 integrated graphics, which saves you the cost and extra power usage of a discrete video card. You would probably have to spend about $100.00 to get a discrete video card with better performance than the Intel HD Graphics 4600 integrated graphics, and I just don’t think you need to do that for normal desktop usage.

This system will have better CPU and memory performance than many older production database servers (although you are limited to 32GB of RAM). Depending on what you want to do with this system, you may need or want to add additional SATA III SSDs or conventional hard drives. If you skip the optical drive, you can add seven more drives to this system before you run out of drive bays and SATA III ports. With enough fast SSDs, you may have better I/O performance (under a light load) than many production database servers. On the other hand, you won’t have redundant components (such as dual power supplies) like you would have with a rack-mounted database server. You will also have consumer-level SATA SSDs, which cannot handle a heavy server workload with the same consistent performance as expensive enterprise-level SAS SSDs.

Below, I have links to the manufacturer information about each component, along with links to the components at Micro Center and NewEgg.

Motherboard: Gigabyte GA-Z87X-UD4H (Micro Center, NewEgg)

This Gigabyte motherboard has a total of eight SATA III ports across two separate controllers, with hardware RAID support (but no cache memory). In my experience, Gigabyte motherboards typically let you install server operating systems (including Windows Server 2008 R2 or Windows Server 2012) without any driver issues. Another alternative would be Windows 8 Professional with Hyper-V, so that you can run Windows Server 2012 in VMs. Micro Center is currently selling these for an insanely low sale price of $114.99, plus you get $40.00 more off the motherboard when you buy it with an eligible processor. Update: This motherboard actually uses an Intel NIC, and Intel (in its infinite wisdom) does not allow you to install the NIC drivers on a server operating system, such as Windows Server 2012.

Processor: Intel Core i7-4770K (Micro Center, NewEgg)

This is the “top of the line” Haswell desktop processor, with an unlocked multiplier. It is also the main Core i7 processor that Micro Center carries, and it is eligible for their $40.00 motherboard/processor bundle discount. It does support VT-x with Extended Page Tables for hardware virtualization support, but it does not support VT-d for directed I/O with virtualization. If you are really concerned about VT-d, you can always get an Intel Core i7-4770 (which does have VT-d) from NewEgg or Micro Center. It will cost $309.99 at NewEgg or $249.99 at Micro Center, and you would not get the motherboard bundle discount from Micro Center. I would say that with a decent number of good SSDs, you are much less likely to have any I/O bottlenecks with virtualization.

Power Supply: Corsair CX500M (Micro Center, NewEgg)

This is a high-quality, modular power supply that can easily handle a system like this. It has an 80 Plus Bronze efficiency rating, which is pretty good. A lower-wattage power supply is more efficient at low output levels than a higher-wattage one, so a power supply like this will be less expensive to buy and will save you money on electricity over time.

Case: Fractal Design Define R4 (Micro Center, NewEgg)

These Fractal Design cases get universally excellent reviews, and they are very easy to work on when you are building the system, with excellent cable management features. They are also very quiet, with sound deadening foam inside. This case has eight 2.5”/3.5” drive bays and two USB 3.0 ports on the top front of the case. It does not have any silly gaming features.

Memory: Crucial Ballistix Sport 16GB DDR3-1600 (Micro Center, NewEgg)

This is pretty decent memory that is eligible for the $10.00 bundle discount from Micro Center when you buy it with a motherboard. NewEgg’s price is actually a little cheaper on this one item. You could also spend more money on higher-speed memory, although you may or may not notice much benefit from it in real life.

System Drive: 256GB Samsung 840 Pro SSD (Micro Center, NewEgg)

This is one of the top consumer SSDs available right now, with lots of good reviews. I have bought a number of them, and they are very fast. They are also eligible for a $20.00 bundle discount from Micro Center when you buy one with a motherboard or processor.

Optical Drive: 24X LG DVDRW OEM (Micro Center, NewEgg)

I still like to have an optical drive, even though I rarely use it. If you have an external USB optical drive, you can use that to install the OS, or you could use a thumb drive.

As you can see below, if you are lucky enough to live near a Micro Center, you can save a significant amount of money by getting all of these components from Micro Center instead of NewEgg. You will have to pay sales tax at Micro Center, while you probably won’t at NewEgg. Most of the components (except the case) have free shipping from NewEgg.

Item Model Micro Center Price NewEgg Price
Motherboard Gigabyte GA-Z87X-UD4H 114.99 – 40.00 Bundle 189.99 w/FS
Processor Intel Core i7-4770K 279.99 339.99 w/FS
Power Supply Corsair CX500M 59.99 – 10.00 MIR 69.99 – 10.00 MIR
Case Fractal Design Define R4 89.99 99.99 + 9.99 Ship
Memory Crucial Ballistix Sport 129.99 – 10.00 Bundle 115.99 w/FS
System Drive 256GB Samsung 840 Pro 239.99 – 20.00 Bundle 239.99 w/FS
Optical Drive 24X LG DVDRW OEM 15.99 17.99 w/FS
Total 850.93 1073.92

I’ll also have a post up in the near future about how to build an Intel Sandy Bridge-E or Intel Ivy Bridge-E system, which can have six cores (plus hyper-threading) and 64GB of RAM. One of those systems will be considerably more expensive, due to a more expensive motherboard, a more expensive processor, and more RAM.

24 Hours of PASS: Summit Preview

I had the opportunity to present a preview session of my upcoming day-long pre-con, Scaling SQL Server 2012, in the last slot of the 24 Hours of PASS: Summit Preview, starting at 5:00AM Mountain Time on August 1. I think the session went pretty well, after some initial technical difficulties on my part. The session recording will be available in the next couple of weeks. I’ll update this post with the recording link once it’s available. In the meantime, you can download my deck here.

I also got to do a virtual session for the PASS Performance Virtual Chapter on July 25. The session recording is here. You can download my deck here.

Just in case these sessions whetted your appetite for a much deeper level of knowledge about SQL Server Hardware and how to scale SQL Server 2012, you should consider attending the upcoming two-day IEHW: Immersion Event on SQL Server Hardware course that I will be teaching in Bellevue, WA on September 18-19, 2013.