Scaling SQL Server 2012 Pre-Conference Session

In just a couple of weeks, I will be giving a full day pre-conference session at the SQLPASS 2013 Conference in Charlotte, North Carolina. My pre-conference session will be on Monday, October 14, 2013. This pre-con’s going to be a lot of fun, and it will be a great way to kick off the SQLPASS 2013 Conference.

What you’ll learn

You’re a DBA, database developer, or system admin who must maintain a database server that is not performing and scaling well. You are not sure where the main scalability problems are or what you can do to solve them. The thought of picking out the best server and storage subsystem without making an expensive mistake makes you more than a little bit nervous.

This pre-conference session will cover the following topics and more:

  • Top scalability issues with SQL Server 2012
  • How you can postpone the scaling decision by finding and removing bottlenecks
  • How to use my SQL Server Diagnostic Information Queries to pinpoint performance issues
  • How to select and size your hardware and storage subsystem for maximum scalability
  • How to select hardware to get the best performance while minimizing your SQL Server 2012 licensing costs
  • How to use the scaling features built into SQL Server 2012 and 2014
  • How to scale up SQL Server 2012
  • How to use engineering techniques to scale out SQL Server 2012

Here is the full abstract:

Scaling SQL Server 2012

SQL Server implementations can quickly evolve and become more complex, forcing DBAs and developers to think about how they can scale their solution quickly and effectively. Scaling up is relatively easy (but can be expensive), while scaling out requires significant engineering time and effort. If you suggest hardware upgrades, you may be accused of simply “throwing hardware at the problem”, and if you try to scale out, you may be thwarted by a lack of development resources or third-party software restrictions. As your database server nears its load capacity, what can you do? This session gives you concrete, practical advice on how to deal with this situation. Starting with your present workload, configuration, and hardware, we will explore how to find and alleviate bottlenecks, whether they are workload related, configuration related, or hardware related. Next, we will cover how you can decide whether you should scale up or scale out your data tier. Once that decision is made, you will learn how to scale up properly, with nearly zero downtime. If you decide to scale out, you will learn about practical, production-ready techniques such as vertical partitioning, horizontal partitioning, and data-dependent routing. We will also cover how to use middle-tier caching and other application techniques to increase your overall scalability.

How much does it cost?

When you register for the PASS Summit, my “Scaling SQL Server” pre-conference session is just $395.00. If you’ve already registered for the PASS 2013 Summit, email Shannon.Cunningham@sqlpass.org to take advantage of this opportunity.

SQL Server Diagnostic Information Queries for September 2013

I have made some minor updates and bug fixes for all of my SQL Server Diagnostic Information Queries for this month. I have also added a new version for SQL Server 2014.

Rather than having a separate blog post for each version, I’ll just put the links for all five versions here.

SQL Server 2005 Diagnostic Information Queries              SQL Server 2005 Blank Results

SQL Server 2008 Diagnostic Information Queries              SQL Server 2008 Blank Results

SQL Server 2008 R2 Diagnostic Information Queries         SQL Server 2008 R2 Blank Results

SQL Server 2012 Diagnostic Information Queries              SQL Server 2012 Blank Results

SQL Server 2014 Diagnostic Information Queries              SQL Server 2014 Blank Results

About half of the queries are instance-specific and about half are database-specific, so you will want to make sure you are connected to a database that you are concerned about, rather than the master system database.
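
To make the distinction concrete, here is a minimal sketch (these are not the actual diagnostic queries, and YourUserDatabase is a placeholder for the database you care about):

-- Instance-specific: same results regardless of the current database context
SELECT @@SERVERNAME AS [Server Name], @@VERSION AS [SQL Server Version];

-- Database-specific: results depend on the current database context
USE YourUserDatabase;
SELECT DB_NAME() AS [Current Database], COUNT(*) AS [User Table Count]
FROM sys.tables;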

The idea is that you would run each query in the set, one at a time (after reading the directions). You need to click on the top left square of the results grid to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the spreadsheet. There are also some comments on how to interpret the results after each query.

Note: These queries are stored on Dropbox. I occasionally get reports that the links to the queries and blank results spreadsheets do not work, which is most likely because Dropbox is blocked wherever people are trying to connect.

I also occasionally get reports that some of the queries simply don’t work. This usually turns out to be an issue where people have databases in 80 compatibility mode, which breaks many DMV queries.
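
If you suspect this is the problem, a quick check (a sketch; YourDatabase is a placeholder name) is to look at the compatibility level of your databases and raise any that are still at 80, after appropriate testing:

-- Find databases still in 80 (SQL Server 2000) compatibility mode
SELECT name, compatibility_level
FROM sys.databases
WHERE compatibility_level = 80;

-- Raise the compatibility level (test your application first!)
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 100;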

Please let me know what you think of these queries. Thanks!

How To Avoid Orphaned Database Users with SQL Server Authentication

One common issue that database administrators often run into is the old, familiar “orphaned” user problem. This happens when you use SQL Server Authentication to create a SQL Server login on your database server. When you do this, SQL Server generates a unique SID for that SQL Server login. After you create the SQL Server login, you typically create a database user in a user database on that server instance, and associate the database user with that SQL Server login.

This works fine until you try to restore that user database to another SQL Server instance. Even if you previously created a SQL Server login with the same name on the new server, the SID for that SQL Server login will not match the database user in the user database that you have restored (from the other instance). Hence the term “orphaned” user. This is an especially big issue if you are using database mirroring, since your database users will be orphaned when you fail over from one instance to the other. It is also an issue with log shipping, and it often comes up when you migrate from an old database server to a new database server.

There are several ways to fix this, but the best option (outside of just using Windows Authentication and avoiding the problem entirely) is to create the new server login using the same SID as on the original server, as you see below:

-- Get the SIDs for all SQL Server logins on the old server instance
SELECT name, [sid]
FROM sys.server_principals
WHERE [type] = 'S';

-- Drop the login on the new server instance if it already exists
IF EXISTS (SELECT * FROM sys.server_principals WHERE name = N'SQLAppUser')
    DROP LOGIN SQLAppUser;
GO

-- Create the login, reusing the SID from the old server instance
CREATE LOGIN SQLAppUser WITH PASSWORD = N'YourStrongPassword#', SID = 0x2F5B769F543973419BCEF78DE9FC1A64,
DEFAULT_DATABASE = [master], DEFAULT_LANGUAGE = [us_english], CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;
GO
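
If you already have orphaned users after a restore, you can detect them by looking for database users whose SID has no matching server login, and then re-map them with ALTER USER. Here is a minimal sketch, reusing the hypothetical SQLAppUser from above:

-- Find database users whose SID does not match any server login
SELECT dp.name AS [Orphaned User], dp.[sid]
FROM sys.database_principals AS dp
LEFT OUTER JOIN sys.server_principals AS sp
ON dp.[sid] = sp.[sid]
WHERE sp.[sid] IS NULL
AND dp.[type] = 'S'        -- SQL Server authentication users only
AND dp.principal_id > 4;   -- skip dbo, guest, INFORMATION_SCHEMA, sys

-- Re-map an orphaned database user to the login with the same name
ALTER USER SQLAppUser WITH LOGIN = SQLAppUser;
GO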

New Versions of Useful Tools for SQL Server

There is a completely rewritten version of Geekbench, a very useful cross-platform benchmark that you can use to compare and validate processor and memory performance on anything from your laptop to a very expensive database server, in just a few minutes, with no configuration of the benchmark needed. As the Primate Labs web site puts it:

Geekbench 3 is Primate Labs’ next-generation processor benchmark, with a new scoring system that separates single-core and multi-core performance, and new workloads that simulate real-world scenarios. Geekbench 3 makes it easier than ever to find out if your computer is up to speed.

There is also a new version 1.66 of CPU-Z, which has support for some upcoming processors, such as the Intel Xeon E5-2600 v2 series (Ivy Bridge-EP).


SQL Server 2012 Diagnostic Information Queries for August 2013

I have gone through and made some minor updates and bug fixes for all of my SQL Server 2012 Diagnostic Information Queries for this month, along with adding two new queries at the end of the script. The links for the queries and the blank results spreadsheet are below.

SQL Server 2012 Diagnostic Information Queries                SQL Server 2012 Blank Results

About half of the queries are instance-specific and about half are database-specific, so you will want to make sure you are connected to a database that you are concerned about, rather than the master system database.

The idea is that you would run each query in the set, one at a time (after reading the directions). You need to click on the top left square of the results grid to select all of the results, and then right-click and select “Copy with Headers” to copy all of the results, including the column headers, to the Windows clipboard. Then you paste the results into the matching tab in the spreadsheet.

There are also some comments on how to interpret the results after each query. Please let me know what you think of these queries.

Experiments with SQL Server VLFs, Part 1

Since SQL Server 2012 is relatively easy to install and get running using just the default configuration settings, I thought it would be interesting to explore the negative performance effects of some of those defaults, using a simple, repeatable test.

My test machine is a desktop machine with an Intel Core i7-3770K processor with 32GB of RAM, and two fast, consumer-level SSDs. The C: drive is a 512GB OCZ Vertex 4, and the L: drive is a faster, 256GB Samsung 840 Pro. Both SSDs are plugged into 6Gbps SATA III ports, and they are performing as I would expect.

For my test database, I downloaded a copy of the 2008 Credit database and restored it to a SQL Server 2012 SP1 CU5 instance. This database has a table called charge, which has 1.6 million rows of data. The database starts out with a 1000MB data file, and a 400MB log file with 15 VLFs. The database is in the SIMPLE recovery model, and both the data and log files have their autogrowth set to 10% (which is not the best choice). I used the MOVE option of the RESTORE command to place the SQL Server log file for the database on my separate, faster L: drive, so I would get the fastest possible sequential write performance for the log file.

To make this experiment more interesting, I changed the recovery model to FULL, and I changed the autogrowth for both the data and log files to 1MB (which is a really bad idea). Finally, I took a compressed, full database backup to have a SQL Server 2012 baseline for the experiments, as shown in Listing 1:

Initial Preparation of the Credit Database

USE [master];

-- Start of initial preparation
-- Restore from the original SQL Server 2008 backup with MOVE if necessary
-- This will upgrade it to SQL Server 2012 format
RESTORE DATABASE [Credit] 
FROM  DISK = N'C:\SQLBackups\CreditBackup100.bak' WITH  FILE = 1,  
MOVE N'CreditData' TO N'C:\SQLData\CreditData.mdf',  
MOVE N'CreditLog' TO N'L:\SQLLogs\CreditLog.ldf',  NOUNLOAD,  STATS = 1;
GO

-- Change compatibility level to 110 (SQL Server 2012)
-- This is different from the database format
ALTER DATABASE [Credit] SET COMPATIBILITY_LEVEL = 110;
GO
-- Change recovery model to FULL
ALTER DATABASE [Credit] SET RECOVERY FULL WITH NO_WAIT;
GO
-- Change file growth for data file to 1 MB (This is a bad thing to do!)
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditData', FILEGROWTH = 1MB);
GO
-- Change file growth for log file to 1 MB (This is a bad thing to do!)
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', FILEGROWTH = 1MB);
GO

-- Take a compressed full backup for a baseline
BACKUP DATABASE [Credit] 
TO  DISK = N'C:\SQLBackups\CreditBackup110.bak' WITH NOFORMAT, INIT,  
NAME = N'Credit-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 1;
GO
-- End of initial preparation

Listing 1: Initial Preparation and Backup of Baseline Database

Standard Test Details

After the initial preparation, I ran a test, where I doubled the number of rows in the charge table every time I went through a loop, with a simple INSERT…SELECT pattern, also incrementing the charge_dt to a later date for each new set of charges. I added some timing information, and gathered some information about cumulative waits, VLFs, and file sizes for each test run. I dropped and then restored the baseline SQL Server 2012 Credit database before each test run. I also restarted the SQL Server Service before each test run after I made my configuration changes. Listing 2 shows the standard test loop that I ran for each configuration. Listing 3 shows the cleanup I did after each test was done.

-- Start Standard Test
USE Credit;
GO

-- Suppress rows affected messages
SET NOCOUNT ON;

-- Get starting row count and size of Charge table
EXEC sp_spaceused N'dbo.charge';

-- Get initial VLF count
DBCC LogInfo;

-- Individual File Sizes and space available for current database  
SELECT f.name AS [File Name] , f.physical_name AS [Physical Name], 
CAST((f.size/128.0) AS decimal(15,2)) AS [Total Size in MB],
CAST(f.size/128.0 - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int)/128.0 AS decimal(15,2)) 
AS [Available Space In MB], [file_id], fg.name AS [Filegroup Name]
FROM sys.database_files AS f WITH (NOLOCK) 
LEFT OUTER JOIN sys.data_spaces AS fg WITH (NOLOCK) 
ON f.data_space_id = fg.data_space_id OPTION (RECOMPILE);

-- Clear Wait Stats 
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

-- Declare and initialize some variables
DECLARE @LoopCount int = 1;
DECLARE @DateDiff int = 30;
DECLARE @Start datetime = GETDATE();
DECLARE @LoopDone datetime;
DECLARE @End datetime;

-- Double the size of the charge table each time, changing charge_dt to a higher value each loop
WHILE @LoopCount < 6
    BEGIN
        INSERT INTO dbo.charge 
        (member_no, provider_no, category_no, charge_dt, charge_amt, statement_no, charge_code)
        SELECT member_no, provider_no, category_no, charge_dt + @DateDiff, charge_amt, statement_no, charge_code
        FROM dbo.charge;

        -- Do some timing and housekeeping
        SET @LoopDone = GETDATE();
        PRINT N'Loop ' + CONVERT(NVARCHAR(5), @LoopCount) + N' done in ' + CONVERT(NVARCHAR(5), DATEDIFF(second, @Start, @LoopDone)) + N' seconds';
        SET @DateDiff += 30;
        SET @LoopCount += 1;
    END

SET @End = GETDATE();

SELECT DATEDIFF(second, @Start, @End) AS [Total Elapsed Time (sec)];

-- Isolate top waits for server instance since last restart or statistics clear 
-- SQL Server 2012 and newer specific version
WITH Waits
AS (SELECT wait_type, CAST(wait_time_ms / 1000. AS DECIMAL(12, 2)) AS [wait_time_s],
    CAST(100. * wait_time_ms / SUM(wait_time_ms) OVER () AS decimal(12,2)) AS [pct],
    ROW_NUMBER() OVER (ORDER BY wait_time_ms DESC) AS rn
    FROM sys.dm_os_wait_stats WITH (NOLOCK)
    WHERE wait_type NOT IN (N'CLR_SEMAPHORE', N'LAZYWRITER_SLEEP', N'RESOURCE_QUEUE',N'SLEEP_TASK',
                            N'SLEEP_SYSTEMTASK', N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR', N'LOGMGR_QUEUE',
                            N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH', N'XE_TIMER_EVENT',
                            N'BROKER_TO_FLUSH', N'BROKER_TASK_STOP', N'CLR_MANUAL_EVENT', N'CLR_AUTO_EVENT',
                            N'DISPATCHER_QUEUE_SEMAPHORE' ,N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'XE_DISPATCHER_WAIT',
                            N'XE_DISPATCHER_JOIN', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'ONDEMAND_TASK_QUEUE',
                            N'BROKER_EVENTHANDLER', N'SLEEP_BPOOL_FLUSH', N'SLEEP_DBSTARTUP', N'DIRTY_PAGE_POLL',
                            N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',N'SP_SERVER_DIAGNOSTICS_SLEEP')),
Running_Waits 
AS (SELECT W1.wait_type, wait_time_s, pct,
    SUM(pct) OVER(ORDER BY pct DESC ROWS UNBOUNDED PRECEDING) AS [running_pct]
    FROM Waits AS W1)
SELECT wait_type, wait_time_s, pct, running_pct
FROM Running_Waits
WHERE running_pct - pct <= 99
ORDER BY running_pct
OPTION (RECOMPILE);

-- Get ending VLF count of current database
DBCC LogInfo;

-- Individual File Sizes and space available for current database  
SELECT f.name AS [File Name] , f.physical_name AS [Physical Name], 
CAST((f.size/128.0) AS decimal(15,2)) AS [Total Size in MB],
CAST(f.size/128.0 - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int)/128.0 AS decimal(15,2)) 
AS [Available Space In MB], [file_id], fg.name AS [Filegroup Name]
FROM sys.database_files AS f WITH (NOLOCK) 
LEFT OUTER JOIN sys.data_spaces AS fg WITH (NOLOCK) 
ON f.data_space_id = fg.data_space_id OPTION (RECOMPILE);

-- Get ending row count and size of Charge table
EXEC sp_spaceused N'dbo.charge';
GO
-- End Standard Test

Listing 2: Standard Test

-- Cleanup after each test run
-- Drop Credit database
USE [master]
GO
ALTER DATABASE [Credit] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DROP DATABASE [Credit];
GO

-- Restore from our initial baseline SQL Server 2012 full backup
RESTORE DATABASE [Credit] 
FROM  DISK = N'C:\SQLBackups\CreditBackup110.bak' WITH  FILE = 1,  
MOVE N'CreditData' TO N'C:\SQLData\CreditData.mdf',  
MOVE N'CreditLog' TO N'L:\SQLLogs\CreditLog.ldf',  NOUNLOAD,  STATS = 1;
GO

-- Restart SQL Server Service

-- End of Cleanup

Listing 3: Cleanup After Each Test Run

After all of this preparation, what did I actually want to test? Each test would measure the elapsed time and ending Virtual Log File (VLF) count after inserting 49.4 million rows into the charge table, in ever larger batches of INSERTs. I also wanted to see what the top cumulative wait statistics were for each test run.
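
As an aside, since DBCC LOGINFO returns one row per VLF, the VLF count can also be captured programmatically. Here is a sketch that assumes the SQL Server 2012 output format (which added the RecoveryUnitId column):

-- Capture DBCC LOGINFO output and count the rows (one row = one VLF)
CREATE TABLE #VLFInfo
(RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
 FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(38,0));

INSERT INTO #VLFInfo
EXEC (N'DBCC LOGINFO WITH NO_INFOMSGS');

SELECT COUNT(*) AS [VLF Count] FROM #VLFInfo;
DROP TABLE #VLFInfo;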

Test Configurations and Results

Test 1

Disable Windows Instant File Initialization (IFI), which is the default condition when you install SQL Server, unless you choose to enable it. I used the very bad autogrowth value of 1MB for both the data file and the log file of the database. This is a very bad combination that you should not emulate! I expected this configuration to perform relatively poorly, and I was not surprised by the results. This test took 1074 seconds to complete, and the VLF count went up to 126,399. Table 1 shows the top cumulative wait types during this test run. The PREEMPTIVE_OS_xxx waits are related to the fact that we have such a small file autogrowth increment and that Windows Instant File Initialization is not enabled (a verification sketch follows Table 1).

 

Wait Type                          Percent    Running Percent
CXPACKET                             72.22     72.22
PREEMPTIVE_OS_FLUSHFILEBUFFERS       13.01     85.23
PREEMPTIVE_OS_WRITEFILEGATHER         6.52     91.75
WRITELOG                              2.13     93.88
PREEMPTIVE_OS_FILEOPS                 1.39     95.27
PAGEIOLATCH_EX                        0.97     96.24
ASYNC_IO_COMPLETION                   0.82     97.06
WRITE_COMPLETION                      0.71     97.77
IO_COMPLETION                         0.67     98.44
LATCH_SH                              0.62     99.06

Table 1: Test 1 Cumulative Wait Types
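
Before moving on to Test 2, which enables IFI, it is worth noting one way to verify whether IFI is actually in effect. This sketch uses the (undocumented) trace flags 3004 and 3605, which write file-zeroing messages to the error log; if a new data file gets zeroed, IFI is not working:

-- Trace flag 3004 reports file zeroing; 3605 sends the output to the error log
DBCC TRACEON(3004, 3605, -1);
GO
CREATE DATABASE IFITest;  -- hypothetical throwaway database
GO
-- Look for "Zeroing" messages; a zeroed .mdf means IFI is NOT enabled
-- (the log file is always zeroed, with or without IFI)
EXEC sys.xp_readerrorlog 0, 1, N'Zeroing';
GO
DROP DATABASE IFITest;
DBCC TRACEOFF(3004, 3605, -1);
GO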

Test 2

Enable Windows Instant File Initialization (making sure to restart the SQL Server Service). I used the same bad autogrowth value of 1MB for both the data file and the log file of the database. I expected this configuration to still perform relatively poorly. This test took 1069 seconds to complete, and the VLF count went up to 126,399. The lower elapsed time was really within the margin of error, so just enabling Windows Instant File Initialization was not helping in a meaningful way (by itself). Keep in mind that IFI only affects the data file, not the log file. Table 2 shows the top cumulative wait types during this test run; they were essentially unchanged from Test 1.

Wait Type                          Percent    Running Percent
CXPACKET                             72.88     72.88
PREEMPTIVE_OS_FLUSHFILEBUFFERS       13.34     86.22
PREEMPTIVE_OS_WRITEFILEGATHER         6.40     92.62
PREEMPTIVE_OS_FILEOPS                 1.39     94.01
WRITELOG                              1.12     95.13
WRITE_COMPLETION                      1.01     96.14
PAGEIOLATCH_EX                        0.87     97.01
ASYNC_IO_COMPLETION                   0.83     97.84
LATCH_SH                              0.67     98.51
IO_COMPLETION                         0.65     99.16

Table 2: Test 2 Cumulative Wait Types

Test 3

Enable Windows Instant File Initialization (making sure to restart the SQL Server Service). I changed the autogrowth value to a more reasonable 1024MB for both the data file and the log file of the database. I expected this configuration to perform much better, with a reduced VLF count in the log file. This test took 771 seconds to complete, and the VLF count only went up to 246, which was much better. Table 3 shows the top cumulative wait types during this test run. We can see that PREEMPTIVE_OS_FLUSHFILEBUFFERS drops completely off the list when we have far fewer file growths during the test.

Wait Type                          Percent    Running Percent
CXPACKET                             88.95     88.95
PREEMPTIVE_OS_WRITEFILEGATHER         7.48     96.43
ASYNC_IO_COMPLETION                   1.11     97.54
PAGEIOLATCH_EX                        0.89     98.43
LATCH_SH                              0.88     99.31

Table 3: Test 3 Cumulative Wait Types

Test 4

Enable Windows Instant File Initialization (making sure to restart the SQL Server Service). I changed the autogrowth value to a more reasonable 1024MB for both the data file and the log file of the database.  I also pre-grew the log file to 32000MB, in 4000MB increments (to reduce the VLF count and eliminate any log file autogrowths during the test). I expected this configuration to perform a little better than Test 3, with a reduced VLF count in the log file. This test took 711 seconds to complete, and the VLF count only went up to 127, which was even better. Table 4 shows the top cumulative wait types during this test run. Notice that the PREEMPTIVE_OS_WRITEFILEGATHER wait type is greatly reduced by this latest configuration, since we are only seeing a few file growths for the data file and none for the log file.

Wait Type                          Percent    Running Percent
CXPACKET                             94.74     94.74
PREEMPTIVE_OS_WRITEFILEGATHER         1.57     96.31
ASYNC_IO_COMPLETION                   1.13     97.44
PAGEIOLATCH_EX                        1.12     98.56
LATCH_SH                              0.94     99.50

Table 4: Test 4 Cumulative Wait Types

 

Test 5

Test 5 used the same configuration steps as Test 4, except that I changed the instance-level MAXDOP setting to 4 (since I had a single, quad-core processor with hyper-threading enabled); a configuration sketch follows Table 5. This change had no measurable effect, taking 716 seconds to complete, and the VLF count stayed at 127. Since there is only one NUMA node on my desktop machine, I did not expect any improvement here. Table 5 shows the top cumulative wait types during this test run.

Wait Type                          Percent    Running Percent
CXPACKET                             92.69     92.69
PREEMPTIVE_OS_WRITEFILEGATHER         2.71     95.40
ASYNC_IO_COMPLETION                   1.89     97.29
PAGEIOLATCH_EX                        1.68     98.97
LATCH_SH                              0.63     99.60

Table 5: Test 5 Cumulative Wait Types
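
For reference, here is a sketch of one way to make the instance-level MAXDOP change used in Test 5:

-- Set instance-level max degree of parallelism to 4
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
GO
EXEC sys.sp_configure N'max degree of parallelism', 4;
RECONFIGURE;
GO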

Test 6

Test 6 used the same configuration steps as Test 5, except that I changed the tempdb configuration by adding three additional tempdb data files and making all of the tempdb data files the same initial size (2048MB), with an autogrowth increment of 1024MB. I also gave the tempdb log file a larger 1024MB initial size, with an autogrowth increment of 1024MB (a sketch of these changes follows Table 6). This change also had no measurable effect, taking 723 seconds to complete, with the VLF count staying at 127. This was somewhat of a surprise, since I could see the original tempdb data file being hit pretty hard during the test runs. Table 6 shows the top cumulative wait types during this test run.

Wait Type                          Percent    Running Percent
CXPACKET                             94.07     94.07
ASYNC_IO_COMPLETION                   1.96     96.03
PREEMPTIVE_OS_WRITEFILEGATHER         1.96     97.99
PAGEIOLATCH_EX                        1.83     99.82

Table 6: Test 6 Cumulative Wait Types
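
Here is a sketch of the tempdb changes described above (the logical file names match the defaults, but the file paths are assumptions; adjust both for your system):

USE [master];
GO
-- Resize the existing tempdb files to the new size and growth settings
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev', SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'templog', SIZE = 1024MB, FILEGROWTH = 1024MB);
-- Add three more equally-sized tempdb data files
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev2', FILENAME = N'C:\SQLData\tempdev2.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev3', FILENAME = N'C:\SQLData\tempdev3.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev4', FILENAME = N'C:\SQLData\tempdev4.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
GO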

Part One Conclusions

We can see from these tests that it is very beneficial to use a reasonable autogrowth increment size for both your data file(s) and log file, rather than an extremely small one. It is also very beneficial to manually pre-grow your log file to an appropriate size, in 4000MB increments, so that it does not have to autogrow during normal and maintenance operations. This will also keep your VLF count under control.
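
For example, here is a sketch of pre-growing the Credit database log file to 32000MB in 4000MB increments:

-- Each ALTER grows the log file to the new total SIZE, in 4000MB steps
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 4000MB);
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 8000MB);
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 12000MB);
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 16000MB);
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 20000MB);
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 24000MB);
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 28000MB);
ALTER DATABASE [Credit] MODIFY FILE (NAME = N'CreditLog', SIZE = 32000MB);
GO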

It is also beneficial to enable Windows Instant File Initialization (IFI), even though it only affects your SQL Server data file(s). More tests in Part 2 will show some of the other advantages of IFI for SQL Server. You can get the complete script and some more detailed results here. Table 7 shows the overall test results for Part 1. Keep in mind that this testing used relatively high-performance, consumer-level SSDs, so the negative effects of lots of small file growths in the data and log files are not as severe as they might be with magnetic disk storage.

Test Run    Elapsed Time    VLF Count
Test 1      1074 seconds    126,399
Test 2      1069 seconds    126,399
Test 3       771 seconds    246
Test 4       711 seconds    127
Test 5       716 seconds    127
Test 6       723 seconds    127

Table 7: Part One Test Results

Building an Intel Haswell Desktop System for SQL Server 2012 Development

The 22nm 4th generation Intel Core desktop processors (aka Haswell) have been available for sale for a couple of months now, so I have had some requests to put together a suggested component list for a desktop system that would be suitable for SQL Server 2012 development and testing usage. As you may know, the desktop Haswell processors use Socket 1150, which is incompatible with the older Socket 1155 that was used in the previous desktop 32nm Sandy Bridge and 22nm Ivy Bridge processors.

That means you will have to get a new motherboard to use a Haswell processor. You should get a Z87 chipset motherboard, which gives you a nice feature set. You want to make sure you get a motherboard that has four memory slots, instead of two, so you can have 32GB of RAM. You also want to pay attention to how many total SATA III ports you are getting. The Z87 chipset natively supports six SATA III ports (with hardware RAID support), and many motherboards have an extra Marvell controller that can support either two or four more SATA III ports (with hardware RAID support). Having SATA III support is vital for modern SATA III solid state drives.

This system will have an Intel Core i7-4770K quad-core processor (plus hyper-threading), and 16GB of RAM (which can be expanded to 32GB of RAM for about $129.00 more). You could also back down to an Intel Core i5-4670K processor, which is a quad-core without hyper-threading. The Core i5 has a slightly lower base and Turbo clock speed and a smaller 6MB L3 cache compared to the Core i7, but it is $80.00 less at Micro Center. Both of these processors have Intel HD Graphics 4600 integrated graphics, which saves you the cost and extra power usage of a discrete video card. You would probably have to spend about $100.00 to get a discrete video card that has better performance than the Intel HD Graphics 4600 integrated graphics, and I just don’t think you will need to do that for normal desktop usage.

This system will have better CPU and memory performance than many older production database servers (although you are limited to 32GB of RAM). Depending on what you want to do with this system, you may need or want to add additional SATA III SSDs or conventional hard drives. If you skip the optical drive, you can add seven more drives to this system before you run out of drive bays and SATA III ports. With enough fast SSDs, you may have better I/O performance (under a light load) than many production database servers. On the other hand, you won’t have redundant components (such as dual power supplies) like you would have with a rack-mounted database server. You will also be using consumer-level SATA SSDs, which cannot handle a heavy server workload with the consistent performance of expensive enterprise-level SAS SSDs.

Below, I have links to the manufacturer information about each component, along with links to the components at Micro Center and NewEgg.

Motherboard               Gigabyte GA-Z87X-UD4H                              Micro Center                                   NewEgg

This Gigabyte motherboard has a total of eight SATA III ports between two separate controllers, with hardware RAID support (but no cache memory). In my experience, Gigabyte motherboards typically let you install server operating systems (including Windows Server 2008 R2 or Windows Server 2012) without any driver issues. Another alternative would be Windows 8 Professional with Hyper-V, so you can run Windows Server 2012 in VMs. Micro Center is currently selling these for an insanely low sale price of $114.99, plus you get $40.00 more off the motherboard when you buy it with an eligible processor. Update: This motherboard actually uses an Intel NIC, and Intel (in its infinite wisdom) does not allow you to install the NIC drivers on a server operating system, such as Windows Server 2012.

Processor                    Intel Core i7-4770K                                      Micro Center                                   NewEgg

This is the “top of the line” Haswell desktop processor, with an unlocked multiplier. It is also the main Core i7 processor that Micro Center carries, and it is eligible for their $40.00 motherboard/processor bundle discount. It does support VT-x with Extended Page Tables for hardware virtualization support, but it does not support VT-d for directed I/O with virtualization. If you are really concerned about VT-d, you can always get an Intel Core i7-3770 (which does have VT-d) from NewEgg or Micro Center. It will cost $309.99 at NewEgg or $249.99 at Micro Center, and you would not get the motherboard bundle discount from Micro Center. I would say that with a decent number of good SSDs, you are much less likely to have any I/O bottlenecks with virtualization.

Power Supply              Corsair CX500M                                           Micro Center                                   NewEgg

This is a high-quality, modular power supply that can easily support a system like this. It has an 80 Plus Bronze efficiency rating, which is pretty good. A lower wattage power supply is more efficient at lower output levels than a higher wattage power supply, so a power supply like this will save you money over time and be less expensive to buy.

Case                            Fractal Design Define R4                               Micro Center                                   NewEgg

These Fractal Design cases get universally excellent reviews, and they are very easy to work on when you are building the system, with excellent cable management features. They are also very quiet, with sound deadening foam inside. This case has eight 2.5”/3.5” drive bays and two USB 3.0 ports on the top front of the case. It does not have any silly gaming features.

Memory                       Crucial Ballistix Sport 16GB DDR3-1600         Micro Center                                   NewEgg

This is pretty decent memory that is eligible for the $10.00 bundle discount from Micro Center when you buy it with a motherboard. NewEgg’s price is actually a little cheaper on this one item. You could also spend more money on higher speed memory, which you may or may not notice much benefit from in real life.

System Drive               256GB Samsung 840 Pro SSD                       Micro Center                                   NewEgg

This is one of the top consumer SSDs available right now, with lots of good reviews. I have bought a number of these, and they are very fast. They are also eligible for a $20.00 bundle discount from Micro Center when you buy them with a motherboard or processor.

Optical Drive               24X LG DVDRW  OEM                                   Micro Center                                   NewEgg

I still like to have an optical drive, even though I rarely use it. If you have an external USB optical drive, you can use that to install the OS, or you could use a thumb drive.

As you can see below, if you are lucky enough to live near a Micro Center, you can save a significant amount of money by getting all of these components from Micro Center instead of NewEgg. You will have to pay sales tax at Micro Center, while you probably won’t at NewEgg. Most of the components (except the case) have free shipping from NewEgg.

Item            Model                       Micro Center Price       NewEgg Price
Motherboard     Gigabyte GA-Z87X-UD4H       114.99 – 40.00 Bundle    189.99 w/FS
Processor       Intel Core i7-4770K         279.99                   339.99 w/FS
Power Supply    Corsair CX500M              59.99 – 10.00 MIR        69.99 – 10.00 MIR
Case            Fractal Design Define R4    89.99                    99.99 + 9.99 Ship
Memory          Crucial Ballistix Sport     129.99 – 10.00 Bundle    115.99 w/FS
System Drive    256GB Samsung 840 Pro       239.99 – 20.00 Bundle    239.99 w/FS
Optical Drive   24X LG DVDRW OEM            15.99                    17.99 w/FS
Total                                       850.93                   1073.92

I’ll also have a post up in the near future that talks about how to build an Intel Sandy Bridge-E or Ivy Bridge-E system that can have six cores (plus hyper-threading) and 64GB of RAM. One of those systems will be considerably more expensive, due to a more expensive motherboard, a more expensive processor, and more RAM.

24 Hours of PASS: Summit Preview

I had the opportunity to present a preview session of my upcoming day-long pre-con, Scaling SQL Server 2012, in the last slot of the 24 Hours of PASS: Summit Preview, starting at 5:00AM Mountain Time on August 1. I think the session went pretty well, after some initial technical difficulties on my part. The session recording will be available in the next couple of weeks. I’ll update this post with the recording link once it’s available. In the meantime, you can download my deck here.

I also got to do a virtual session for the PASS Performance Virtual Chapter on July 25. The session recording is here. You can download my deck here.

Just in case these sessions whetted your appetite for a much deeper level of knowledge about SQL Server Hardware and how to scale SQL Server 2012, you should consider attending the upcoming two-day IEHW: Immersion Event on SQL Server Hardware course that I will be teaching in Bellevue, WA on September 18-19, 2013.

Updates on Intel Xeon Ivy Bridge Server Processor Launch Schedules

It still looks like the 2nd generation, 22nm Intel Xeon E5-1600 v2 and E5-2600 v2 families (Ivy Bridge-EP), for single and two-socket servers, will launch sometime this quarter. Some sources I have talked to think that Intel may wait until Intel IDF13 in San Francisco, on September 10-12, to release Ivy Bridge-EP. These processors use the same Socket 2011 as the current 32nm Intel Xeon E5-1600 and E5-2600 families (Sandy Bridge-EP), so current model servers from your favorite server vendor should be able to use these processors, perhaps with a BIOS update.

In most cases you would not actually want to do a processor upgrade on an existing server for economic reasons, but you could if you wanted to. Having socket and chipset compatibility just means that the server vendors will be able to offer the new processor as soon as they get a supply of them from Intel.

The E5-2600 v2 series is aimed at two-socket servers, and will have at least 18 different “Ivy Bridge-EP” SKUs, ranging from the entry-level E5-2603 v2 up to the twelve-core E5-2697 v2. The 22nm processors have up to 12 physical cores, which allows them to have 24 logical cores with hyper-threading enabled.  The second series, the E5-1600 v2, works only in single-socket systems, and is going to initially have three models, the E5-1620 v2, E5-1650 v2, and E5-1660 v2. Both families will work with the Intel C600 series chipsets, and both use Socket 2011.

The second generation 22nm Intel Xeon E7 family (which includes the Xeon E7-2800 v2, E7-4800 v2, and E7-8800 v2) is aimed at larger, multi-processor servers, and it will be delayed until at least Q1 2014. Previous reports indicated these Ivy Bridge-EX processors would be available in Q4 2013. These processors will have triple the memory capacity of the current 32nm Westmere-EX processors, and they will require new server models from the server vendors. They will also finally have PCI-E 3.0 support, so overall they will be a huge upgrade from the current Westmere-EX.

Also in Q1 2014, Intel is going to release the 22nm Xeon E5-4600 v2 and E5-2400 v2 processors. The E5-4600 v2 processors will work in four-socket servers, and they use Socket 2011. The E5-2400 v2 (Ivy Bridge-EN) processors are for two-socket servers, have up to 10 cores, and will use Socket 1356. These Ivy Bridge-EN processors will NOT be a good choice for SQL Server 2012 and SQL Server 2014 OLTP workloads compared to the higher performance Ivy Bridge-EP processors.

I really hate to see Intel slip their release schedule like this. I think a big part of the reason it happened is a lack of viable competition from AMD. After all, why should Intel rush to push out new technology when they are absolutely dominant from a performance perspective? They can continue to sell the current processors for a little longer with no real consequences.

Given the apparent delay for the Ivy Bridge-EX, a twelve-core Ivy Bridge-EP processor will be the hot ticket for a lot of people who are looking at new database servers over the next six to nine months. A new two-socket Ivy Bridge-EP system with Windows Server 2012 R2 and SQL Server 2014 will be the way to go in the near future.

Upcoming Speaking Events

On July 25, I will be presenting for the PASS Performance Virtual Chapter at 12PM Mountain Time. I will be giving my Hardware 201 presentation. Here is the abstract:

Hardware 201: Selecting and Sizing Database Hardware for OLTP Performance

The foundation of database performance is the underlying server hardware and storage subsystem. Even the best designed and optimized database application can be crippled by an inadequate hardware and storage infrastructure. Recent advances in new processors and chipsets, along with improvements in magnetic and SSD storage have dramatically changed the evaluation and selection process compared to the past. Many database professionals struggle to keep up with new technology and often simply let someone else make their hardware selection and sizing decisions. Unfortunately, the DBA usually gets the blame for any performance issues that crop up later. Don’t let this happen to you! This session covers current and upcoming hardware from both Intel and AMD and gives you the tools and resources to make better hardware selection decisions to support SQL Server OLTP workloads.

You can register for the event here.

Next, early in the morning on August 1, I will be presenting for the latest 24 Hours of PASS event. I have the last session of the entire event, starting at 5AM Mountain Time. Despite that, hopefully I will have some people from Europe and Asia who will be able to attend. Here is the abstract:

Scaling SQL Server 2012

SQL Server implementations can rapidly evolve and become more complex, forcing DBAs and developers to think about how they can scale their solution quickly and effectively. Scaling up is relatively easy but can be expensive, while scaling out requires significant engineering time and effort. As your database server nears its load capacity, what can you do? This 24 Hours of PASS session, a preview of the full-day PASS Summit pre-conference session, will give you concrete, practical advice about how to scale SQL Server. We’ll explore how to find and alleviate bottlenecks and cover how you can decide whether you should scale up or scale out your data tier. We’ll also cover how to use middle-tier caching and other application techniques to increase your overall scalability.

You can register for 24HOP sessions here.