This month the SQLskills team is presenting a series of blog posts aimed at helping Accidental/Junior DBAs ‘keep the SQL Server lights on’. It’s a little taster to let you know what we cover in our Immersion Event for The Accidental/Junior DBA, which we present several times each year. If you know someone who would benefit from this class, refer them and earn a $50 Amazon gift card – see class pages for details. You can find all the other posts in this series at http://www.SQLskills.com/help/AccidentalDBA. Enjoy!

I usually get two questions whenever I talk about hardware at a SQL Server event. The first is always about virtualization; the second is usually about Solid State Drives (SSDs) and how they should be used with SQL Server. As flash-based storage becomes more affordable, I am often asked which components of a SQL Server database should be moved to it. Unfortunately, the answer is that it depends: on your workload, and on where (if anywhere) you are experiencing I/O bottlenecks in your system, whether on your data files, log files, or tempdb files.

Traditional magnetic spinning storage (a hard drive) does relatively well with sequential read and write operations. A single 15K rpm Serial Attached SCSI (SAS) drive can sustain about 150-200MB/sec of sequential throughput. Where traditional hard drives struggle is with random I/O, whose rate is measured in Input/Output Operations Per Second (IOPS). Since a traditional hard drive is an electro-mechanical device, with a moving actuator arm that has to position the drive heads over a spinning platter to find and then access the data you need, you are dealing with much higher latency than you see with solid-state storage, which has no moving parts. Because of this, that same single 15K rpm SAS drive can only do about 150-200 IOPS.
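That IOPS ceiling falls straight out of the mechanics. At 15,000 rpm, one full rotation takes 60 s / 15,000 = 4 ms, so the average rotational latency is about 2 ms; adding a typical average seek time of roughly 3.5 ms (an assumed, representative figure for a 15K drive) gives a per-operation service time of about 5.5 ms:

```
1 second / (3.5 ms avg seek + 2.0 ms avg rotational latency)
  = 1000 ms / 5.5 ms
  ≈ 180 random IOPS
```

An SSD has no seek or rotational latency at all, which is why its random IOPS figure is two to three orders of magnitude higher.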

In contrast, a single 6Gbps SATA or SAS solid-state drive can do about 550MB/sec for sequential throughput and about 100,000 IOPS for random read/write operations. If that is not impressive enough, there are flash-based, high-end PCI-E storage devices that can do up to 6GB/sec for sequential throughput and about 1,000,000 IOPS for random read/write operations. There are also more affordable flash-based, PCI-E storage devices that can do up to 2GB/sec for sequential throughput and about 200,000 IOPS for random read/write operations.

Flash-based storage has become much more affordable and much more reliable over the past couple of years. There are some entry-level, enterprise-class flash-based storage devices from Intel, such as the Intel DC S3700 line of SATA SSDs and the Intel 910 series of PCI-E storage cards, that make it much more feasible to start moving more of your database infrastructure to solid-state storage.

Depending on your database size and your budget, it may make a lot of sense to move an entire user database (data files and log file) to solid-state storage, especially with a heavy OLTP workload. If you have multiple user databases running on a single instance of SQL Server, your combined I/O workload will become more randomized (lots of random reads and writes), meaning you will see even more benefit from solid-state storage, which excels at random I/O.

If you don’t have enough space available to move all of your user database files to solid-state storage, you will need to be more selective about which files you move. Think about what type of I/O workload you have (which is related to your overall workload type), and which logical drives and which specific database files are seeing I/O bottlenecks.
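One way to find those bottlenecks is to look at cumulative, file-level latency with the sys.dm_io_virtual_file_stats DMV. A sketch of that kind of query is below; keep in mind that the stall counters are cumulative since the last instance restart, and the latency thresholds that matter are workload-dependent:

```sql
-- Average read/write latency per database file, since the last instance restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_latency_ms DESC;
```

Files that consistently show high average latency under load are your first candidates for solid-state storage.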

For example, if you have multiple OLTP databases on the same instance of SQL Server, and they all have their log files on the same logical drive, that drive will be dealing with a highly random I/O workload; even though each individual log file is written sequentially, interleaving multiple log streams makes the combined workload random at the drive level. Moving those log files to solid-state storage could be an excellent way to improve your I/O performance. Another example might be where you have very heavy tempdb usage, and you are seeing very high read and write latency for your tempdb data files (as opposed to allocation contention from a single tempdb data file, which faster storage will not fix). This would be another case where moving your tempdb data files to solid-state storage could be very beneficial.
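To check whether tempdb pressure really is I/O latency rather than allocation contention, one rough sketch is to look at current waits. PAGEIOLATCH_% waits on tempdb files point to storage latency that faster drives can help with, while PAGELATCH_% waits on tempdb allocation pages (resource descriptions like 2:1:1, 2:1:2, or 2:1:3, i.e. the PFS, GAM, and SGAM pages) point to allocation contention, which is better addressed by adding tempdb data files:

```sql
-- Current latch waits on tempdb (database_id = 2) pages
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH%'       -- catches PAGELATCH_% and PAGEIOLATCH_%
  AND resource_description LIKE '2:%';   -- pages in tempdb
```

Run this while the workload is active; a snapshot on an idle system will return nothing useful.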

Erin and Jonathan will be covering this topic in more detail in their IE0: Immersion Event for the Accidental DBA class in September. I will also be discussing it in great detail in the IEHW: Immersion Event on SQL Server Hardware class in September. You can get a head start on all of this by watching my Pluralsight course SQL Server 2012: Evaluating and Sizing Hardware.