The post Samsung Portable SSD X5 Review appeared first on Glenn Berry.
From the exploded view in Figure 1, it appears that you might be able to disassemble the Samsung enclosure and swap in your own M.2 NVMe drive (which I am sure would void your warranty). This would let you put in any M.2 NVMe SSD that you wanted. I am not 100% sure this is possible though.
Figure 1: Exploded View of Samsung Portable SSD X5
You will also need a machine with a Thunderbolt 3 port, preferably with PCIe 3.0 x4 bandwidth, so that you get the full performance that the drive can deliver. Figure 2 shows the CrystalDiskMark results for this drive in my recent HP Spectre x360 13-ap0023dx laptop, which has a TB3 port with PCIe 3.0 x4 bandwidth.
Figure 2: 500GB Samsung Portable SSD X5 in TB3 PCIe 3.0 x4 port
With Windows 10 version 1809 or later, it is also very important to check the write-caching policy for the drive. The new default for external drives is Quick removal, which is safer, but disables write caching in Windows. If you want better write performance, enable write caching for the drive, as you see in Figure 3.
Figure 3: Windows 10 Write-Caching Policy
Another important factor is exactly what type of Thunderbolt 3 port and PCIe 3.0 interface you have in your laptop or desktop machine. I have a two-year-old Dell Precision 5520 laptop that only has a PCIe 3.0 x2 interface for its USB-C Thunderbolt 3 port. This effectively cuts your maximum sequential performance in half compared to a PCIe 3.0 x4 interface. You can see these results in Figure 4.
Figure 4: Performance Effect of PCIe 3.0 x4 Interface
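As a rough sketch of why the x2 link halves your throughput: PCIe 3.0 bandwidth scales linearly with lane count, since each lane signals at 8 GT/s with 128b/130b encoding. The numbers below are theoretical maximums (real-world throughput is somewhat lower), and the function is just an illustration:

```python
def pcie3_bandwidth_gbs(lanes):
    """Theoretical PCIe 3.0 bandwidth in GB/s for a given lane count.

    PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding,
    so each lane carries 8 * (128 / 130) / 8 ≈ 0.985 GB/s of payload.
    """
    return lanes * 8 * (128 / 130) / 8

# A TB3 port wired as PCIe 3.0 x4 vs. one wired as x2 (like the Precision 5520):
print(f"x4: {pcie3_bandwidth_gbs(4):.2f} GB/s")  # ≈ 3.94 GB/s
print(f"x2: {pcie3_bandwidth_gbs(2):.2f} GB/s")  # ≈ 1.97 GB/s
```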
Figure 5 shows the CrystalDiskMark results for a 1TB Samsung 970 EVO Plus M.2 NVMe drive in my HP Spectre x360 laptop. That drive is an incredible value right now, giving great performance for less than $250.00. NAND flash SSD prices have been in steep decline over the past year. I vividly remember paying $620.00 for a 1TB Samsung 960 PRO M.2 NVMe drive in November 2017.
Figure 5: 1TB Samsung 970 EVO Plus M.2 NVMe SSD in HP Spectre x360
Figure 6: Samsung Portable SSD X5
This drive is still somewhat pricey, and it does get warm under a heavy load, which happens with all M.2 drives. The built-in heatsink in the enclosure should help with that, compared to an M.2 drive inside a laptop.
Still, if you want TB3-level performance from an external drive, and you have a new enough machine to support it, it is a nice solution.
The post Initial CrystalDiskMark Results for Intel Optane 900p appeared first on Glenn Berry.
I also have a couple of 1TB Samsung 960 PRO M.2 NVMe cards in this machine, so I thought I would run a couple of quick CrystalDiskMark tests on the two drives. One thing to keep in mind is that CrystalDiskMark is not the best synthetic benchmark to use to show off the strengths of the Optane 900p.
Traditional NAND-based SSDs excel at very high queue depths that are not usually encountered outside of synthetic benchmarks (especially for random read performance). Optane 900p SSDs perform extremely well for random reads at low queue depths. This gives you outstanding responsiveness and performance where it is going to be most noticeable in daily usage.
You can see part of this effect in the bottom row of CDM test results for reads, where the Optane 900p is doing about 4.3X more 4K IOPS than the Samsung 960 PRO at a queue depth of 1. A better test for this will be Microsoft DiskSpd, which can also measure the latency during the test run.
Here are some of the primary advantages of the Intel Optane 900p compared to current NAND flash storage.
Figure 1: 1TB Samsung 960 PRO with Samsung NVMe driver
Figure 2: 480GB Intel Optane 900p with Intel NVMe driver
The post Two Recent Laptops Compared appeared first on Glenn Berry.
The newer machine is an HP Spectre x360 13-w023dx, which has a 14nm Intel Core i7-7500U Kaby Lake-U processor, 16GB of RAM, a 512GB Samsung SM961 M.2 NVMe SSD, one USB 3.0 port, two USB-C Thunderbolt 3 ports, and a 1080p touch display.
The high-level processor specifications and CPU-Z benchmark results for these two systems are shown below:
| Processor | Base Clock | Turbo Clock | Single-threaded CPU | Multi-threaded CPU |
| --- | --- | --- | --- | --- |
| Intel Core i7-6500U | 2.5GHz | 3.1GHz | 1467 | 3391 |
| Intel Core i7-7500U | 2.7GHz | 3.5GHz | 1743 | 3958 |
These Skylake-U and Kaby Lake-U processors are quite similar, with the Kaby Lake having an optimized “14nm plus” process technology that lets Intel set the clock speeds slightly higher at the same power usage levels. Kaby Lake also has improved integrated graphics and an improved version of Intel Speed Shift technology that lets Windows 10 throttle up the clock speed of the processor cores even faster than with a Skylake processor.

Figure 1: Improved Intel Speed Shift in Kaby Lake
The single-threaded CPU-Z 1.78.1 benchmark result is 18.8% higher with the new system, while the multi-threaded CPU-Z benchmark result is 16.7% higher on the new system. I attribute this increase to the higher base and turbo clock speeds, the optimized process technology, and the effect of the improved Intel Speed Shift. The results are shown in Figures 2 and 3.
Figure 2: Intel Core i7-6500U CPU-Z Benchmark Results
Figure 3: Intel Core i7-7500U CPU-Z Benchmark Results
Honestly, these current generational CPU performance improvements are slightly better than nothing (but not much), and are certainly not a good enough reason to upgrade from an equivalent Skylake-U system to a Kaby Lake-U system. Where we see a big improvement is with basic storage performance and peripheral connectivity between these two systems.
I was happily surprised that the new HP system came with a very fast 512GB Samsung SM961 M.2 NVMe OEM SSD that is equivalent to a Samsung 960 PRO. I was surprised because some reviews I had read indicated that these HP machines had a much slower Samsung OEM M.2 NVMe SSD. This probably varies by when your machine was manufactured, so perhaps the earliest review machines had the older, slower drives.
As you can see, the difference in the CrystalDiskMark performance between these drives is pretty dramatic.
Figure 4: 512GB Samsung SM961 M.2 NVMe SSD
Figure 5: 512GB Samsung PM871 SATA 3 SSD
For day-to-day average PC usage, you probably won’t really notice the difference between a fast SATA 3 SSD and an M.2 PCIe NVMe SSD, but if you are using SQL Server on a laptop, the extra sequential bandwidth and much better random I/O performance are really noticeable. It is also very nice to have Thunderbolt 3 support, which allows really fast transfer performance to an appropriate external drive.
So the moral of all this is that, for many people, the best reason to consider upgrading to a new laptop or desktop machine is the additional storage and peripheral connectivity options that you can get with a new machine.
The post Some Quick Comparative CrystalDiskMark Results appeared first on Glenn Berry.
A few weeks ago, I built a new Intel Skylake desktop system that I am going to start using as my primary workstation in the near future. I described this system in Building a Z170 Desktop System with a Core i7-6700K Skylake Processor. By design, this system has several different types of storage devices, so I can take advantage of the extra PCIe bandwidth in the latest Intel Z170 Express chipset and do some comparative testing.
The latest addition to the storage family is a brand new 512GB Samsung 950 PRO M.2 PCIe NVMe card that just arrived from Amazon yesterday afternoon. As of now, here is the available storage in this system:
Since I have an NVidia GeForce GTX 960 video card in one of the PCIe 3.0 x16 slots, both that slot and the PCIe 3.0 x16 slot that the Intel 750 is using will go down to x8 (which means 8 lanes instead of 16 lanes). The Intel Z170 Express chipset supports 26 PCIe 3.0 lanes, so you need to think about which devices you are trying to use. This system has Windows 10 Professional installed, so it has native NVMe drivers available from Microsoft.
I did some quick and dirty I/O testing today with CrystalDiskMark 5.02. The two NVMe devices are both using the native Microsoft NVMe drivers from Windows 10. As you can see below, both the Samsung 950 PRO and the Intel 750 PCIe NVMe cards have tremendous sequential and random I/O performance!
| Device | Sequential Reads | Sequential Writes | Random Reads | Random Writes |
| --- | --- | --- | --- | --- |
| 512GB Samsung 950 Pro | 2595 MB/s | 1526 MB/s | 171755.6 IOPS | 104801.3 IOPS |
| 400GB Intel 750 | 2369 MB/s | 1081 MB/s | 177938.0 IOPS | 151642.1 IOPS |
| 512GB Samsung 850 Pro | 1104 MB/s | 532 MB/s | 100420.4 IOPS | 60765.1 IOPS |
| 6TB WD Red HD | 176 MB/s | 170 MB/s | 386.7 IOPS | 448.2 IOPS |
Table 1: Sequential and Random Results (Queue Depth 32, 1 Thread)
Keep in mind that the two Samsung 850 PRO SSDs are using hardware RAID1, which seems to help their sequential read performance, and that the two NVMe devices are both using the native Microsoft NVMe drivers, which may be hurting their performance somewhat.
Figure 1: 512GB Samsung 950 Pro M.2 PCIe NVMe Results
Figure 2: 400GB Intel 750 PCIe NVMe Results
Figure 3: 512GB Samsung 850 Pro SATA 3 (RAID 1) Results
Figure 4: 6TB Western Digital Red Results
The post Building a Z170 Desktop System with a Core i7-6700K Skylake Processor appeared first on Glenn Berry.
Some basic information about this system is shown in Figures 1, 2 and 3 below:
Figure 1: CPU-Z CPU Tab for Z77 Core i7-3770K System
Figure 2: CPU-Z Bench Tab for Z77 Core i7-3770K System
Figure 3: Geekbench 3.3.2 Scores for Z77 Core i7-3770K System
Even though this system is still pretty fast, I felt like I could do better in some areas, with a current generation Z170 chipset system with an Intel Core i7-6700K Skylake processor and 64GB of RAM. Last Saturday, I built this new system, and got Windows 10 Professional installed.
Here is the parts list for this system:
Initially, I’ll be using the Intel integrated graphics, but I may end up using an EVGA Geforce GTX 960 video card. But then again, I may not, since I want to reduce my power usage and have more PCIe lanes available for storage use.
I spent a couple of hours putting this system together, doing a pretty careful job with the cable management. When I had it ready to turn on for the first time (without putting the case sides on, which is always bad luck), I was rewarded with the CPU and case fans spinning, but no visible POST or video output at all. Luckily, the ASRock motherboard has a built-in LED diagnostic display, which was showing a code 55 error. Looking this up in the motherboard manual, I discovered that this was a memory-related issue. I removed two of the 16GB DDR4 RAM modules, and powered it back up, and this time I got a POST.
Going into the UEFI BIOS setup, I discovered that my ASRock Z170 Extreme 7+ motherboard had the initial 1.4 BIOS, while the latest version was 1.7. One of the fixes listed for version 1.7 is “improve DRAM compatibility”. I was able to flash the BIOS to 1.7 using the Instant Flash utility in the UEFI BIOS setup, and then I was able to use all four 16GB DDR4 RAM modules.
Next, I created a RAID 1 array with my two 512GB Samsung 850 Pro SSDs, using the Intel RAID controller that is built into the Z170 chipset. I made sure the Intel 750 was not installed yet, and then I used an old USB optical drive to install Windows 10 Professional onto the RAID 1 array. Windows 10 Professional installed default drivers for the dual Intel Gigabit NICs, so I was able to get on the internet and download and install all of the latest Windows 10 64-bit drivers for this motherboard from the ASRock web site. Then I used Windows and Microsoft Update to get Windows 10 fully patched.
Windows 10 recognized the Intel 750 using the default Microsoft NVMe drivers. I will benchmark using those drivers, and then compare the results to the native Intel NVMe drivers. So far, I have benchmarked the new system using CPU-Z and Geekbench 3.3.2. The basic information and scores for the new system are shown in Figures 4, 5, and 6 below:
Figure 4: CPU-Z CPU Tab for Z170 Core i7-6700K System
Figure 5: CPU-Z Bench Tab for Z170 Core i7-6700K System
Figure 6: Geekbench 3.3.2 Scores for Z170 Core i7-6700K System
Keep in mind that, beyond enabling XMP 2.1, I have not overclocked the new system yet. The new system is about 10-20% faster than the old system from a CPU and memory perspective, depending on which benchmark you choose. In some respects, this is disappointing, but the real advantage of the new system is having twice the RAM and a lot more potential I/O bandwidth with the Z170 Express chipset. With Windows 10 Professional, I have Hyper-V support (and the Core i7-6700K supports VT-x and VT-d), so I can run more VMs simultaneously. I also have two Intel Gigabit NICs, which I plan to use together with NIC teaming in Windows 10.
I plan on getting at least one of the upcoming 512GB Samsung 950 Pro M.2 NVMe cards (and this motherboard has three Ultra M.2 slots) when they are released in October/November, so I will have plenty of disk space and I/O performance for the VMs.
| System | CPU-Z Single Thread | CPU-Z Multi-Thread | Geekbench Single-Core |
| --- | --- | --- | --- |
| Core i7-3770K | 1573 | 5920 | 3680 |
| Core i7-6700K | 1711 | 6815 | 4404 |
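Out of curiosity, here is a quick sketch of how those generational gains work out in percentage terms, using the benchmark scores from the table above:

```python
# Benchmark scores for the old (Core i7-3770K) and new (Core i7-6700K) systems
old = {"CPU-Z single-thread": 1573, "CPU-Z multi-thread": 5920,
       "Geekbench single-core": 3680}
new = {"CPU-Z single-thread": 1711, "CPU-Z multi-thread": 6815,
       "Geekbench single-core": 4404}

for name in old:
    gain = (new[name] / old[name] - 1) * 100
    print(f"{name}: +{gain:.1f}%")
# CPU-Z single-thread: +8.8%
# CPU-Z multi-thread: +15.1%
# Geekbench single-core: +19.7%
```

That spread is where the "about 10-20% faster, depending on which benchmark you choose" figure comes from.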
The post New Intel Data Center SSDs appeared first on Glenn Berry.
The Intel SSD DC S3710 Series is a 2.5” form factor and comes in 200GB, 400GB, 800GB, and 1.2TB capacities. The Intel SSD DC S3610 Series comes in both 2.5” and 1.8” form factors, with the 2.5” version available in 200GB, 400GB, 480GB, 800GB, 1.2TB, and 1.6TB capacities, and the 1.8” version available in 200GB, 400GB, and 800GB capacities. Both of the new series use a high-endurance version of Intel’s 20nm MLC NAND with a SATA interface, and both have greater write performance than the previous models. The endurance rating for the DC S3710 is 10 drive writes per day for the length of the five-year warranty, while the DC S3610 is rated at 3 drive writes per day for five years.
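For a sense of what those endurance ratings mean in practice, here is a quick sketch that converts a drive-writes-per-day (DWPD) rating into total lifetime writes (the function name is mine, not Intel's):

```python
def lifetime_writes_tb(dwpd, capacity_gb, warranty_years=5):
    """Total writes (in TB) implied by a drive-writes-per-day
    endurance rating over the warranty period."""
    return dwpd * (capacity_gb / 1000) * 365 * warranty_years

# A 400GB DC S3710 at 10 DWPD vs. a 400GB DC S3610 at 3 DWPD:
print(f"{lifetime_writes_tb(10, 400):.0f} TB")  # 7300 TB
print(f"{lifetime_writes_tb(3, 400):.0f} TB")   # 2190 TB
```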
Intel quotes these performance figures for the DC S3710 Series:
Intel also quotes these performance figures for the 2.5-inch version of the DC S3610 Series:
The S3710 Series has better write performance, and higher write endurance compared to the S3610 Series. As always, the larger capacity models typically have better performance than the lower capacity models from the same series. These drives are supposedly available now, although I have not found them listed for sale anywhere just yet. Here is the suggested retail pricing from Intel:
These drives are a very attractive alternative to being price-gouged for internal flash-storage by your server vendor. I have had a number of customers use the older DC S3700 drives in new servers they have purchased, all with good results.
The post Getting the Best Performance From an Intel DC P3700 Flash Storage Card appeared first on Glenn Berry.
Initially, I was somewhat disappointed by the CrystalDiskMark results for this device, as shown in Figure 1. These results are not terrible, especially compared to most SANs or a single 6Gbps SAS/SATA SSD, but they were not nearly as good as I was expecting.
It turns out that Windows Server 2012 R2 has native NVMe support, with some generic, default drivers. These drivers let Windows recognize and use an NVMe device, but they do not give the best performance. Installing the native Intel drivers makes a huge difference in performance from these cards.
You will need to download and install the drivers first (which will require a reboot), and then you will want to download and install the Intel Solid State Drive Data Center Tool (which is a command-line only tool), so you can check out the card and update the firmware if necessary. The links for those two items are below:
Intel Solid-State Drive Data Center Family for PCIe Drivers
Intel Solid-State Drive Data Center Tool
You should also confirm that you are using the Windows High Performance Power Plan and that your BIOS is not using any power management settings that affect the voltage supplied to the PCIe slots in your server. Setting the BIOS power management to OS control or high performance is usually what you need to do, but check your server documentation.
Figure 1: CrystalDiskMark Results with Default Microsoft Driver
Here are the relevant results in text form:
Sequential Read : 682.778 MB/s
Sequential Write : 700.335 MB/s
Random Read 4KB (QD=32) : 381.311 MB/s [ 93093.6 IOPS]
Random Write 4KB (QD=32) : 282.259 MB/s [ 68910.9 IOPS]
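As a side note, CrystalDiskMark's bracketed IOPS figures are just the MB/s numbers divided by the 4 KiB block size. A quick sketch, which matches the default-driver figures above to within rounding:

```python
def mbs_to_iops(mb_per_s, block_bytes=4096):
    """Convert CrystalDiskMark's decimal MB/s figure into IOPS for a
    given block size (CDM's '4KB' tests use 4 KiB blocks)."""
    return mb_per_s * 1_000_000 / block_bytes

# The default-driver figures above, reconstructed from their MB/s values:
print(f"{mbs_to_iops(381.311):.1f} IOPS random read")   # ≈ 93093.5
print(f"{mbs_to_iops(282.259):.1f} IOPS random write")  # ≈ 68910.9
```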
After installing the native Intel drivers and updating the firmware, CrystalDiskMark looks much better! This is SAN-humbling performance from a single PCIe card that is relatively affordable.
Figure 2: CrystalDiskMark Results with Native Intel Driver
Here are the relevant results in text form:
Sequential Read : 1547.714 MB/s
Sequential Write : 2059.734 MB/s
Random Read 4KB (QD=32) : 646.816 MB/s [157914.2 IOPS]
Random Write 4KB (QD=32) : 419.740 MB/s [102475.6 IOPS]
This is a pretty dramatic difference in performance and it is another reason why database professionals should be paying attention to the details of their hardware and storage subsystem. Little details like this are easy to miss, and I have seen far too many busy server administrators not notice them.
The post Building an Intel Haswell Desktop System for SQL Server 2012 Development appeared first on Glenn Berry.
That means you will have to get a new motherboard to use a Haswell processor. You should get a Z87 chipset motherboard, which gives you a nice feature set. Make sure you get a motherboard that has four memory slots instead of two, so you can have 32GB of RAM. You also want to pay attention to how many total SATA III ports you are getting. The Z87 chipset natively supports six SATA III ports (with hardware RAID support), and many motherboards have an extra Marvell controller that supports either two or four more SATA III ports (with hardware RAID support). Having SATA III support is vital for modern SATA III solid state drives.
This system will have an Intel Core i7-4770K quad-core processor (plus hyper-threading), and 16GB of RAM (which can be expanded to 32GB of RAM for about $129.00 more). You could also back down to an Intel Core i5-4670K processor, which is a quad-core without hyper-threading. The Core i5 has a slightly lower base and Turbo clock speed and a smaller 6MB L3 cache compared to the Core i7, but it is $80.00 less at Micro Center. Both of these processors have Intel HD Graphics 4600 integrated graphics, which save you the cost and extra power usage of getting a discrete video card. You would probably have to spend about $100.00 to get a discrete video card that has better performance than the Intel HD Graphics 4600 integrated graphics, and I just don’t think you will need to do that for normal desktop usage.
This system will have better CPU and memory performance than many older production database servers (although you are limited to 32GB of RAM). Depending on what you want to do with this system, you may need or want to add additional SATA III SSDs or conventional hard drives. If you skip the optical drive, you can add seven more drives to this system before you run out of drive bays and SATA III ports. With enough fast SSDs, you may have better I/O performance (under a light load) than many production database servers. On the other hand, you won’t have redundant components (such as dual power supplies) like you would have with a rack-mounted database server. You will also have SATA consumer-level SSDs that cannot handle a heavy server workload with consistent performance as well as expensive enterprise-level SAS SSDs.
Below, I have links to the manufacturer information about each component, along with links to the components at Micro Center and NewEgg.
Motherboard Gigabyte GA-Z87X-UD4H Micro Center NewEgg
This Gigabyte motherboard has a total of eight SATA III ports between two separate controllers, with hardware RAID support (with no cache memory). In my experience, Gigabyte motherboards typically let you install server operating systems (including Windows Server 2008 R2 or Windows Server 2012) without any driver issues. Another alternative would be Windows 8 Professional with Hyper-V, so you can run Windows Server 2012 in VMs. Micro Center is currently selling these for an insanely low sale price of $114.99, plus you get $40.00 more off the motherboard when you buy it with an eligible processor. Update: This motherboard actually uses an Intel NIC, and Intel (in their infinite wisdom) does not allow you to install the NIC drivers on a server operating system, such as Windows Server 2012.
Processor Intel Core i7-4770K Micro Center NewEgg
This is the “top of the line” Haswell desktop processor, with an unlocked multiplier. It is also the main Core i7 processor that Micro Center carries and is eligible for their $40.00 motherboard/processor bundle discount. It does support VT-x with Extended Page Tables for hardware virtualization support, but it does not support VT-d for directed I/O with virtualization. If you are really concerned about VT-d, you can always get an Intel Core i7-4770 (which does have VT-d) from NewEgg or Micro Center. It will cost $309.99 at NewEgg or $249.99 at Micro Center, and you would not get the motherboard bundle discount from Micro Center. I would say that with a decent number of good SSDs, you are much less likely to have any I/O bottlenecks with virtualization.
Power Supply Corsair CX500M Micro Center NewEgg
This is a high-quality, modular power supply that can easily support a system like this. It has an 80 Plus Bronze efficiency rating, which is pretty good. A lower wattage power supply is more efficient at lower output levels than a higher wattage power supply, so a power supply like this will save you money over time and be less expensive to buy.
Case Fractal Design Define R4 Micro Center NewEgg
These Fractal Design cases get universally excellent reviews, and they are very easy to work on when you are building the system, with excellent cable management features. They are also very quiet, with sound deadening foam inside. This case has eight 2.5”/3.5” drive bays and two USB 3.0 ports on the top front of the case. It does not have any silly gaming features.
Memory Crucial Ballistix Sport 16GB DDR3-1600 Micro Center NewEgg
This is pretty decent memory that is eligible for the $10.00 bundle discount from Micro Center when you buy it with a motherboard. NewEgg’s price is actually a little cheaper on this one item. You could also spend more money on higher speed memory, which you may or may not notice much benefit from in real life.
System Drive 256GB Samsung 840 Pro SSD Micro Center NewEgg
This is one of the top consumer SSDs available right now, with lots of good reviews. I have bought a number of them, and they are very fast. They are also eligible for a $20.00 bundle discount from Micro Center when you buy them with a motherboard or processor.
Optical Drive 24X LG DVDRW OEM Micro Center NewEgg
I still like to have an optical drive, even though I rarely use it. If you have an external USB optical drive, you can use that to install the OS, or you could use a thumb drive.
As you can see below, if you are lucky enough to live near a Micro Center, you can save a significant amount of money by getting all of these components from Micro Center instead of NewEgg. You will have to pay sales tax at Micro Center, while you probably won’t at NewEgg. Most of the components (except the case) have free shipping from NewEgg.
| Item | Model | Micro Center Price | NewEgg Price |
| --- | --- | --- | --- |
| Motherboard | Gigabyte GA-Z87X-UD4H | 114.99 – 40.00 Bundle | 189.99 w/FS |
| Processor | Intel Core i7-4770K | 279.99 | 339.99 w/FS |
| Power Supply | Corsair CX500M | 59.99 – 10.00 MIR | 69.99 – 10.00 MIR |
| Case | Fractal Design Define R4 | 89.99 | 99.99 + 9.99 Ship |
| Memory | Crucial Ballistix Sport | 129.99 – 10.00 Bundle | 115.99 w/FS |
| System Drive | 256GB Samsung 840 Pro | 239.99 – 20.00 Bundle | 239.99 w/FS |
| Optical Drive | 24X LG DVDRW OEM | 15.99 | 17.99 w/FS |
| Total | | 850.93 | 1073.92 |
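As a quick arithmetic check, the totals in the last row do add up once the bundle discounts, mail-in rebates, and shipping are applied. A small sketch, with the items in the same order as the table above:

```python
# Micro Center column: sale prices minus bundle discounts and rebates
micro_center = [114.99 - 40.00,  # motherboard, bundle discount
                279.99,          # processor
                59.99 - 10.00,   # power supply, mail-in rebate
                89.99,           # case
                129.99 - 10.00,  # memory, bundle discount
                239.99 - 20.00,  # system drive, bundle discount
                15.99]           # optical drive

# NewEgg column: list prices, minus one rebate, plus case shipping
newegg = [189.99, 339.99, 69.99 - 10.00, 99.99 + 9.99,
          115.99, 239.99, 17.99]

print(f"Micro Center: {sum(micro_center):.2f}")  # 850.93
print(f"NewEgg:       {sum(newegg):.2f}")        # 1073.92
```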
I’ll also have a post up in the near future that talks about how to build an Intel Sandy Bridge-E or Intel Ivy Bridge-E system, which can have six cores (plus hyper-threading) and 64GB of RAM. One of those systems will be considerably more expensive, due to a more expensive motherboard, a more expensive processor, and more RAM.
The post The Accidental DBA (Day 3 of 30): Hardware Selection: Solid State Drives and Usage appeared first on Glenn Berry.
I usually get two questions whenever I talk about hardware at a SQL Server event. The first one is always about virtualization, while the second is usually about Solid State Drives (SSDs) and how they should be used with SQL Server. I am often asked which components of a SQL Server database should be moved to flash-based storage as it becomes more affordable. Unfortunately, the answer is that it depends on your workload, and on where (if anywhere) you are experiencing I/O bottlenecks in your system, whether it is on your SQL Server data files, log files, or tempdb files.
Traditional magnetic spinning storage (a hard drive) does relatively well with sequential read and write operations. A single 15K rpm Serial Attached SCSI (SAS) drive can do about 150-200MB/sec of sequential throughput. Where traditional hard drives have more issues is with random input/output operations, which are measured in Input/Output Operations per Second (IOPS). Since a traditional hard drive is an electro-mechanical device, with a moving actuator arm that has to move the drive heads over a spinning disk platter to find and then access the data you need, you are dealing with much higher latency than you see with solid-state storage, which has no moving parts. Because of this, a single 15K rpm SAS drive can only do about 150-200 IOPS.
In contrast, a single 6Gbps SATA or SAS solid-state drive can do about 550MB/sec for sequential throughput and about 100,000 IOPS for random read/write operations. If that is not impressive enough, there are flash-based, high-end PCI-E storage devices that can do up to 6GB/sec for sequential throughput and about 1,000,000 IOPS for random read/write operations. There are also more affordable flash-based, PCI-E storage devices that can do up to 2GB/sec for sequential throughput and about 200,000 IOPS for random read/write operations.
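To put some rough numbers on that random I/O gap, here is a small sketch of the effective throughput you get from purely random I/O at a given IOPS rating, using the 8KB SQL Server page size as the block size:

```python
def random_throughput_mbs(iops, block_kb=8):
    """Effective MB/s delivered by purely random I/O at a given IOPS,
    using the 8KB SQL Server page size as the default block size."""
    return iops * block_kb * 1024 / 1_000_000

# A 15K rpm SAS drive at ~200 random IOPS manages well under 2 MB/s,
# versus 150-200 MB/s for the same drive doing sequential I/O:
print(f"{random_throughput_mbs(200):.2f} MB/s")  # 1.64 MB/s
```

That roughly 100X gap between sequential and random throughput on a spinning disk is exactly what solid-state storage eliminates.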
Flash-based storage has become much more affordable and much more reliable over the past couple of years. There are some entry-level, Enterprise-class flash-based storage devices from Intel, such as the Intel DC S3700 line of SATA SSDs and the Intel 910 series PCI-E storage card line, that make it much more feasible to start moving more of your database infrastructure to solid-state storage.
Depending on your database size and your budget, it may make a lot of sense to move an entire user database (data files and log file) to solid-state storage, especially with a heavy OLTP workload. If you have multiple user databases running on a single instance of SQL Server, your I/O workload will become more randomized (with lots of random reads and writes), which means that you will see even more of a benefit from solid-state storage, which excels at random I/O operations.
If you don’t have enough space available to move all of your user database files to solid-state storage, you will need to be more selective about what types of files you move to solid-state storage. You will want to think about what type of I/O workload you have (which is related to your overall workload type), and which logical drives and which specific database files are seeing I/O bottlenecks.
For example, if you have multiple OLTP databases on the same instance of SQL Server, and they all have their log file on the same logical drive, that drive will be dealing with a highly random I/O workload. Moving those log files to solid-state storage could be an excellent solution to improve your I/O performance. Another example might be where you have very heavy tempdb usage, and you are seeing very high read and write latency for your tempdb data files (as opposed to allocation contention from a single tempdb data file). This would be another case where moving your tempdb data files to solid-state storage could be very beneficial.
Our online training (Pluralsight) courses that can help you with this topic:
The post A SQL Server Hardware Tidbit a Day – Day 19 appeared first on Glenn Berry.
RAID is a technology that allows the use of multiple hard drives, combined in various ways, to improve redundancy, availability and performance, depending on the RAID level used. When a RAID array is presented to a host in Windows, it is called a logical drive. Using RAID, the data is distributed across multiple disks in order to:
• Overcome the I/O bottleneck of a single disk
• Get protection from data loss through the redundant storage of data on multiple disks
• Avoid any one hard drive being a single point of failure
• Manage multiple drives more effectively
Regardless of whether you are using traditional magnetic hard drive storage or newer solid state storage technology, most database servers will employ some sort of RAID technology. RAID improves redundancy, improves performance, and makes it possible to have larger logical drives. RAID is used for both OLTP and DW workloads. Having more spindles in a RAID array helps both IOPS and throughput, although ultimately throughput can be limited by a RAID controller, HBA, NIC, or the PCI-E slot that is being used.
Keep in mind that while RAID does provide redundancy in your data storage, it is not a substitute for an effective backup strategy or a high availability/disaster recovery (HA/DR) strategy. Regardless of what level of RAID you use in your storage subsystem, you still need to run SQL Server full, differential, and log backups as necessary to meet your recovery point objective (RPO) and recovery time objective (RTO) goals.
There are a number of commercially-available RAID configurations, which I’ll review over the coming sections, and each has associated costs and benefits. When considering which level of RAID to use for different SQL Server components, you have to carefully consider your workload characteristics, keeping in mind your hardware budget. If cost is no object, I am going to want RAID 10 for everything, i.e. data files, log file, and tempdb. If my data is relatively static, I may be able to use RAID 5 for my data files. It is also fairly common to use RAID 5 for SQL Server backup files.
During the discussion, I will assume that you have a basic knowledge of how RAID works, and what the basic concepts of striping, mirroring, and parity mean.
RAID 0 simply stripes data across multiple physical disks. This allows reads and writes to happen simultaneously, across all of the striped disks, so offering improved read and write performance, compared to a single disk. However, it actually provides no redundancy whatsoever. If any disk in a RAID 0 array fails, the array is off-line and all of the data in the array is lost. This is actually more likely to happen than if you only have a single disk, since the probability of failure for any single disk goes up as you add more disks. There is no disk space loss for storing parity data (since there is no parity data with RAID 0), but I don’t recommend that you use RAID 0 for database use, unless you enjoy updating your resume! RAID 0 is often used by serious computer gaming enthusiasts in order to reduce the time it takes to load portions of their favorite games. They do not keep any important data on their “gaming rigs”, so they are not that concerned about losing one of their drives. Even this usage is declining over time as SSDs become more affordable.
You need at least two physical disks for RAID 1. Your data is mirrored between the two disks, i.e. the data on one disk is an exact mirror of that on the other disk. This provides redundancy, since you can lose one side of the mirror without the array going off-line and without any data loss, but at the cost of losing 50% of your space to the mirroring overhead. RAID 1 can improve read performance, but can hurt write performance in some cases, since the data has to be written twice.
On a database server, it is very common to install the Windows Server operating system on two of the internal drives, configured in a RAID 1 array, using an embedded internal RAID controller on the motherboard. In the case of a non-clustered database server, it is also common to install the SQL Server binaries on the same two-drive RAID 1 array as the operating system. This provides basic redundancy for both the operating system and the SQL Server binaries. If one of the drives in the RAID 1 array fails, you will not have any data loss or downtime. You will need to replace the failed drive and rebuild the mirror, but this is a pretty painless operation, especially compared to reinstalling the operating system and SQL Server!
RAID 5 is probably the most commonly used RAID level, for both general file server systems and for SQL Server. RAID 5 requires at least three physical disks. The data, along with calculated parity information, is striped across the physical disks by the RAID controller. This provides redundancy: if one of the disks goes down, the missing data from that disk can be reconstructed from the parity information on the other disks. Also, rather than losing 50% of your storage to achieve redundancy, as with disk mirroring, you only lose 1/N of your disk space (where N is the number of disks in the RAID 5 array) for storing the parity information. For example, if you had six disks in a RAID 5 array, you would lose 1/6th of your space to the parity information. As you add more disks to a RAID 5 array, the chance of losing any one of the disks goes up (due to simple statistics), so that is a reliability consideration for larger arrays.
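The parity trick behind that reconstruction is just XOR. Here is a minimal toy sketch (an illustration, not how a real controller is implemented) showing that XOR-ing the surviving data blocks with the parity block recovers the block from the failed disk:

```python
# Toy sketch of RAID 5 parity: parity = XOR of the data blocks in a stripe,
# so XOR-ing the survivors with the parity rebuilds the missing block.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data_disks = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
parity = xor_blocks(data_disks)            # what the controller stores as parity

# Simulate losing the second disk, then rebuild it from survivors + parity
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt)  # b'BBBB'
```

Because XOR is its own inverse, any single missing block in the stripe can be recomputed this way, which is exactly why losing a *second* disk is fatal: there is then more than one unknown per stripe.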
However, you will notice a very significant decrease in performance while you are missing a disk in a RAID 5 array, since the RAID controller has to work pretty hard to reconstruct the missing data. Furthermore, if you lose a second drive in your RAID 5 array, the array will go offline, and all of the data will be lost. As such, if you lose one drive, you need to make sure to replace the failed drive as soon as possible. RAID 6 stores more parity information than RAID 5, at the cost of an additional disk devoted to parity information, so you can survive losing a second disk in a RAID 6 array.
Finally, there is a write performance penalty with RAID 5, since there is overhead to write the data, and then to calculate and write the parity information. As such, RAID 5 is usually not a good choice for transaction log drives, where we need very high write performance. I would also not want to use RAID 5 for data files where I am changing more than 10% of the data each day. One good candidate for RAID 5 is your SQL Server backup files. You can still get pretty good backup performance with RAID 5 volumes, especially if you use backup compression and striped backups.
When you need the best possible write performance, you should consider either RAID 0+1 or, preferably, RAID 10. These two RAID levels both involve mirroring (so there is a 50% mirroring overhead) and striping, but they differ in how the mirroring and striping are combined.
In RAID 10 (striped set of mirrors), the data is first mirrored and then striped. In this configuration, it is possible to survive the loss of multiple drives in the array (one from each side of the mirror), while still leaving the system operational. Since RAID 10 is more fault tolerant than RAID 0+1, it is preferred for database usage.
In RAID 0+1 (mirrored pair of stripes), the data is first striped and then mirrored. In this configuration, the loss of a single drive takes its entire stripe set offline, so the array cannot survive a subsequent drive failure on the other side of the mirror.
RAID 10 and RAID 0+1 offer the highest read/write performance, but incur a roughly 100% storage cost penalty, which is why they are sometimes called “rich man’s RAID”. These RAID levels are most often used for OLTP workloads, for both data files and transaction log files. As a SQL Server database professional, you should always try to use RAID 10 if you have the hardware and budget to support it. On the other hand, if your data is less volatile, you may be able to get perfectly acceptable performance using RAID 5 for your data files. By “less volatile”, I mean if less than 10% of your data changes per day, then you may still get acceptable performance from RAID 5 for your data file(s).
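The capacity trade-offs discussed above can be summarized in a short sketch (my illustration, using the standard usable-capacity formulas for each level and an arbitrary six-disk array):

```python
# Sketch: usable capacity for the RAID levels discussed in this post,
# for num_disks identical disks of disk_size (any unit).

def usable_capacity(level: str, num_disks: int, disk_size: float) -> float:
    if level == "RAID 0":
        return num_disks * disk_size          # no redundancy overhead
    if level in ("RAID 1", "RAID 10"):
        return num_disks * disk_size / 2      # 50% lost to mirroring
    if level == "RAID 5":
        return (num_disks - 1) * disk_size    # one disk's worth of parity
    if level == "RAID 6":
        return (num_disks - 2) * disk_size    # two disks' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6"):
    print(f"{level}: {usable_capacity(level, 6, 1.0):.0f} of 6 disks usable")
```

This is the “rich man’s RAID” point in numbers: with six disks, RAID 10 gives you three disks of usable space where RAID 5 gives you five, and you are paying for that difference in exchange for better write performance and fault tolerance.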
The post A SQL Server Hardware Tidbit a Day – Day 19 appeared first on Glenn Berry.