Over the last month we've been teaching in Europe and I haven't had much time to focus on benchmarking, but I've finally finished the first set of tests and analyzed the results.

You can see my benchmarking hardware setup here, with the addition of the Fusion-io ioDrive Duo 640GB drives that Fusion-io were nice enough to lend me.

In this set of tests I wanted to check three things for a sequential write-only workload (i.e. no reads or updates):

  • Best file layout with the hardware I have available
  • Best way to format the SSDs
  • Whether SSDs give a significant performance gain over SCSI storage

The Fusion-io SSDs can be formatted four ways:

  • Regular Windows format
  • Fusion-io's format
  • Fusion-io's improved write performance format
  • Fusion-io's maximum write performance format

I'm using one of the 640GB SSDs in my server, which presents itself as two 320GB drives that I can use individually or tie together in a RAID array. The actual capacity varies depending on how the drives are formatted:

  • With the Windows and normal Fusion-io format, each of the 320GB drives has 300GB capacity
  • With the improved write performance format, each of the 320GB drives has only 210GB capacity, 70% of normal
  • With the maximum write performance format, each of the 320GB drives has only 151GB capacity, 50% of normal

In my tests, I want to determine whether the loss in capacity is worth it in terms of a performance gain. The SSD format is performed using Fusion-io's ioManager tool, with their latest publicly-released driver (1.2.7.1).

My tests involve 16 connections to the server, each running server-side code to insert 6.25GB (100GB in total) into a table with a clustered index, one row per page. The database is 160GB with a variety of file layouts:

  • 1 x 160GB file
  • 2 x 80GB files
  • 4 x 40GB files
  • 8 x 20GB files
  • 16 x 10GB files

These file sizes drop to 128GB/64GB/32GB/etc. (a 128GB database) when using a single 320GB drive with the maximum write performance format, as that format leaves only 151GB of capacity. The log file is pre-created at 8GB and does not need to grow during the test.
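
To make that concrete, here's a minimal T-SQL sketch of one of the layouts and the insert workload. The database name, file paths, and row counts are hypothetical placeholders rather than my actual test scripts – the point is simply a multi-file database, a pre-sized 8GB log, and a clustered index row wide enough that only one row fits per 8KB page:

-- Hypothetical sketch: 4 x 40GB data files plus an 8GB pre-sized log.
-- Names and paths are placeholders, not the actual test scripts.
CREATE DATABASE [InsertTest]
ON PRIMARY
    (NAME = N'InsertTest_Data1', FILENAME = N'D:\Data\InsertTest_Data1.mdf', SIZE = 40GB),
    (NAME = N'InsertTest_Data2', FILENAME = N'D:\Data\InsertTest_Data2.ndf', SIZE = 40GB),
    (NAME = N'InsertTest_Data3', FILENAME = N'D:\Data\InsertTest_Data3.ndf', SIZE = 40GB),
    (NAME = N'InsertTest_Data4', FILENAME = N'D:\Data\InsertTest_Data4.ndf', SIZE = 40GB)
LOG ON
    (NAME = N'InsertTest_Log', FILENAME = N'L:\Log\InsertTest_Log.ldf', SIZE = 8GB);
GO

USE [InsertTest];
GO

-- A fixed-width filler column large enough that only one row fits on
-- each 8KB page of the clustered index.
CREATE TABLE dbo.InsertTarget (
    RowID  BIGINT IDENTITY NOT NULL PRIMARY KEY CLUSTERED,
    Filler CHAR(7000) NOT NULL DEFAULT ('a')
);
GO

-- Each of the 16 connections runs a loop like this, inserting its
-- ~6.25GB share of rows (roughly 800,000 rows at one ~8KB row per page).
SET NOCOUNT ON;
DECLARE @i INT = 0;
WHILE @i < 800000
BEGIN
    INSERT INTO dbo.InsertTarget DEFAULT VALUES;
    SET @i = @i + 1;
END;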

I tested each of the five data file layouts on the following configurations (all using 1MB partition offsets, 64k NTFS allocation unit size, 128k RAID stripe size – where applicable):

  • Data on RAID-10 SCSI (8 x 300GB 15k), log on RAID-10 SATA (8 x 1TB 7.2k)
  • Data round-robin between two RAID-10 SCSI (each with 4 x 300GB 15k and one server NIC), log on RAID-10 SATA (8 x 1TB 7.2k)
  • Data on two 320GB SSDs in RAID-0 (each of the 4 ways of formatting), log on RAID-10 SATA (8 x 1TB 7.2k)
  • Log and data on two 320GB SSDs in RAID-0 (each of the 4 ways of formatting)
  • Log and data on single 320GB SSD (each of the 4 ways of formatting)
  • Log and data on separate 320GB SSDs (each of the 4 ways of formatting)
  • Log and data round-robin between two 320GB SSDs (each of the 4 ways of formatting)

That's a total of 22 configurations, with 5 data file layouts in each configuration – making 110 separate test combinations. I ran each test 5 times and then took an average of the results – so altogether I ran 550 tests, for a cumulative test time of just under 1.1 million seconds (12.7 days) over the last 4 weeks.

And yes, I do have a test harness that automates a lot of this so I only had to reconfigure things 22 times manually. And no, for these tests I didn't have wait stats being captured. I've upgraded the test harness and now it captures wait stats for each test – that'll come in my next post.
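
If you're curious what that wait-stats capture looks like, the usual pattern (this is just a sketch of the idea, not the actual harness code) is to snapshot sys.dm_os_wait_stats before the test and diff it against a second snapshot afterwards:

-- Sketch only: snapshot the cumulative wait stats before the test...
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO #WaitsBefore
FROM sys.dm_os_wait_stats;

-- ...run the test workload...

-- ...then diff against a second snapshot to see what the test waited on.
SELECT
    w.wait_type,
    w.waiting_tasks_count - b.waiting_tasks_count AS waiting_tasks_delta,
    w.wait_time_ms        - b.wait_time_ms        AS wait_time_ms_delta,
    w.signal_wait_time_ms - b.signal_wait_time_ms AS signal_wait_time_ms_delta
FROM sys.dm_os_wait_stats AS w
JOIN #WaitsBefore AS b ON b.wait_type = w.wait_type
WHERE w.wait_time_ms - b.wait_time_ms > 0
ORDER BY wait_time_ms_delta DESC;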

On to the results… bear in mind that these results are testing a 100GB sequential insert-only workload and are not using the full size of the disks involved!!!

Data on SCSI RAID-10, log on SATA RAID-10

[Chart: test results for each data file layout on the SCSI RAID-10 configuration]

I already blogged about these tests here last week. They prove that for this particular workload, using multiple data files on the same RAID array does give a performance boost – albeit only 6%.

The best performance I could get from the SCSI/SATA configurations was completing the test in 1755 seconds.

Data on 640GB RAID-0 SSDs (log on the SSDs or on SATA RAID-10)

[Chart: test results for the 640GB RAID-0 SSD configurations]

Performance was almost identical whether the log file was on SATA or on the SSDs, so I'm only including one graph, in the interest of making this post a little shorter.

These results clearly show that the SSDs have to be formatted correctly to get any performance out of them. The SSDs performed the same for all data file configurations until the number of data files hit 16, when performance almost doubled. I tested 32 and 64 files and didn't get any further increase. My guess here is that with that many files, when checkpoints or lazywrites occurred, the behavior was as if I was doing a random-write workload rather than a sequential-write workload.
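
One way to sanity-check that guess (just a sketch using the standard Buffer Manager counters, not something from my harness) is to look at the checkpoint and lazy writer page-write counters around a test run and see which mechanism is doing the writes:

-- Sketch only: these counters are cumulative, so sample them before and
-- after a test run and compare the deltas.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN (N'Checkpoint pages/sec', N'Lazy writes/sec', N'Page writes/sec');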

The best performance I could get here was with 16 files and the maximum-write format, when the test completed in 934 seconds – 1.88x faster than the best SCSI time. That's only 13 seconds faster than the normal format, which gives 100% more capacity.

Data and log on single 320GB SSD

[Chart: test results for data and log on a single 320GB SSD]

Here the performance truly sucked when the SSD wasn't formatted correctly. Once it was, the performance was roughly the same for 1, 2, or 4 files, but degraded by almost 50% with normal formatting for 8 or 16 files. With improved-write and maximum-write formatting, the performance was the same as for the 640GB RAID-0 SSD array, but the sharp performance increase with 16 files only happened with the maximum-write formatting.

Data and log on separate 320GB SSDs

[Chart: test results for data and log on separate 320GB SSDs]

No major difference here – the same characteristics as before when formatted correctly, with the best performance coming from maximum-write formatting and 16 data files.

This configuration gave the best overall performance – 909 seconds – 1.93x faster than the best performance from the SCSI storage.

Data and log round-robin between separate 320GB SSDs

[Chart: test results for data and log round-robin between two 320GB SSDs]

No major differences from the previous configuration.

Best-case performance for each number of data files

[Charts: best-case performance for 1, 2, 4, 8, and 16 data files]

Clearly the SSDs outperform the SCSI storage for these tests, but not by very much. The improvement factor varied by the number of data files:

  • 1: SSD was 1.11x faster than SCSI
  • 2: SSD was 1.09x faster than SCSI
  • 4: SSD was 1.06x faster than SCSI
  • 8: SSD was 1.04x faster than SCSI
  • 16: SSD was 2.03x faster than SCSI

The configuration of 16 data files on one SSD and the log on the other SSD, with maximum-write format for both, was the best overall performer, beating the best SCSI configuration (8 data files) by a factor of 1.93.

Summary

Reminder: this test was 100GB of sequential inserts with no reads or updates (i.e. no random IO). It is very important to consider the limited scenario being tested and to draw appropriate conclusions.

Several things are clear from these tests:

  1. The Fusion-io SSDs do not perform well unless they are formatted with Fusion-io's tool, which takes seconds and is very easy. I don't see this as a downside at all, and it makes sense to me.
  2. For sequential write-only IO workloads, the improved-write and maximum-write SSD formats do not produce a performance gain and so the loss in storage capacity (30% and 50% respectively) is not worth it.
  3. For sequential write-only IO workloads, the SSDs do not provide a substantial gain over SCSI storage (which is not overloaded).

All three of these results were things I'd heard anecdotally and experienced in ad-hoc tests, but now I have the empirical evidence to be able to state them publicly (and now so do you!).

These tests back up the assertion I've heard over and over that sequential write-only IO workloads are not the best use-case for SSDs.

One other very interesting result came from these tests – moving to 16 data files changed the characteristics of the test to a more random write-only IO workload, and so the maximum-write format produced a massive performance boost – almost twice the performance of the SCSI storage!

The next set of tests is running right now – 64GB of inserts into a clustered index with a GUID key – random reads and writes in a big way. Early results show the SSDs are *hammering* the performance of the SCSI storage – more in a week or so!
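
For a rough idea of what that next workload looks like (again just a hypothetical sketch, not the actual test table), the key difference is the random GUID clustered key, which scatters the inserts across the whole index instead of appending to the end:

-- Hypothetical sketch: a random GUID clustered key turns the insert
-- pattern into random IO, since each new row lands on a random page.
CREATE TABLE dbo.GuidInsertTarget (
    RowID  UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    Filler CHAR(1000) NOT NULL DEFAULT ('a')
);
GO

-- Each connection repeatedly runs:
INSERT INTO dbo.GuidInsertTarget DEFAULT VALUES;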

Hope you find these results useful and thanks for reading!