New hardware to play with: Fusion-io SSDs

Christmas comes but once a year… really? Then mine just came early on this afternoon's UPS truck.

The very nice folks at Fusion-io just sent me two of their fully-loaded top-of-the-line ioDrive Duos with 640GB of solid-state flash memory in each. This is really extra-nice of them because on Dell's Small Business website they're currently retailing for $12800 *each*. Expensive? Yes. Worth it? That's what I'm hoping to prove.

There's nothing like expensive, pretty hardware to get me excited… here's what they look like:

Now, above I said 'expensive', and these are, but they pack some pretty amazing specs in terms of read/write bandwidth, so you're getting a lot of bang for your buck. But where does it really make sense to drop the bucks for the biggest bangs? To answer that I'm planning a whole series of blog posts as part of my benchmarking efforts, investigating which operations benefit the most from these drives. With 1.2TB of SSD storage I'll be able to plug these into one of my test systems here and run comparisons against 15k SCSI and 7.2k SATA drives.

Anyway, there's a lot of hype about the speed of SSDs, and also a lot of angst about SSDs not being Enterprise-ready. I don't agree that they're not Enterprise-ready – in fact, fellow MVP Greg Linwood, who runs (among other things) our partner company SQLskills Australia, already has a bunch of customers with Fusion-io drives deployed successfully in their enterprises. As with any critical hardware infrastructure (especially cutting-edge stuff like this), the key to success is having everything set up correctly, so I'll be blogging about all my experiences with them.

To summarize, I'm very excited! I've been wanting to get my hands on some serious SSD hardware for a couple of years now so I can do some *real* testing – it doesn't get better than this!

Shoot me an email or leave a comment if there's something you're interested in seeing tested.

PS Full disclosure: yes, of course Fusion-io sent me these because they're getting publicity from me blogging about them, but we don't have any editorial/veto agreement. I want to be able to recommend these to our enterprise clients and the only way to honestly do that is to play with them myself – so it's a win-win for both of us. And you guys get to test them vicariously through me, so it's a win-win for you too :-)

Stay tuned…

16 thoughts on “New hardware to play with: Fusion-io SSDs”

  1. Ah, I’d be curious to see how much better performance you can get with these babies when setting up tempdb on them: I have several customers with tons of queries spilling out to tempdb because of some crazy ORDER BYs on 5-million-row datasets and I’m suspecting (hoping?) that this could help them…
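    A minimal T-SQL sketch of how you might confirm those spills before throwing hardware at them (nothing here is specific to Fusion-io; these are the standard DMVs, and large internal-object allocations are where sort/hash spill space shows up):

    ```sql
    -- tempdb pages allocated by currently running requests;
    -- big internal_objects_alloc_page_count values usually mean sort/hash spills
    SELECT  r.session_id,
            r.command,
            t.user_objects_alloc_page_count,
            t.internal_objects_alloc_page_count
    FROM    sys.dm_db_task_space_usage AS t
    JOIN    sys.dm_exec_requests AS r
        ON  r.session_id = t.session_id
        AND r.request_id = t.request_id
    ORDER BY t.internal_objects_alloc_page_count DESC;
    ```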

  2. After your presentation at the NOVA SSUG, two gentlemen from Fusion IO provided some interesting statistics on these drives. However, there wasn’t a discussion on MTBF or MTTF. :( Since these are expensive and out of the price range for many small businesses, would you recommend the following:

    Use two of these in a RAID 1 configuration on an HP DL380 G5 to store tempdb, the transaction log and/or clustered indices on them. The SAN would be used for storing the mdf and/or transaction log (if not placed on the Fusion-io drives).
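    For reference, moving tempdb onto such a volume is a one-off file relocation. A minimal sketch, assuming the mirrored pair is presented as drive F: and the default logical file names are in use (the change takes effect after a SQL Server restart):

    ```sql
    -- repoint tempdb's data and log files at the Fusion-io volume (hypothetical F:\ path)
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'F:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'F:\TempDB\templog.ldf');
    ```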

  3. Really exciting!
    It would be interesting to compare performance between two scenarios: 1. data pinned in RAM; 2. data on SSD.

  4. I really would like to see a valid benchmark with using one of the SSDs for the transaction log against a good SAN — both for a busy OLTP database and against a heavily used data warehouse (not analysis services, but just a relational DW).

  5. The TPC test has 144GB RAM for a 100GB DB test, which isn’t very interesting as far as measuring the benefits from SSD is concerned. The real value in SSDs/FusionIO with SQL Server workloads is providing fast access to *uncached* data, where their random, synchronous I/O capacity greatly exceeds that of HDDs.

  6. I think the concern about enterprise readiness has to do with how long the media can actually last. The current technology can only sustain a limited number of writes. Richard Campbell pointed out to me that the warranty on these things is only one year, whereas traditional media has a five year warranty. I think he makes a valid point; although enterprises do need to be prepared to swap out failed hardware on a somewhat regular basis, there is a very real risk that putting one of these in could seriously accelerate the cycle.

    That said, I have customers using these things–primarily for tempdb–and they do work quite well. The only real issue aside from the potential increase in failure rate is clustered servers, which require being "hacked" a bit into an unsupported configuration in order to play nicely with anything not on the SAN.

  7. I saw that these devices are configurable as to their write performance strategy. It seems you can trade available space for increased speed. Please test the different configurations.

    We are looking at these devices for implementing a TempDB volume which is a frequent bottleneck in a 3rd party application.

  8. I wish these guys would make a smaller and more affordable version of this. Something in the 100-200GB range would be perfect for many of my customers. Our application does lots of small random writes to SQL Server. Customers love to put that on their RAID 5 SAN and then wonder why it performs so poorly.

  9. I tested these cards a while back. HP OEMs them as IO Accelerator cards. At the time, the drivers seemed a bit unstable when the card was used in a blade server. I experienced a few crashes. The IO speed was great and I’d love to compare my observations with Paul’s when he completes the test, but I had the same issues as Adam M pointed out: 1) no/little available data as far as the longevity of these cards is concerned; 2) they are extremely difficult to use in clustered environments – at least with Windows Server. If anyone has tried using them with a different OS, please post your findings.

    The folks at Fusion IO did mention that some clients put several of these cards in one server and use 10GbE or InfiniBand mixed with iSCSI to essentially set up a mini-SAN. In this setup, there is no ‘hacking’ required to properly cluster your Windows server, but this is one expensive setup.

  10. Michael,

    Check out http://www.fusionio.com. They have several different models, one of them starting at 80GB in size. Paul R is testing the Duo model, but I believe you can get the "standard" model (the one I tested) in much smaller sizes.

  11. I’m curious if they (Fusion-io) have solved this problem:

    http://images.anandtech.com/reviews/tradeshows/CES/2010/Micron-RealSSD-C300/writesaturation.jpg

    This is a great report (it doesn’t matter who they tested) that shows sustained performance over time. It seems the "hard stuff" happens when the drive is full and has to service application requests and perform its own "cleanup" at the same time.

    It’s not impossible, but all SSDs I’ve seen have huge latency spikes when trying to operate over the long term – that kills the application performance we were looking to accelerate.

  12. I saw some questions on the usage models and many users here already have the right idea. I just wanted to take this opportunity to re-cap some of that.

    Depending on the database size, the most performance gain can be had by putting the entire database on the Fusion-io drives. There are off-the-shelf servers that can accommodate up to 4TB worth of Fusion-io storage. However, towards the goal of a more scalable architecture, as well as keeping the SAN investment useful, the following components can be put on Fusion-io storage:

    1. Tempdb
    2. Put indexes on a separate filegroup and then put the filegroup on Fusion-io (see the sketch after this list)
    3. Put frequently accessed tables on a separate filegroup and then put the filegroup on Fusion-io
    4. In some cases, put very large tables (like session tables) on Fusion-io
    5. When partitioned tables are used, the more relevant data can be put on Fusion-io
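    A minimal sketch of items 2 and 3 above – the database name, path, and index are hypothetical; the pattern is simply "add a filegroup backed by a file on the Fusion-io volume, then rebuild the object onto it":

    ```sql
    -- add a filegroup backed by a file on the Fusion-io volume (assumed F:\)
    ALTER DATABASE MyDatabase ADD FILEGROUP FusionIO_FG;
    ALTER DATABASE MyDatabase
        ADD FILE (NAME = N'FusionIO_Data1',
                  FILENAME = N'F:\Data\FusionIO_Data1.ndf',
                  SIZE = 50GB)
        TO FILEGROUP FusionIO_FG;

    -- rebuild an existing, frequently used nonclustered index onto the new filegroup
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        WITH (DROP_EXISTING = ON)
        ON FusionIO_FG;
    ```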

    On the topic of high-availability:

    This technology puts a new perspective on where to put storage (actually it is a bit retro if you think hard enough) to drive high performance for your SQL Server. Clearly, this is an in-server storage model, so high-availability methods that don’t rely on shared storage make sense. Some of the ones I promote are:
    1. SQL Server Database Mirroring – Wine.com uses it in synchronous mode (a minimal sketch follows this list).
    2. A SQL Server cluster built on Windows Server 2008 R2 multi-site clustering. This definitely requires third-party software to keep the data in sync between the two cluster nodes, but it is a viable solution.
    3. An innovative new solution called Gridscale from xkoto. This truly load-balances SQL Server nodes with a gatekeeper appliance.
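    A minimal sketch of option 1, assuming the mirroring endpoints already exist on both instances and the database has been restored WITH NORECOVERY on the mirror (server and database names are hypothetical):

    ```sql
    -- on the mirror instance
    ALTER DATABASE MyDatabase SET PARTNER = N'TCP://principal.example.com:5022';
    -- on the principal instance
    ALTER DATABASE MyDatabase SET PARTNER = N'TCP://mirror.example.com:5022';
    -- run synchronously, as in the Wine.com deployment mentioned above
    ALTER DATABASE MyDatabase SET PARTNER SAFETY FULL;
    ```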

    I am including a link to the wine.com case study, which involved SQL Server and provides architectural details and the sort of performance improvement that is possible.

    http://community.fusionio.com/cfs-filesystemfile.ashx/__key/CommunityServer.Components.PostAttachments/00.00.00.01.81/Wine_5F00_Dot_5F00_Com_5F00_Case_5F00_Study.pdf

    Thanks.

    sumeet@fusionio.com
    Principal Solutions Architect, Fusion-io
