THAT'S NOT THE POINT!!!

The simple point is that bad (or sloppy/lazy) design cannot be tuned. If you think that data type choice, nullability, and keys don't really matter, you won't scale – and it's possible that you'll completely fail because of this. Have you ever heard (or possibly said?), "let's just get this done – we'll worry about performance later"? If you haven't heard it, I'm surprised! I hear this all the time…

Yesterday I gave a lecture at SQLPASS about GUIDs. It wasn't the most well-attended session (under 200 people) but I suspect that's because of two things: first, our good friend Bob Ward was speaking at the same time (and there were actually a bunch of really good sessions!) AND the simple fact that GUIDs aren't sexy (I agree!). Also, I think a few folks may have presumed that what I was going to talk about (maybe even solely talk about?) was fragmentation. And, while fragmentation is the most outwardly visible problem with GUIDs – it's by NO MEANS the only problem. So I thought I'd blog a few things to think about/remember when trying to design/architect a SQL Server system. Clearly there's got to be a balance between the amount of time you're going to spend on design vs. just "getting it done" but that doesn't mean that NOTHING MATTERS or that you can just do anything with a table and "SQL Server will just 'handle' it." OK, I take that back – SQL Server won't have a choice other than to "just handle it" but some things it just CANNOT handle well. Performance and scalability will suffer and, again, your application may fail.

One of the resounding principles of my session (and most of my design talks in general ;-), is that design matters. In fact, in my summary, I said that 3 things really matter in terms of good SQL Server database design/architecture:

  1. Know your data - this helps you make the right decisions in terms of data types/nullability, and knowing your data's churn helps with long-term maintenance goals (and initial maintenance plans) 
  2. Know your workload - this helps you make the right decisions about locking/isolation, optimizing procedural code and indexing strategies (and these are the KEY to a FAST and scalable system)
  3. Know how SQL Server works  - this is the one that's often overlooked. Without information such as "the primary key is enforced by a clustered index and the clustering key is added to ALL nonclustered indexes" (see the sketch just below), you may inadvertently create a database that grows faster and larger than anticipated, where performance slows to a crawl and even management/maintenance becomes a [really HUGE] challenge.
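
To make that last point concrete, here's a minimal sketch (the table, column, and index names here are mine, not from the session) of what happens when a GUID PK is left as the default clustered index:

-- A hypothetical table: PRIMARY KEY defaults to CLUSTERED, so the
-- 16-byte GUID becomes the clustering key.
CREATE TABLE dbo.Orders
(
    OrderID    uniqueidentifier NOT NULL DEFAULT NEWID()
                   CONSTRAINT OrdersPK PRIMARY KEY,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL
);

-- Every nonclustered index row carries the clustering key as its row
-- locator. Here that means each IX_Orders_CustomerID row is the 4-byte
-- CustomerID PLUS the 16-byte GUID – roughly 5x wider than the key you
-- actually asked to index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);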

So, while I could go on for ages here I just want to expand on that last point: Know how SQL Server works. Specifically, I want to tie together the PK -> CL Key -> NC Indexes along with the "disk space is cheap" mantra that I also hear ALL THE TIME.

OK – so let's break this down a bit… No matter what your clustered index is – it should be narrow. I do not choose my clustered index for range queries and my choice for the clustering key is NEVER accidental.

Why? Because it has a HUGE impact on overall performance. To prove the point (and this was the concept around which my session focused), I created 3 different versions of the SAME "Sales" database. I wanted to show ALL of the impacts of a poorly chosen key – both as the CL key and simply as a size issue. A GUID is only 12 more bytes than an int, right? What harm can it cause… just wait!

So – to start, I loaded all three databases with roughly 6.7 million rows… and I made sure everything was clean and contiguous so that I'd have the same starting point for all of the tables. I actually strategically started things in one filegroup and then moved them over to another filegroup with 2 files so that I could also get some benefits from having multiple files (see Paul's excellent post on why a read-write filegroup should generally have 2-4 files here: Benchmarking: do multiple data files make a difference?). So, at the initial start I have three databases:

SalesDBInts (initial size with Sales at 6.7 million rows = 334MB):

  • Customers – has an ever-increasing identity (int) PK (4 bytes)
  • Employees – has an ever-increasing identity (int) PK (4 bytes)
  • Products – has an ever-increasing identity (int) PK  (4 bytes)
  • Sales – has an ever-increasing identity (int) PK and FKs to Customers, Employees and Products (row size = 27 bytes)

SalesDBGUIDs (initial size with Sales at 6.7 million rows = 1000MB):

  • Customers – has a randomly generated (using the NEWID() function) GUID PK (16 bytes)
  • Employees – has a randomly generated (using the NEWID() function) GUID PK (16 bytes)
  • Products – has a randomly generated (using the NEWID() function) GUID PK (16 bytes)
  • Sales – has a randomly generated (using the NEWID() function) GUID PK (16 bytes) and FKs to Customers, Employees and Products (row size 75 bytes)

SalesDBSeqGUIDs (initial size with Sales at 6.7 million rows = 961MB):

  • Customers – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes)
  • Employees – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes)
  • Products – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes)
  • Sales – has a sequentially generated (using the NEWSEQUENTIALID() function) GUID PK (16 bytes) and FKs to Customers, Employees and Products (row size 75 bytes)
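
In case it helps to see it, the three key patterns look roughly like this (a sketch – only the key column is shown; the real demo tables have all of the other columns and the FKs too):

-- SalesDBInts: ever-increasing, 4-byte identity
CREATE TABLE dbo.Sales
(SalesID int IDENTITY NOT NULL CONSTRAINT SalesPK PRIMARY KEY);

-- SalesDBGUIDs: random, 16-byte GUIDs
CREATE TABLE dbo.Sales
(SalesID uniqueidentifier NOT NULL DEFAULT NEWID()
     CONSTRAINT SalesPK PRIMARY KEY);

-- SalesDBSeqGUIDs: ever-increasing, 16-byte GUIDs
-- (NEWSEQUENTIALID() is only valid as a column default)
CREATE TABLE dbo.Sales
(SalesID uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID()
     CONSTRAINT SalesPK PRIMARY KEY);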

OK, so here's where the session really starts… I run 10K inserts into the Sales table in each database (the insert loop is sketched just after these results) and then I check to see what happens:

  • 10K rows in SalesDBInts takes 00:17 seconds
  • 10K rows in SalesDBGUIDs takes 05:07 minutes
  • 10K rows in SalesDBSeqGUIDs takes 01:13 minutes
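
The loop itself is nothing fancy – something along these lines (a sketch: the column names and values here are placeholders; the real demo inserted realistic rows with valid FK values):

SET NOCOUNT ON;
DECLARE @i int = 1;
WHILE @i <= 10000
BEGIN
    -- placeholder values; the demo picked realistic existing FK values
    INSERT dbo.Sales (CustomerID, EmployeeID, ProductID, SalesAmount)
    VALUES (1, 1, 1, 10.00);
    SET @i += 1;
END;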

Those timings are already SCARY and should go down in the "Are you kidding me?" category, but I also have to add that the hardware and setup for these first few tests are highlighting a whole myriad of problems. First, I was running with a somewhat crummy setup – a dual-core laptop with only 3GB of memory and this database was on an external USB drive. Certainly not enterprise storage – but not an enterprise-sized database either. For the size of the db we should have been able to do better… wait, we did – with the int-based database things went really well. Only the other two really stunk, and the sequential-GUID-based database definitely fared better than the random one (of course – fragmentation, right?). And, yes, that's a part of it… but there's more. And, I thought… no, this can't be right. Let me try again… run 2:

  • 10K more rows in SalesDBInts takes 00:24 seconds
  • 10K more rows in SalesDBGUIDs takes 07:21 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:10 minutes

Well, that seems pretty consistent but wow – the random GUID db is really NOT faring very well… let's try it again:

  • 10K more rows in SalesDBInts takes 00:26 seconds
  • 10K more rows in SalesDBGUIDs takes 10:10 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:12 minutes

OK, so you have GOT to be wondering why things are going so horribly wrong? The fragmentation is leading to more page IOs, and those pages also have to be put in cache – so we're needing a larger and larger cache to handle our GUID database… none of this is good, and it means you need bigger machines and/or something else to help you out. With the ever-increasing patterns created by the other databases we're requiring fewer pages to be read and fewer pages to be cached – those databases are performing somewhat consistently…
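
If you want to see this for yourself, you can watch how much of the buffer pool each database is consuming – a quick sketch using a standard DMV query:

-- How much cache is each database using? (8KB pages converted to MB)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;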

OK – so what can we do… let's try FIRST dealing with the fragmentation. To keep it simple, I went to the Books Online for sys.dm_db_index_physical_stats – example D. Using sys.dm_db_index_physical_stats in a script to rebuild or reorganize indexes. This is pretty good but, since these databases had never seen a REBUILD (and definitely not a FILLFACTOR setting), I had to tweak the script slightly to include a generic 90 percent fill factor. Here's the line that I modified:

SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD WITH (FILLFACTOR = 90)'
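
For context, here's a condensed sketch of that Books Online example with my tweak applied (the real example D also REORGANIZEs indexes below 30 percent fragmentation; this condensed version just rebuilds everything over 10 percent):

DECLARE @schemaname nvarchar(258), @objectname nvarchar(258), @indexname nvarchar(258);
DECLARE @command nvarchar(4000);

DECLARE indexes_cur CURSOR FOR
    SELECT QUOTENAME(s.name), QUOTENAME(o.name), QUOTENAME(i.name)
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE ps.avg_fragmentation_in_percent > 10.0 AND ps.index_id > 0;

OPEN indexes_cur;
FETCH NEXT FROM indexes_cur INTO @schemaname, @objectname, @indexname;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- the modified line: always REBUILD, always with FILLFACTOR = 90
    SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.'
                 + @objectname + N' REBUILD WITH (FILLFACTOR = 90)';
    EXEC (@command);
    PRINT N'Executed: ' + @command;
    FETCH NEXT FROM indexes_cur INTO @schemaname, @objectname, @indexname;
END;
CLOSE indexes_cur;
DEALLOCATE indexes_cur;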

I ran this in ALL three databases but there wasn't much to do in any of them except for the GUID-based database:

SalesDBInts (5 seconds)
Executed: ALTER INDEX [ProductsPK] ON [dbo].[Products] REBUILD WITH (FILLFACTOR = 90)

SalesDBGUIDs (7:51 minutes)
Executed: ALTER INDEX [SalesPK] ON [dbo].[Sales] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [IX_SalesToProductsFK] ON [dbo].[Sales] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [CustomersPK] ON [dbo].[Customers] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [ProductsPK] ON [dbo].[Products] REBUILD WITH (FILLFACTOR = 90)

SalesDBSeqGUIDs (9 seconds)
Executed: ALTER INDEX [ProductsPK] ON [dbo].[Products] REBUILD WITH (FILLFACTOR = 90)
Executed: ALTER INDEX [CustomersPK] ON [dbo].[Customers] REBUILD WITH (FILLFACTOR = 90)

Then, I ran my inserts again…

  • 10K more rows in SalesDBInts takes 00:25 seconds
  • 10K more rows in SalesDBGUIDs takes 05:05 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:07 minutes

OK, that's better… and it will be more consistent because of the reduction in splits, which also helps to keep the table smaller and therefore requires both less disk space and less cache. But there's still absolutely NO comparison between the sequential and the random, eh? Maybe we need more disk space, faster drives and/or more memory… let's try killing our problem with iron (yes, an oldie but a goodie!). And, yes, this is going to help…
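
As an aside, if you want to watch the split activity itself, sys.dm_db_index_operational_stats is one way to do it (leaf_allocation_count includes the page allocations caused by splits) – a quick sketch:

-- Leaf-level page allocations per index since the index metadata was
-- last cached (splits show up in this counter):
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       i.name                     AS index_name,
       ios.leaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.indexes AS i
    ON i.object_id = ios.object_id AND i.index_id = ios.index_id
ORDER BY ios.leaf_allocation_count DESC;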

The first hardware change that I made was that I moved these to an internal SSD drive… and, ran my test again. Let's get rid of all the random IO problems. That's got to help, eh?

  • 10K more rows in SalesDBInts takes 00:04 seconds
  • 10K more rows in SalesDBGUIDs takes 01:15 minutes
  • 10K more rows in SalesDBSeqGUIDs takes 01:02 minutes

WOW – THAT's AWESOME… killing it with iron brings the random GUIDs VERY close to the speed of the sequential GUIDs because we're eliminating the penalty of the random IOs. This makes our backups faster, etc. but it still doesn't reduce the memory required, because random inserts still pull pages from all over the table into cache. And, if you have a very large table with a lot of historical data that wouldn't otherwise have needed to be brought into cache, this is a BIG problem – especially for much larger tables.

I had quite a bit more in my demos but this brings us to a really good point… what are our options and what should we consider? First and foremost, how much control do you have? Did you design/architect your system and, if so, how much work are you willing to put into it from here? Or is this an application over which you have no control? Let's take the obvious…

If this is an application over which you have no control then you really have only 2 options:

  1. MAINTENANCE (with a good FILLFACTOR)
  2. Consider killing it with iron, where the most obvious gains are going to be disk IOs (i.e. SSDs for the data portion) and memory…

If this is a system over which you do have control… then I'd suggest changing the CL key at a minimum. Then, I'd make sure you have good maintenance set up for your nonclustered indexes because those will most certainly be fragmented. Then, I'd slowly consider changing over your FKs to use the new CL key (an identity, ideally) and then maybe – eventually – you can remove those GUIDs altogether. But, this is NOT an easy thing to do…

If your CL key is a PK, then here are your steps (sketched in T-SQL after the list):

  1. Take the database offline (sorry, I’m just the messenger!!)
  2. Disable the FKs
  3. Disable the nonclustered indexes
  4. Drop the clustered PK (alter table)
  5. Optionally, add an identity column?
  6. Create the new clustered index
  7. Create the PK as nonclustered
  8. Enable the nonclustered indexes (alter index…rebuild)
  9. Enable the FKs with CHECK (this is very important)
  10. Bring the database online
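
Here's roughly what those steps look like in T-SQL (a sketch – every object/column name here is hypothetical, there's only one FK and one NC index shown, and you should absolutely test this on a copy first!):

-- Step 1: the "offline" part – make sure no one else is in (SINGLE_USER is one way)
ALTER DATABASE SalesDBGUIDs SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Step 2: disable the FKs (repeat for each one)
ALTER TABLE dbo.Sales NOCHECK CONSTRAINT SalesToCustomersFK;

-- Step 3: disable the NC indexes so they're not rebuilt when the CL index changes
ALTER INDEX IX_SalesToProductsFK ON dbo.Sales DISABLE;

-- Step 4: drop the clustered PK
ALTER TABLE dbo.Sales DROP CONSTRAINT SalesPK;

-- Step 5 (optional): add an identity to become the new clustering key
ALTER TABLE dbo.Sales ADD SalesID int IDENTITY NOT NULL;

-- Step 6: create the new (narrow, ever-increasing) clustered index
CREATE UNIQUE CLUSTERED INDEX SalesCL ON dbo.Sales (SalesID);

-- Step 7: re-create the PK on the GUID column – as NONCLUSTERED this time
ALTER TABLE dbo.Sales ADD CONSTRAINT SalesPK PRIMARY KEY NONCLUSTERED (SalesGUID);

-- Step 8: enable the NC indexes (a rebuild re-enables them)
ALTER INDEX IX_SalesToProductsFK ON dbo.Sales REBUILD;

-- Step 9: re-enable the FKs WITH CHECK so they're trusted again
ALTER TABLE dbo.Sales WITH CHECK CHECK CONSTRAINT SalesToCustomersFK;

-- Step 10: open the database back up
ALTER DATABASE SalesDBGUIDs SET MULTI_USER;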

And, there are certainly derivatives of this, but the long story short is that it's going to be a painful process. And, I know some of you have work to do… so I'll end this post here as well as give you a few links to help you continue learning about these issues. The more you know about SQL Server, the better your database design will be and the longer your database will stay healthy and scalable.

I've spoken about this in many posts and many of our SQLMag Q&A posts.

Can I actually end this post with – enjoy! (remember, I'm just the messenger!! :)
kt