It’s been quite a while since I wrote a blog post on a flight, but I happened to be playing with a CTP-6 VPC on the way down to Orlando and thought I’d do the first of a few posts on a cool feature of SQL Server 2008: Sparse Columns.
 
One problem in database schema design is how to store heterogeneous data types with many properties efficiently. Here’s an example (contrived) scenario: consider a document repository that can store up to 50 different kinds of documents, with a few common properties (like document type, document name, and last modification time) and 20 totally different attributes for each document type. The document repository needs to store the common fields, plus the per-document-type attributes for each document.


[Edit: I had several comments (thank you) that this example is too contrived and that normalization could give some benefit to it. OK, the example is *very* simple, to illustrate the concepts involved. Now imagine the same example with thousands of document types and thousands of user-defined attributes per document: normalization is no longer applicable. This is the SharePoint Server scenario that drives this feature.]
 
What are some of the options for this in SQL Server 2005?


Single table
Define a table with 1000+ columns to allow all the document types to be stored in a single table. The first 20 columns could be for the common properties, and each subsequent set of 20 columns stores the attributes for a single document type (e.g. columns 21–40 store the attributes for documents of type 1, columns 41–60 store the attributes for documents of type 2, and so on).
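As a sketch, such a table might look like the following (all table and column names here are invented for illustration):

    CREATE TABLE DocRepository (
        doc_id        INT IDENTITY PRIMARY KEY,
        doc_type      INT NOT NULL,             -- common property
        doc_name      NVARCHAR(100) NOT NULL,   -- common property
        last_modified DATETIME NOT NULL,        -- common property
        -- ... the remaining common columns ...
        type1_attr1   SQL_VARIANT NULL,         -- first of 20 attributes for document type 1
        -- ... type1_attr2 through type1_attr20, then the same for each type ...
        type2_attr1   SQL_VARIANT NULL          -- first of 20 attributes for document type 2
        -- ... and so on for all 50 document types, approaching the 1024-column limit
    );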


Comments on this architecture:



  • There is a huge amount of wasted space in each table row, as a maximum of only 40 columns (the common fields plus the per-document-type attributes) will have values in each 1000+ column record. Even if all the attributes are stored as nullable variable-length columns (e.g. SQL_VARIANT), there’s still a minimum of one bit of storage required per column (for the null bitmap entry). There’s also the CPU overhead of having to crack the SQL_VARIANT values, and the storage overhead of having them in the first place.
  • The 8060-byte record size limit effectively limits the number and data types of columns that can be defined per record. You could easily have 1000 4-byte INT columns per record, for instance, but combinations of wider data types become tricky.
  • There is a 1024-column limit per table. This puts a hard stop on the number of document types we can store in our example.
  • Efficient indexes are impossible. Even if an index is defined to contain only the columns representing the attributes for a particular document type, it would still have a row for every document, regardless of the document type.

Vertical partitioning
Vertically partition the document repository such that each document type has its own table.


Comments on this architecture:



  • This allows an effectively unlimited number of document types to be supported, with a larger number of attributes for each document type, and with much more efficient indexes.
  • However, any operation that needs to work over the entire document repository has to query all the per-document-type tables and combine the results (e.g. selecting all documents with a last modification date in the last 7 days); see the sketch below.
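For instance, continuing with invented names, the last-7-days query has to be written something like this:

    SELECT doc_name, last_modified
        FROM DocType1
        WHERE last_modified > DATEADD(DAY, -7, GETDATE())
    UNION ALL
    SELECT doc_name, last_modified
        FROM DocType2
        WHERE last_modified > DATEADD(DAY, -7, GETDATE())
    -- ... and a UNION ALL branch for each of the remaining 48 document-type tables
    ;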

Property bag
Use a table where the per-document-type properties are stored in a LOB value (sometimes called a property bag).


Comments on this architecture:



  • This also allows an effectively unlimited number of document types and attributes.
  • Accessing attributes is very slow and expensive, requiring the LOB value to be read and parsed to locate each attribute at its offset.
  • Indexing over attributes is going to be very expensive, requiring a computed column (to extract the attribute from the LOB value) for each attribute to be indexed; see the sketch below.
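As a rough sketch (the byte layout and all names here are assumptions for illustration only), extracting and indexing a single attribute from a binary property bag might look like this:

    CREATE TABLE DocRepositoryBag (
        doc_id        INT IDENTITY PRIMARY KEY,
        doc_type      INT NOT NULL,
        doc_name      NVARCHAR(100) NOT NULL,
        last_modified DATETIME NOT NULL,
        props         VARBINARY(MAX) NULL,  -- the property bag
        -- assume attribute 1 of document type 1 lives in the first 4 bytes of the bag
        type1_attr1   AS CAST(SUBSTRING(props, 1, 4) AS INT) PERSISTED
    );
    -- a computed column and index like this is needed per attribute to be indexed
    CREATE NONCLUSTERED INDEX IX_type1_attr1 ON DocRepositoryBag (type1_attr1);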

XML
Define an XML column which effectively acts as a property bag.


Comments on this architecture:



  • This also allows an effectively unlimited number of document types and attributes.
  • Accessing attributes is faster than using a LOB property bag but slower than regular columns, and requires XQuery operations.
  • Indexing is possible using XML indexes, but they’re very space-inefficient (a primary XML index shreds the entire XML column, and the XML column itself remains); see the sketch below.
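As an illustration (again, all names are invented), storing, querying, and indexing an XML property bag might look like this:

    CREATE TABLE DocRepositoryXml (
        doc_id        INT IDENTITY PRIMARY KEY,
        doc_type      INT NOT NULL,
        doc_name      NVARCHAR(100) NOT NULL,
        last_modified DATETIME NOT NULL,
        props         XML NULL  -- the property bag
    );
    -- accessing an attribute requires an XQuery call
    SELECT doc_name,
           props.value('(/attrs/author)[1]', 'NVARCHAR(100)') AS author
        FROM DocRepositoryXml;
    -- the primary XML index shreds every XML value into an internal node table
    CREATE PRIMARY XML INDEX IX_props ON DocRepositoryXml (props);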

Basically – there’s no good way to do it in SQL Server 2005 or before.
 
Enter SQL Server 2008 with sparse columns.
 
A sparse column is a nullable column that’s declared with the SPARSE attribute. This means that when the value is null, it takes zero space (not even the single bit for the null bitmap entry is required), and this works even for fixed-length columns! The trade-off is that non-null sparse columns take an extra 4 bytes of space over regular columns. Here’s an example for INT columns:



  • Non-null regular INT column: 4 bytes
  • Null regular INT column: 4 bytes
  • Non-null sparse INT column: 8 bytes
  • Null sparse INT column: 0 bytes
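Here’s a minimal declaration sketch (table and column names invented) showing the syntax:

    CREATE TABLE SparseDemo (
        id           INT IDENTITY PRIMARY KEY,
        regular_attr INT NULL,          -- always takes 4 bytes, null or not
        sparse_attr  INT SPARSE NULL    -- 0 bytes when null, 8 bytes when not
    );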

Books Online has a table showing the potential space savings for the various data types using sparse columns. You can get to this table by looking for ‘Sparse Columns’ in the SQL Server 2008 Books Online index.
 
In my document repository example above, declaring each of the per-document-type attributes as SPARSE would allow each record to store only the attributes needed for the document it represents, rather than every defined column (a huge space saving!).
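The single-table design from above might then become (illustrative names again):

    CREATE TABLE DocRepositorySparse (
        doc_id        INT IDENTITY PRIMARY KEY,
        doc_type      INT NOT NULL,
        doc_name      NVARCHAR(100) NOT NULL,
        last_modified DATETIME NOT NULL,
        -- ... the remaining common columns ...
        type1_attr1   INT SPARSE NULL,       -- only takes space in type-1 document rows
        type2_attr1   DATETIME SPARSE NULL   -- only takes space in type-2 document rows
        -- ... and so on for every per-document-type attribute
    );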
 
But what about the limit on the number of columns? Well, SQL Server 2008 is also bumping the number of columns per table to 30000 (see Kimberly’s blog post from yesterday) – although not until the next CTP is available.
 
But how would indexing work? SQL Server 2008 has another new feature that helps here – filtered indexes. Conor’s blogged about these recently (see here).
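For example, continuing the sketch above, a filtered index over one document type’s attribute only contains rows for documents of that type:

    -- only rows where type1_attr1 is non-null (i.e. type-1 documents) appear in the index
    CREATE NONCLUSTERED INDEX IX_type1_attr1
        ON DocRepositorySparse (type1_attr1)
        WHERE type1_attr1 IS NOT NULL;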
 
Over the next few weeks I’ll post more on sparse columns – using them, comparisons with other schemas, and anything else I can come up with.