SQL Server 2008: Sparse Columns

 

It’s been quite a while since I wrote a blog post on a flight, but I happened to be playing with a CTP-6 VPC on the way down to Orlando and thought I’d do the first of a few posts on a cool feature of SQL Server 2008 – Sparse Columns.

One problem in database schema design is how to store heterogeneous data types with many properties efficiently. Here’s an example (contrived) scenario – consider a document repository that can store up to 50 different kinds of documents, with a few common properties (like document type, document name, and last modification time) and 20 totally different attributes for each document type. The document repository needs to store the common fields, plus the per-document-type attributes for each document.

[Edit: I had several comments – thank you – that this example is too contrived and that normalization could give some benefit to it – ok, the example is *very* simple, to illustrate the concepts involved. Now imagine the same example with thousands of document types and thousands of user-defined attributes per document – normalization is no longer applicable. This is the Sharepoint Server scenario that drives this feature.]

What are some of the options for this in SQL Server 2005?

Single table
Define a table with 1000+ columns to allow all the document types to be stored in a single table. The first 20 columns could be for the common properties, and then each subsequent set of columns stores the attributes for a single document type (e.g. columns 21-40 store the attributes for documents of type 1, columns 41-60 store the attributes for documents of type 2, and so on).
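
A rough sketch of what that single table might look like (the column names here are hypothetical, purely to illustrate the layout):

  CREATE TABLE DocumentRepository (
      DocID           INT IDENTITY PRIMARY KEY,
      DocType         INT NOT NULL,
      DocName         NVARCHAR (100) NOT NULL,
      LastModified    DATETIME NOT NULL,
      -- ... any other common properties ...
      Type1Attribute1 SQL_VARIANT NULL,
      Type1Attribute2 SQL_VARIANT NULL,
      -- ... 18 more columns for document type 1 ...
      Type2Attribute1 SQL_VARIANT NULL,
      Type2Attribute2 SQL_VARIANT NULL
      -- ... and so on: 20 columns for each of the 50 document types ...
  );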

Comments on this architecture:

  • There is a huge amount of wasted space for each table row – as only a maximum of 40 columns (common fields plus per-document-type attributes) will have values in each 1000+ column record. Even if all the attributes are stored as nullable variable-length columns (e.g. SQL_VARIANT) then there’s still a minimum of 1 bit of storage required per column (for the null bitmap entry). There’s also the CPU overhead of having to crack the SQL_VARIANT columns, and the storage overhead of having them in the first place.
  • The 8060-byte record size limit effectively limits the number and data types of columns that can be defined per record. You could easily have 1000 4-byte INT columns per record, for instance, but combinations of wider data types become tricky.
  • There is a 1024-column limit per table. This puts a hard stop on the number of document types we can store in our example.
  • Efficient indexes are impossible. Even if an index is defined to only contain the columns representing the attributes for a particular document type, it would still have a row for every document, regardless of the document type.

Vertically partition
Vertically partition the document repository such that each document type has its own table.
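
A minimal sketch of the vertically-partitioned design, with one table per document type (names again hypothetical):

  CREATE TABLE Type1Documents (
      DocID        INT IDENTITY PRIMARY KEY,
      DocName      NVARCHAR (100) NOT NULL,
      LastModified DATETIME NOT NULL,
      Attribute1   INT NULL,
      Attribute2   NVARCHAR (50) NULL
      -- ... the remaining 18 type-1 attributes ...
  );

  -- ... plus a similar table for each of the other 49 document types ...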

Comments on this architecture:

  • This allows an effectively unlimited number of document types to be supported, with a larger number of attributes for each document type, and with much more efficient indexes.
  • However, any operation that needs to operate over the entire document repository has to touch every one of the per-document-type tables (e.g. select all documents with a last modification date in the last 7 days – see the sketch after this list).
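
For example, the ‘last 7 days’ query above ends up having to visit every per-document-type table – something like this, against the hypothetical tables from the previous sketch:

  SELECT DocID, DocName, LastModified
  FROM Type1Documents
  WHERE LastModified > DATEADD (DAY, -7, GETDATE ())
  UNION ALL
  SELECT DocID, DocName, LastModified
  FROM Type2Documents
  WHERE LastModified > DATEADD (DAY, -7, GETDATE ())
  -- ... UNION ALL repeated for all 50 document-type tables ...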

Property bag
Use a table where the per-document-type properties are stored in a LOB value (sometimes called a property bag).

Comments on this architecture:

  • This also allows an effectively unlimited number of document types and attributes.
  • Accessing attributes is very slow and expensive, requiring a read at some offset inside the LOB value.
  • Indexing over attributes is going to be very expensive – requiring a computed column (to extract the attribute from the LOB value) for each attribute to be indexed. See the sketch after this list.
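
As a rough illustration (the table, the fixed-offset layout, and the column names are all hypothetical assumptions – a real property bag would need a proper extraction scheme):

  CREATE TABLE Documents (
      DocID        INT IDENTITY PRIMARY KEY,
      DocType      INT NOT NULL,
      DocName      NVARCHAR (100) NOT NULL,
      LastModified DATETIME NOT NULL,
      PropertyBag  VARBINARY (MAX) NULL,
      -- Assume the first type-specific attribute is a 4-byte value stored at a
      -- known offset in the property bag, so it can be extracted and persisted.
      Attribute1 AS CONVERT (INT, SUBSTRING (PropertyBag, 1, 4)) PERSISTED
  );

  -- One such computed column and index is needed per attribute to be indexed.
  CREATE NONCLUSTERED INDEX Attribute1Index ON Documents (Attribute1);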

XML
Define an XML column which effectively acts as a property bag.

Comments on this architecture:

  • This also allows an effectively unlimited number of document types and attributes.
  • Accessing attributes is faster than using a LOB property bag but slower than regular columns, and requires XQuery operations.
  • Indexing is possible using XML indexes, but they’re very space-inefficient (a primary XML index shreds the entire XML column into an internal table, and the original XML column remains as well). See the sketch after this list.
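
A minimal sketch of the XML approach (element names hypothetical), with a primary XML index and an XQuery-based attribute lookup:

  CREATE TABLE Documents (
      DocID        INT IDENTITY PRIMARY KEY,
      DocType      INT NOT NULL,
      DocName      NVARCHAR (100) NOT NULL,
      LastModified DATETIME NOT NULL,
      Properties   XML NULL
  );

  -- The primary XML index shreds the whole XML column into an internal table.
  CREATE PRIMARY XML INDEX PropertiesXmlIndex ON Documents (Properties);

  -- Retrieving a single attribute requires an XQuery method call.
  SELECT DocID, Properties.value('(/doc/attribute1)[1]', 'INT') AS Attribute1
  FROM Documents
  WHERE DocType = 1;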

Basically – there’s no good way to do it in SQL Server 2005 or before.

Enter SQL Server 2008 with sparse columns.

A sparse column is a nullable column that’s declared with the SPARSE attribute. This means that when the value is null, it takes zero space – not even the single bit for the null bitmap entry is required – and this works even for fixed-length columns! The trade-off is that non-null sparse columns take an extra 4 bytes of space over regular columns. Here’s an example for INT columns:

  • Non-null regular INT column: 4 bytes
  • Null regular INT column: 4 bytes
  • Non-null sparse INT column: 8 bytes
  • Null sparse INT column: 0 bytes

Books Online has a table showing the potential space savings for the various data types using sparse columns. You can get to this table by looking for ‘Sparse Columns’ in the SQL Server 2008 Books Online index.

In my document repository example above, declaring each of the per-document-type attributes as SPARSE would allow each record to store only the attributes needed for the document it represents, rather than every defined column – a huge space saving!
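
For instance, the hypothetical single table from earlier could have all of its per-document-type attribute columns declared SPARSE:

  CREATE TABLE DocumentRepository (
      DocID           INT IDENTITY PRIMARY KEY,
      DocType         INT NOT NULL,
      DocName         NVARCHAR (100) NOT NULL,
      LastModified    DATETIME NOT NULL,
      Type1Attribute1 INT SPARSE NULL,
      Type1Attribute2 NVARCHAR (50) SPARSE NULL,
      -- ... the remaining type-1 attributes, all declared SPARSE ...
      Type2Attribute1 INT SPARSE NULL,
      Type2Attribute2 NVARCHAR (50) SPARSE NULL
      -- ... and so on for the other document types ...
  );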

But what about the limit on the number of columns? Well, SQL Server 2008 is also bumping the number of columns per table to 30,000 (see Kimberly’s blog post from yesterday) – although not until the next CTP is available.

But how would indexing work? SQL Server 2008 has another new feature that helps here – filtered indexes. Conor’s blogged about these recently (see here).
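
As a sketch (continuing with the hypothetical sparse-column table above), a filtered index over one attribute column only contains rows where that column actually has a value, so it stays small:

  -- Only index the rows that actually have a value for this attribute.
  CREATE NONCLUSTERED INDEX Type1Attribute1Index
  ON DocumentRepository (Type1Attribute1)
  WHERE Type1Attribute1 IS NOT NULL;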

Over the next few weeks I’ll post more on sparse columns – using them, comparisons with other schemas, and anything else I can come up with.

 

2 thoughts on “SQL Server 2008: Sparse Columns”

  1. Calling this scenario "contrived" is appropriate. I missed the part where it said that normalization was forbidden… What’s wrong with storing the common properties (the ones that will make sense for all types) in one table, and the remaining properties in separate tables? You discuss this solution but then dismiss it with "any operation that needs to operate over the entire document repository has to join all the tables". That’s only if you don’t normalize the design at all. Even queries that need properties specific to the types can use efficient joins where all the irrelevant tables are weeded out by a type field (or you could use dynamic SQL).

    Even if sparse columns allow you to store such a design in a single table, it’s hardly desirable — it’s nice if 1000+ columns don’t take up storage space in rows, but just retrieving the table’s metadata would become a chore, and I shudder to think what ORM tools would do with it. Column naming could also get quite contrived.

    Now, sparse columns and the increase on the (quite reasonable!) column limit make me think that there’s apparently a desire for extremely unnormalized solutions like this, and I’m by no means a DBA with years of experience (just a lowly application programmer) but I would definitely advise people to look hard at their problems if they’re going for solutions like this.

  2. Jeroen:
    I’m no DBA either, but Paul mentioned the one scenario I know where normalization is dumped through the window due to performance requirements – Sharepoint Server. Sharepoint can store millions of documents in the database, and each document can have a different set of properties. Each document library and each content-type has its own set of properties, and we can have millions of doclibs, as created by users. Creating a table for each is simply unfeasible. So the Sharepoint solution (as far as I could get by browsing the database – no internal info here) is to have the shared properties as named columns, and any doclib-specific properties for a given document stored in a set of extended attribute columns, named by type: ‘integer1’, ‘integer2’, ‘text1’, etc.
    I was appalled when I first saw it, but have since understood the necessity. Normalization takes second place to performance when we’re talking about these scopes.
