SQL Server 2008: Sparse columns and XML COLUMN_SET

We’re sitting here in St. Pete Beach in Florida visiting some of Kimberly’s family and having some sun-kissed R&R before heading back up to Seattle on Wednesday, and I thought I’d get the next post in my sparse columns mini-series out. Before I start though, Kimberly just posted the resources for the Accidental DBA class we taught at SQL Connections last week and in Iceland at the end of March – see here.

In my first post on sparse columns (see here) I introduced the concepts and explained why sparse columns are a useful feature. In this post I’m going to use the example I gave – a document repository with 50 document types and 20 unique attributes per document type. Yes, it’s a contrived example, but scale it up by a factor of 100+ (think SharePoint Server) and methods like normalization no longer apply.

I’m using a CTP-6 VPC on a Lenovo T60P laptop with 4GB, a dual-core 2.2GHz CPU, and the VPC is running off a 6200RPM drive. Your mileage may vary for run-times of the example scripts. The VPC is the one we gave out at SQL Connections and Iceland, and you can download the scripts and a VPC (maybe only CTP-5) from the Microsoft JumpStart site (see here for details).

The first test I’ll do is just creating the schema necessary to store the 1000+ columns of information in the test scenario. I’ll do one with sparse columns and one without:

-- Create two tables, one with 1000 columns and one with 1000 columns but 997 sparse.
CREATE TABLE TableWithoutSparseColumns (
   DocID INT IDENTITY, DocName VARCHAR (100), DocType INT,
   c0004 INT NULL, c0005 INT NULL, c0006 INT NULL, c0007 INT NULL, c0008 INT NULL, c0009 INT NULL,
   -- ... columns c0010 through c0993 elided ...
   c0994 INT NULL, c0995 INT NULL, c0996 INT NULL, c0997 INT NULL, c0998 INT NULL, c0999 INT NULL,
   c1000 INT NULL);

CREATE TABLE TableWithSparseColumns (
   DocID INT IDENTITY, DocName VARCHAR (100), DocType INT,
   -- ... columns c0004 through c0999 (all INT SPARSE NULL) elided ...
   c1000 INT SPARSE NULL);

I won’t list all the column names for the sake of brevity. Next I’ll insert some values into each table (the same values in each table):
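Of course, nobody wants to type 997 column definitions by hand either. Here's one quick way to generate the wide CREATE TABLE statement with dynamic SQL – just a sketch, assuming the same c0004–c1000 naming scheme as above:

```sql
-- Sketch: generate the 1000-column sparse table instead of typing it out.
DECLARE @sql NVARCHAR (MAX), @i INT;

SELECT @sql = N'CREATE TABLE TableWithSparseColumns (' +
    N'DocID INT IDENTITY, DocName VARCHAR (100), DocType INT', @i = 4;

-- Append c0004 through c1000, each as a sparse INT column.
WHILE @i <= 1000
BEGIN
    SELECT @sql = @sql + N', c' +
        RIGHT (N'000' + CAST (@i AS NVARCHAR (4)), 4) + N' INT SPARSE NULL',
        @i = @i + 1;
END;

SELECT @sql = @sql + N');';
EXEC (@sql);
```

The same loop with `SPARSE` removed builds the non-sparse table.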

-- Insert a few rows in each
INSERT INTO TableWithSparseColumns (DocName, DocType) VALUES ('aaaa', 1);
INSERT INTO TableWithSparseColumns (DocName, DocType, c0945) VALUES ('bbbb', 2, 46);
INSERT INTO TableWithSparseColumns (DocName, DocType, c0334) VALUES ('cccc', 3, 44);
INSERT INTO TableWithSparseColumns (DocName, DocType, c0233, c0234) VALUES ('dddd', 4, 12, 34);
INSERT INTO TableWithSparseColumns (DocName, DocType, c0233, c0234, c0235, c0236) VALUES ('eeee', 4, 12, 34, 46, 66);

INSERT INTO TableWithoutSparseColumns (DocName, DocType) VALUES ('aaaa', 1);
INSERT INTO TableWithoutSparseColumns (DocName, DocType, c0945) VALUES ('bbbb', 2, 46);
INSERT INTO TableWithoutSparseColumns (DocName, DocType, c0334) VALUES ('cccc', 3, 44);
INSERT INTO TableWithoutSparseColumns (DocName, DocType, c0233, c0234) VALUES ('dddd', 4, 12, 34);
INSERT INTO TableWithoutSparseColumns (DocName, DocType, c0233, c0234, c0235, c0236) VALUES ('eeee', 4, 12, 34, 46, 66);

Now let’s see how big each table is:

-- Now let's see how big the rows are
SELECT [avg_record_size_in_bytes], [page_count] FROM sys.dm_db_index_physical_stats (
   DB_ID ('SparseColumnsTest'), OBJECT_ID ('TableWithoutSparseColumns'), NULL, NULL, 'DETAILED');

SELECT [avg_record_size_in_bytes], [page_count] FROM sys.dm_db_index_physical_stats (
   DB_ID ('SparseColumnsTest'), OBJECT_ID ('TableWithSparseColumns'), NULL, NULL, 'DETAILED');

avg_record_size_in_bytes page_count
------------------------ ----------
4135                     5

(1 row(s) affected)

avg_record_size_in_bytes page_count
------------------------ ----------
40.6                     1

(1 row(s) affected)

OK – so that’s not a huge difference in page count (because we’ve only got 5 rows), but it’s a *massive* difference in average record size. Scaled up to hundreds of thousands or millions of records, the space savings will be enormous.

Now let’s try selecting the data back using results-to-grid mode and a simple SELECT * statement. It takes 20 seconds to return – solely because the client still has to retrieve the metadata for 1000+ columns. Even though the columns are defined as SPARSE, they still show up in a SELECT * resultset, and that makes extracting the non-NULL values pretty difficult…

Time for another new feature – column sets. There’s a new column type available for use with sparse columns – an XML COLUMN_SET. This is a column that is only materialized when selected, and returns all the non-NULL sparse columns in a row as an XML BLOB. It also changes the behavior of a SELECT * operation – removing all the sparse columns from the resultset and replacing them with itself, representing all the non-NULL sparse columns. Redefining our TableWithSparseColumns to have an XML COLUMN_SET column called SparseColumns (using the syntax 'SparseColumns XML COLUMN_SET FOR ALL_SPARSE_COLUMNS'), and re-inserting the same values, then gives the following results for a SELECT * operation:
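Spelled out, the redefined table looks like this (with the middle columns elided again, as before):

```sql
-- Redefine the table with an XML column set covering all the sparse columns.
CREATE TABLE TableWithSparseColumns (
   DocID INT IDENTITY, DocName VARCHAR (100), DocType INT,
   c0004 INT SPARSE NULL,
   -- ... columns c0005 through c0999 (all INT SPARSE NULL) elided ...
   c1000 INT SPARSE NULL,
   -- The column set: materializes all non-NULL sparse columns as XML.
   SparseColumns XML COLUMN_SET FOR ALL_SPARSE_COLUMNS);
```

Note that the column set has to be declared when the table is created – you can’t add one to a table that already has sparse columns.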

DocID   DocName   DocType   SparseColumns
------- --------- --------- ---------------
1       aaaa      1         NULL
2       bbbb      2         <c0945>46</c0945>
3       cccc      3         <c0334>44</c0334>
4       dddd      4         <c0233>12</c0233><c0234>34</c0234>
5       eeee      4         <c0233>12</c0233><c0234>34</c0234><c0235>46</c0235><c0236>66</c0236>

Pretty cool – and it returns virtually instantaneously (obviously scaling up to hundreds of thousands or millions of rows would take longer, due to the time needed to read the pages into the buffer pool). One downside is that the XML blob only returns the column name and value – not the datatype – but if your application can cope with that, then not having to wade through hundreds (or thousands by RTM) of NULL column values is great.
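The individual sparse columns are still fully usable by name, by the way – SELECT * just stops listing them once a column set exists. And the column set itself is updatable: inserting XML into it populates the underlying sparse columns. A quick sketch, using the same table and values as above:

```sql
-- Sparse columns can still be referenced explicitly, column set or not.
SELECT DocID, c0233, c0234 FROM TableWithSparseColumns WHERE DocType = 4;

-- The column set is updatable too - inserting XML into it sets the
-- corresponding sparse columns underneath.
INSERT INTO TableWithSparseColumns (DocName, DocType, SparseColumns)
VALUES ('ffff', 5, '<c0233>99</c0233><c0236>17</c0236>');
```

After that insert, selecting c0233 and c0236 for the new row returns 99 and 17, exactly as if they’d been set directly.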

Next time I’ll discuss the internals of how sparse columns are stored.

Feedback on SQL Connections?

Today’s the final day of SQL Connections proper (tomorrow is post-conference workshops). We’ve really enjoyed the conference (as usual!) and it’s been a blast answering everyone’s questions. We’ll be posting resources over the next week or so (most likely on Kimberly’s blog) and we did a video interview with our good friends Richard and Carl (from DotNetRocks) which should be available for download soon.

We spoke with many conference attendees and one request was the ability to give online feedback. While the conference itself doesn’t have that ability (yet), we’d love to hear any feedback you have about the conference – the good, the bad, and the ugly! As we’re the co-chairs of the SQL conference, all the feedback comes to us anyway, and we’ll pass on any non-SQL feedback to the Connections organizers.

So – send feedback email to me with anything you’d like to share. If it’s about a specific session/speaker/topic, please try to give as much detail as you can. All feedback will be in strictest confidence.


Conference Questions Pot-Pourri #7: How to create Agent alerts

Many times I've been asked to do a blog post about creating Agent alerts, and given that today I demo'd it as part of our Accidental DBA workshop at Connections, it seemed a good time to do the blog post too!

I demo this in the context of alerting a DBA when an 823 or 824 IO error occurs. One of my early blog posts (see here) explains what these are, as well as providing a corrupt database that you can use to see these errors happening.

The idea is that I want to know as soon as an IO error occurs so I can start recovering and take preventative action to stop it happening again. I don't want to rely on users telling me when a query hits an IO error, and I don't want to have to scan the SQL error logs to find them. So I'm going to create an alert.

The first step is to fire up Management Studio and make sure SQL Server Agent is running. Next we need to make sure there's actually an Operator defined – so the new alert has someone to actually alert! – using the New Operator wizard (see below for how to get there).

In the New Operator Wizard that appears, I've created an operator named 'SysAdmin'. There are three notification options you can use – email, net send, and pager. I've set up SysAdmin to use net send to my local machine. You need to make sure the Messenger service is enabled, otherwise net send will not work. Also, be aware that net send will NOT work unless the machine has a network connection – even if the net send source and destination are the same machine! Given the various issues with net send, it's better to use email or pager alerts – but for the purposes of this blog post it's the easiest option.

Now let's create the new alert – using the New Alert wizard (see the below for how to get there).

In the New Alert Wizard that appears, I've created an alert named 'IO Errors' for all severity 24 errors on all databases. Below is a portion of the General tab of the wizard showing these settings:

I also need to specify what happens. In the Response tab of the wizard I've set the SysAdmin operator to be notified using net send. Again, see below.

In the Options tab I've checked the box to include the error text in the net send.
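If you'd rather script all of this than click through the wizards, the msdb system procedures do the same job. Here's a sketch of the setup above – the operator name, alert name, and severity match my example, and the net send address ('MYMACHINE') is a placeholder for your own machine name:

```sql
USE msdb;
GO
-- Operator that will receive the net send (address is a placeholder).
EXEC dbo.sp_add_operator @name = N'SysAdmin',
    @netsend_address = N'MYMACHINE';
GO
-- Alert on all severity 24 errors in all databases, including the
-- error text in the notification (4 = net send).
EXEC dbo.sp_add_alert @name = N'IO Errors', @severity = 24,
    @include_event_description_in = 4;
GO
-- Wire the alert to the operator via net send (4 = net send).
EXEC dbo.sp_add_notification @alert_name = N'IO Errors',
    @operator_name = N'SysAdmin', @notification_method = 4;
```

This is handy when you want to deploy the same alerts across a bunch of servers.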

Now let's test it. Using the database called 'broken' that I provide as an example (see here), I'll force an IO error to occur. In my query window I get:

SELECT * FROM broken..brokentable;

Msg 824, Level 24, State 2, Line 1

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x7232c940; actual: 0x720e4940). It occurred during a read of page (1:143) in database ID 10 at offset 0x0000000011e000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\broken.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

And a few seconds later I get the net send:

Pretty cool!

You can also use the WMI Provider to do this – see Creating a SQL Server Agent Alert by Using the WMI Provider for Server Events.