So I was playing today with the sparse column implementation in the latest SQL 2008 CTP, and I was specifically looking to see how this thing would work with the query processor. I wanted to know how much information is exposed for the QP to make smart decisions when this functionality is used in complex queries with cost-based plan choices. If the QP doesn’t have the information, then sometimes the query plans will be sub-optimal because, well, garbage-in garbage-out. While the SQL Server QP does a tremendous job at making complex plan choices compared to some of the other commercial alternatives, there are still limits on what the Optimizer can model in a reasonable amount of time. As such, there are seams where the product tends to not work as well as one would hope. This will always be true. While I suppose that will also keep me employable, it is useful to understand those limits because it will help you know where to look or, if it’s really hard, when to ask for help.
The SQL Server QP knows a couple of things about the data stored in a table in the storage engine:
1. How many physical pages it uses
2. How many rows it has in it (approximately)
3. Single-column statistics over a sample of the data
4. A basic notion of column interdependence, used when estimating queries with multiple scalar predicates.
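As a rough sketch of item 3 (this assumes auto_create_statistics is enabled, and uses a hypothetical table t with an int column c), a scalar predicate will trigger creation of a single-column statistics object, which you can then enumerate from the catalog views:

```sql
-- hypothetical table; any filter on c will auto-create stats on it
create table t(c int)
insert into t values (1)
select count(*) from t where c > 0

-- enumerate the statistics objects the QP can draw on
select s.name as stats_name, c.name as column_name
from sys.stats as s
join sys.stats_columns as sc
    on s.object_id = sc.object_id and s.stats_id = sc.stats_id
join sys.columns as c
    on sc.object_id = c.object_id and sc.column_id = c.column_id
where s.object_id = object_id(N't')
```

DBCC SHOW_STATISTICS against one of those statistics names will show the sampled histogram itself.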
From 1 and 2 it can derive the average row width. That’s useful for determining things like “how big will my sort be” if the query needs to sort. That’s a good thing – it leads to reasonable estimates for many choices in the QP.
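You can see two of those inputs directly and derive the average row width yourself. This sketch assumes the sys.dm_db_partition_stats DMV (available since SQL Server 2005) and 8192-byte pages:

```sql
-- pages and rows per user table, plus a derived average row width
select o.name as table_name,
       sum(ps.used_page_count) as used_pages,
       sum(ps.row_count) as row_count,
       sum(ps.used_page_count) * 8192.0
           / nullif(sum(ps.row_count), 0) as approx_bytes_per_row
from sys.dm_db_partition_stats as ps
join sys.objects as o on ps.object_id = o.object_id
where o.type = N'U'
group by o.name
```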
So let’s add sparse columns into the mix. Sparse columns are useful for data with lots of NULLs. Often this arises from a problem that doesn’t fit a traditional third-normal-form design, or from an application that wasn’t treated as a database problem early enough in its lifecycle. The point is that commercial database systems have a sweet spot: data sets with known (and small) sets of columns that can be stored in tables. Query processors offer a TON of expressiveness over data in this shape because it is better supported than any other format.
None of this really means that your problem is going to fit easily into a nice third-normal-form system. Often there are legacy or performance concerns that push an application away from that sweet spot. Over time, various technologies have tried to bridge that gap (property tables, XML, and object-relational mappings). Each of them has its own reasons to exist, and I don’t want to get into them in depth in this post. I’m going to talk about how the QP deals with these from a modeling perspective.
I built two examples to explore how SQL Server 2008 reasons about sparse columns. One example creates lots of traditional, nullable float columns while the other is exactly the same except that it uses the sparse attribute.
A few things I learned immediately:
1. Sparse columns don’t change the maximum number of columns you can create in a table. On the surface, this seems unfortunate, since it will limit the kinds of applications that can use the feature.
2. It does seem to use less space per row. This isn’t hard, as the row format for SQL Server has a null bitmap and also needs 2 bytes per column to store the variable offset pointers.
create table sp1(aaa int)
create table sp2(aaa int)
go

declare @i int
set @i = 0
while (@i < 990)
begin
    declare @sql nvarchar(400)
    declare @s nvarchar(20)
    set @s = @i
    set @sql = 'alter table sp1 add col' + @s + ' float sparse'
    exec sp_executesql @sql
    set @i = @i + 1
end
go

declare @i int
set @i = 0
while (@i < 990)
begin
    declare @sql nvarchar(400)
    declare @s nvarchar(20)
    set @s = @i
    set @sql = 'alter table sp2 add col' + @s + ' float'
    exec sp_executesql @sql
    set @i = @i + 1
end
go
declare @i int
set @i = 0
while @i < 20000
begin
    insert into sp1(col2) values (123.4)
    set @i = @i + 1
end
go

declare @i int
set @i = 0
while @i < 20000
begin
    insert into sp2(col2) values (123.4)
    set @i = @i + 1
end
go
If we run "set statistics io on" and then run "select * from sp1" and "select * from sp2", we'd hope to see some difference in IOs:
(20000 row(s) affected)
Table 'sp1'. Scan count 1, logical reads 86, physical reads 0, read-ahead reads 80, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(20000 row(s) affected)
Table 'sp2'. Scan count 1, logical reads 20000, physical reads 1, read-ahead reads 19978, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Well, that’s good – the sparse format on largely sparse data saves space. We can confirm that with a quick look into the system tables:
SELECT o.name AS table_name, au.type_desc, au.used_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.partition_id
JOIN sys.objects AS o ON p.object_id = o.object_id
WHERE o.name in (N'sp1', N'sp2')
table_name  type_desc          used_pages
----------  -----------------  ----------
sp1         IN_ROW_DATA        87
sp1         ROW_OVERFLOW_DATA  0
sp2         IN_ROW_DATA        20001

(3 row(s) affected)
We’ve now confirmed that we actually do have fewer pages. This is also good. It also explains the IO numbers above: float is a fixed-length type, so even when NULL each of the 990 non-sparse columns takes its full 8 bytes, making each sp2 row roughly 7,920 bytes – nearly a full 8 KB page – which is why the non-sparse table ends up storing one row per page.
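As a side note, the catalog also records which columns are stored sparse; in SQL Server 2008, sys.columns exposes an is_sparse flag we can check:

```sql
-- confirm the sparse attribute actually took on the sp1 columns
select name, is_sparse
from sys.columns
where object_id = object_id(N'sp1')
  and name in (N'col0', N'col1', N'col2')
```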
Now let’s see how far into the QP this extends. Does the QP model the costs for these two queries differently?
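One way to compare the estimates is to capture the textual showplan, where the estimated cost of each plan appears in the TotalSubtreeCost column:

```sql
set showplan_all on
go
select * from sp1
select * from sp2
go
set showplan_all off
go
```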
SP1 TotalSubtreeCost: 0.08824496
SP2 TotalSubtreeCost: 14.83936
And that, my friends, is a "good thing". It means the QP actually knows the sparse table is cheaper to scan, so sparse columns will help your complex queries and not just your storage. The easy way out would have been to simply ignore the new feature in the QP's costing model, but obviously someone did the work to make sure it was costed properly.
I don’t believe that there are additional statistical structures to tell the QP which things are on/off row. This will show up in a small number of scenarios (similar to how LOB data can be on/off row). This is outside of the model for how the QP reasons about plan cost, at least from what I’ve seen from SQL 2008 and from what was publicly said about 2005.